<?xml version="1.0" standalone="yes"?> <Paper uid="W91-0115"> <Title>Handling Pragmatic Information With A Reversible Architecture</Title>
<Section position="3" start_page="119" end_page="120" type="metho"> <SectionTitle> 2. Existing Architectures For Reversible And Bi-directional Grammar </SectionTitle>
<Paragraph position="0"> Shieber proposed a uniform architecture for sentence parsing and generation based on the Earley-type deduction mechanism (Shieber, 1988). He parametrized the architecture with the initial condition, a priority function on lemmas, and a predicate expressing the concept of a successful proof. Shieber later remedied the inefficiency of the generation algorithm in his uniform architecture by introducing the concept of the semantic head (Shieber et al., 1989). Although Definite Clause Grammar (DCG) is reversible, its synthesis mode is inefficient. Dymetman and Strzalkowski approached the problem by compiling DCG into efficient analysis and synthesis programs (Dymetman et al., 1988; Strzalkowski, 1990). The compilation is realized by changing goal ordering statically. Since Shieber's, Dymetman's and Strzalkowski's architectures are based on syntactic deduction, they have difficulty handling pragmatic information.</Paragraph>
<Paragraph position="1"> Dependency propagation was suggested for parsing and generation in (Hasida et al., 1987). The idea was developed using Horn clauses similar to those of PROLOG. The word dependency denotes a state in which variables are shared by constraints. Problem solving, or parsing and generation, can be modeled by resolving the dependencies. Dependency resolution is executed by fold/unfold transformations. Dependency propagation is a very elegant mechanism for problem solving, but it seems difficult to represent syntactic, semantic and pragmatic information with such indiscrete constraints. In addition, since dependency propagation is a kind of co-routine process, programs are very hard to debug, and so constraints are difficult to stipulate.</Paragraph>
<Paragraph position="2"> Ait-Kaci's typed unification was applied to a reversible architecture (Emele and Zajac, 1990). All features in a sign are sorted and placed in hierarchical structures. Parsing and generation can be executed by rewriting the features into their most specific forms. Their mechanism depends greatly on the hierarchy of information, but for information other than syntactic information, especially pragmatic information, the hierarchy is hard to construct.</Paragraph>
<Paragraph position="3"> 3. Introduction To The New Reversible Architecture</Paragraph>
<Paragraph position="4"> 3.1. The Linguistic Objects</Paragraph>
<Paragraph position="5"> We introduce the linguistic object sign, which incorporates syntactic, semantic and pragmatic information. A sign is represented by feature structures and consists of the features phn, syn, sem and prag. Phn represents surface string information for words, phrases and sentences. Syn stands for syntactic information, such as the part of speech and subcategorization information, in the style of HPSG. HPSG inherits the fundamental properties of Generalized Phrase Structure Grammar (GPSG); that is, HPSG makes use of a set of feature-value pairs, feature constraints and unification to stipulate the grammar, instead of rewriting rules over terminal and nonterminal symbols. The major difference between HPSG and GPSG is that subcategorization information is stored in lexical entries instead of in grammar rules (Pollard et al., 1987, 1990). Sem denotes semantic information, or logical forms. Logical forms are expressed by the semantic representation language proposed by Gazdar (Gazdar et al., 1989). Since the language is a feature representation of Woods' representation (Woods, 1978), it has the advantage that it can represent quantifier scope ambiguities. It consists of the features qnt, var, rest, and body: qnt is for quantifier expressions; var is for variables bound by the qnt; rest is for restrictions on the var; and body represents the predication of the logical form. Prag delineates pragmatic information. Pragmatic conditions are not necessarily true but are held as assumptions. The uniqueness and novelty conditions on cleft sentences are instances of such conditions.</Paragraph>
<Paragraph position="6">
 phn:  &quot;It was the girl that a boy loved&quot;
 syn:  pos: verb
       subcat: subc([])
 sem:  qnt: indefinite
       var: X
       rest: arg0: X
             pred: BOY
       body: qnt: definite
             var: Y
             rest: arg0: Y
                   pred: GIRL
             body: arg0: X
                   arg1: Y
                   pred: LOVED
 prag: [novel(Y), unique(Y)]
 Figure 1: Sign</Paragraph>
<Paragraph position="7"> Phn indicates that the surface string is &quot;It was the girl that a boy loved&quot;. Syn represents that: 1) the part of speech is verb; 2) the subcategorization information is satisfied. Sem shows that: 1) the quantifier at the top level is indefinite; 2) the property of the variable X is a boy; 3) the property of the variable Y, bound by the quantifier definite, is a girl; 4) the boy loved the girl. Prag mentions that the variable Y is constrained by the novelty and uniqueness conditions.</Paragraph> </Section>
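To make the object concrete, the following is a minimal sketch of the sign in Figure 1 as a nested feature structure in Python; the encoding (dictionaries, strings for variables and predicates) is our illustrative assumption, not the paper's implementation.

    # The sign of Figure 1 as a plain nested dictionary. The feature
    # names (phn, syn, sem, prag; qnt, var, rest, body) follow the text.
    sign = {
        "phn": "It was the girl that a boy loved",
        "syn": {"pos": "verb", "subcat": ("subc", [])},
        "sem": {
            "qnt": "indefinite", "var": "X",
            "rest": {"arg0": "X", "pred": "BOY"},
            "body": {
                "qnt": "definite", "var": "Y",
                "rest": {"arg0": "Y", "pred": "GIRL"},
                "body": {"arg0": "X", "arg1": "Y", "pred": "LOVED"},
            },
        },
        "prag": [("novel", "Y"), ("unique", "Y")],
    }

    # Quantifier scope is captured by nesting: the indefinite (X, BOY)
    # outscopes the definite (Y, GIRL) in this reading.
    assert sign["sem"]["body"]["rest"]["pred"] == "GIRL"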
<Section position="4" start_page="120" end_page="121" type="metho">
<Section position="1" start_page="120" end_page="121" type="sub_section"> <SectionTitle> 3.2. The Plan Representation Of Linguistic Objects </SectionTitle>
<Paragraph position="0"> To handle syntactic, semantic, and pragmatic information, our generator represents them as plans. Plans are composed of preconditions, constraints, plan expansion, and effects. Preconditions consist of the pragmatic information that provides the criteria for selecting a plan. Constraints include syntactic conditions, such as the head feature principle, and conditions on surface strings. Plan expansion contains sub-semantic expressions for the effects, which are complete semantic expressions. Constraints and preconditions are similar, but differ in that the former must be satisfied, while the latter are retained as assumptions if not satisfied.</Paragraph>
<Paragraph position="1"> Figure 3 describes a plan relating to the semantic information LOVED:
 precond: []
 const:   (Sign:syn:pos = verb),
No preconditions exist because the expression &quot;loved&quot; has no pragmatic information. The constraints indicate that: 1) the part of speech equals verb; 2) the subcategorization information is subc([Sbj,Obj]); 3) the sem features of Sbj and Obj are the semantic arguments of the predicate LOVED; 4) the surface string is &quot;loved&quot;. There is no plan expansion because lexical information does not need to be expanded. The effects mention the semantic expression of LOVED.</Paragraph> </Section>
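As an illustration, the plan of Figure 3 could be encoded as follows; the Plan class and the rendering of constraints as predicates over a sign are our assumptions for the sketch, not the paper's notation.

    # A hedged sketch of the four-part plan representation of section
    # 3.2: preconditions (assumable), constraints (must hold), plan
    # expansion (subgoals), and effects (complete semantic expressions).
    from dataclasses import dataclass

    @dataclass
    class Plan:
        precond: list     # pragmatic conditions, held as assumptions if unproven
        const: list       # syntactic and surface-string constraints
        expansion: list   # sub-semantic expressions (none for lexical plans)
        effects: dict     # the complete semantic expression

    # The lexical plan for "loved" (Figure 3): no preconditions and no
    # expansion; constraints fix the part of speech, the subcat list
    # and the surface string.
    loved_plan = Plan(
        precond=[],
        const=[
            lambda sign: sign["syn"]["pos"] == "verb",
            lambda sign: sign["syn"]["subcat"] == ("subc", ["Sbj", "Obj"]),
            lambda sign: sign["phn"] == "loved",
        ],
        expansion=[],
        effects={"pred": "LOVED", "arg0": "Sbj", "arg1": "Obj"},
    )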
<Section position="2" start_page="121" end_page="121" type="sub_section"> <SectionTitle> 3.3. An Argumentation System For Planning </SectionTitle>
<Paragraph position="0"> A plan recognition scheme named the argumentation system was proposed by Konolige and Pollack (Konolige and Pollack, 1989). It can defeasibly reason about belief and intention ascription (defeasible reasoning and approximate reasoning are very similar, but differ in that the former addresses the result after rule application, while the latter considers just rule application), and it can process preferences over candidate ascriptions. The framework is so general and powerful that it can perform processes other than belief and intention ascription; for example, Shimazu has shown that it can model parsing mechanisms (Shimazu, 1990). The argumentation system consists of arguments. An argument is a relation between a set of propositions (the premises of the argument) and another set of propositions (the conclusion of the argument) (Konolige and Pollack, 1989: 926).</Paragraph>
<Paragraph position="1"> The system has a language containing the following operators: t(P), which indicates the truth of the proposition P; bel(A,PF), which mentions that agent A believes plan fragment PF; int(A,PF), which shows that agent A intends plan fragment PF; exp(A,P), which means that agent A expects proposition P to be true; and by(Actexp1,Actexp2,Pexp), which signifies the complex plan fragment that consists of doing the action expression Actexp2 by doing the action expression Actexp1 while the propositional expression Pexp is true (action expressions are formed from an action name and parameters; propositional expressions are formed from a property name and parameters). As arguments to these operators, the action expressions inform(S,H,Sem) and utter(S,H,Str) are introduced to mimic informing and uttering activities: the former designates that speaker S informs hearer H about the semantic content Sem; the latter indicates that speaker S utters the string Str to hearer H.</Paragraph>
<Paragraph position="2"> The plan expansion, effects and constraints mentioned in subsection 3.2 correspond to Actexp1, Actexp2 and Pexp, respectively. To represent the difference between preconditions and constraints, the operator by is revised to include preconditions as a fourth argument. Thus, the new operator by(Actexp1,Actexp2,Pexp1,Pexp2) is defined as the complex plan fragment that consists of doing Actexp2 (the effects) by doing Actexp1 (the plan expansion) while Pexp1 (the constraints) is true and Pexp2 (the preconditions) is true or held as assumptions. The plan in Figure 3 was redefined by using axiom (1); because of space limitations, abbreviations are used as necessary: for example, Pos is taken to mean the value of the part-of-speech feature of the syntactic information of a sign.</Paragraph>
<Paragraph position="3"> Axiom (2) shows another example, corresponding to the context-free grammar rule for a cleft sentence. Its plan expansion and effects indicate that if speaker S wants to inform hearer H about LF, the speaker should inform the hearer about LF1 and LF2 while observing constraints 1)-5). The constraints state that: 1) the part of speech of Sign equals that of Sign2; 2) the subcategorization and slash information of Sign is nil; 3) the subcategorization information of Sign2 equals nil; 4) the slash information of Sign2 is equivalent to Obj; 5) the surface string consists of the string &quot;It was&quot;, the string relating to Sign1, the string &quot;that&quot; and the string relating to Sign2.</Paragraph>
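For concreteness, the revised operator and axiom (1) might be rendered as follows; the tuple encoding, the constructor functions and the constraint strings are our illustrative assumptions, not the paper's formalism.

    # A hedged sketch of the four-place "by" operator: doing Actexp2
    # (the effects) by doing Actexp1 (the plan expansion) while Pexp1
    # (the constraints) holds and Pexp2 (the preconditions) holds or
    # is assumed.
    def by(actexp1, actexp2, pexp1, pexp2):
        return ("by", actexp1, actexp2, pexp1, pexp2)

    def inform(s, h, sem):      # speaker s informs hearer h about sem
        return ("inform", s, h, sem)

    def utter(s, h, string):    # speaker s utters string to hearer h
        return ("utter", s, h, string)

    # Axiom (1), the lexical plan for "loved", recast with "by"; the
    # constraint strings abbreviate the feature equations of Figure 3.
    axiom1 = by(
        utter("S", "H", "loved"),                                     # expansion
        inform("S", "H", {"pred": "LOVED", "arg0": "Sbj", "arg1": "Obj"}),  # effect
        ["Pos = verb", "Subcat = subc([Sbj, Obj])"],                  # constraints
        [],                                                           # preconditions
    )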
</Section> </Section>
<Section position="5" start_page="121" end_page="123" type="metho"> <SectionTitle> 4. Parsing And Generation </SectionTitle>
<Paragraph position="0"> The other axioms necessary for explaining the parsing and generation examples are listed in the appendix.</Paragraph>
<Section position="1" start_page="122" end_page="123" type="sub_section"> <SectionTitle> 4.1. Sentence Parsing </SectionTitle>
<Paragraph position="0"> Parsing techniques were simulated using an argumentation system in (Shimazu, 1990). Since he faithfully tried to model existing techniques, many parsing-oriented terms, such as complete and addition, were introduced. This seems to be the cause of the difficulty he experienced in integrating parsing with other processes.</Paragraph>
<Paragraph position="1"> Since syntactic, semantic and pragmatic information can be represented with the new by relation, arguments (a) and (b) enable us to simulate parsing: (a) says that true propositions can be ascribed to a speaker's beliefs; (b) states that, if a speaker is assumed to believe that E is an effect of performing plan expansion PE while constraint C is true and precondition PR is assumed to be true (the action expression expl(A,P) means that agent A expects proposition P to be assumed to be true if not fully satisfied), then it is plausible that his reason for doing PE is his intention to do E.</Paragraph>
<Paragraph position="2"> Parsing is executed as follows: first, the axioms whose constraints match an input word are collected; second, an axiom that satisfies the constraints is selected (its preconditions are asserted); third, an effect, that is, semantic information, is derived using an instance of argument (b); fourth, another instance of the argument is applied to this effect and an effect that was already derived, to obtain a new effect. If the application cannot proceed further, a new word is analyzed. Lastly, if all the words in a sentence are analyzed successfully, the execution is complete.</Paragraph>
<Paragraph position="3"> Parsing is exactly the same as plan recognition in the sense of (Konolige and Pollack, 1989: 925): Plan recognition is essentially a &quot;bottom-up&quot; recognition process, with global coherence used mostly as an evaluative measure, to eliminate ambiguous plan fragments that emerge from local cues.</Paragraph>
<Paragraph position="4"> Maximizing head elements can realize right association and minimal attachment, but handling semantic ambiguities, that is, clarifying global coherence, is a further issue.</Paragraph> </Section>
<Section position="2" start_page="123" end_page="123" type="sub_section"> <SectionTitle> 4.2. Sentence Generation </SectionTitle>
<Paragraph position="0"> Generation can be simulated using arguments (a) and (c): (c) says that, if a speaker believes that E is an effect of performing plan expansion PE while constraint C is true and precondition PR is assumed to be true, and he intends to do E, then it is plausible that he intends to do PE in order to achieve E.</Paragraph>
<Paragraph position="1"> Generation is executed in a similar way to parsing, except that the axioms are collected using semantic information and the result is a string. Figure 4 describes the generation process. The input linguistic object is equivalent to the object in Figure 2, whose surface string information is parameterized. The generation result is the input object with the instantiated surface string, that is, the object in Figure 1.</Paragraph>
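As a toy illustration of how one mechanism runs in both directions, the following sketch applies by-style axioms either to words (parsing, argument (b)) or to semantic expressions (generation, argument (c)); the naive matching and the treatment of facts and assumptions are simplifying assumptions of ours, not the paper's algorithm.

    # A hedged sketch of the shared control step. Axioms are the "by"
    # tuples from the earlier sketch; constraint checking is naive
    # membership in a set of known facts; preconditions that cannot be
    # proved are held as assumptions instead of causing failure.
    def step(axioms, item, mode, facts, assumptions):
        results = []
        for _tag, expansion, effect, consts, precond in axioms:
            trigger = expansion if mode == "parse" else effect
            if trigger[3] == item and all(c in facts for c in consts):
                # unprovable preconditions become assumptions (cf. expl)
                assumptions.extend(p for p in precond if p not in facts)
                results.append(effect if mode == "parse" else expansion)
        return results

    # Parsing matches the utter side and yields inform effects;
    # generation matches the inform side and yields utter expansions.
    facts = {"Pos = verb", "Subcat = subc([Sbj, Obj])"}
    assumptions = []
    parsed = step([axiom1], "loved", "parse", facts, assumptions)
    generated = step([axiom1], {"pred": "LOVED", "arg0": "Sbj", "arg1": "Obj"},
                     "generate", facts, assumptions)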
<Paragraph position="2"> In Figure 4, axiom (2) creates the subgoals related to the variable Y and the others (corresponding to objects (2) and (3)) because the semantic and pragmatic information of the input equals the effects and preconditions of the axiom. As the head features propagate to the linguistic object (2), execution addressing object (2) is preferred. Axiom (6) constructs subgoals by referring to the objects whose semantic information is related to the features qnt, var and rest and to the logical form concerned with the bound variable X. The head feature preference makes the generator execute axioms about object (4). This results in axiom (1), the lexical information. Similarly to the above process, the remaining subgoals (5) and (2) are executed. Finally, the surface string &quot;It was the girl that a boy loved&quot; is obtained.</Paragraph> </Section> </Section> </Paper>