<?xml version="1.0" standalone="yes"?>
<Paper uid="P89-1005">
  <Title>Abstract Unification-Based Semantic Interpretation</Title>
  <Section position="4" start_page="34" end_page="34" type="metho">
    <SectionTitle>
2 Functional Application vs.
Unification
</SectionTitle>
    <Paragraph position="0"> Example (2) is typical of the kind of semantic rules used in the standard approach to semantic interpretation in the tradition established by Richard Montague (1974) (Dowty, Wall, and Peters, 1981).</Paragraph>
    <Paragraph position="1"> In this approach, the interpretation of a complex constituent is the result of the functional application of the interpretation of one of the daughter constituents to the interpretation of the others.</Paragraph>
    <Paragraph position="2"> A problem with this approach is that if, in a rule like (2), the verb phrase itself is semantically complex, as it usually is, a lambda expression has to be used to express the verb-phrase interpretation, and then a lambda reduction must be applied to express the sentence interpretation in its simplest form (Dowty, Wall, and Peters, 1981, pp. 98-111). To use (2) to specify the interpretation of the sentence John likes Mary, the logical form for John could simply be john, but the logical form for likes Mary would have to be something like X\like(X,mary). [The notation Var\Body for lambda expressions is borrowed from Lambda Prolog (Miller and Nadathur, 1988).] The logical form for the whole sentence would then be apply(X\like(X,mary),john), which must be reduced to yield the simplified logical form like(john,mary).</Paragraph>
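The apply-and-reduce bookkeeping described above can be sketched in Python (a toy tuple encoding of the Prolog-style terms; the helper names `subst` and `reduce_term` are ours, not the paper's):

```python
# Toy beta reduction for the apply/lambda encoding described above.
# Terms: variables are strings starting with an uppercase letter;
# the lambda X\Body is ('lam', 'X', body); application is ('apply', f, a);
# a compound term like like(X,mary) is ('like', 'X', 'mary').

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def subst(term, var, val):
    """Replace every free occurrence of var in term with val."""
    if is_var(term):
        return val if term == var else term
    if isinstance(term, tuple):
        if term[0] == 'lam' and term[1] == var:   # var is rebound here
            return term
        return tuple(subst(t, var, val) for t in term)
    return term

def reduce_term(term):
    """Recursively eliminate apply(lambda, arg) redexes."""
    if isinstance(term, tuple):
        if term[0] == 'apply' and isinstance(term[1], tuple) and term[1][0] == 'lam':
            _, var, body = term[1]
            return reduce_term(subst(body, var, term[2]))
        return tuple(reduce_term(t) for t in term)
    return term

# apply(X\like(X,mary), john)  reduces to  like(john, mary)
vp = ('lam', 'X', ('like', 'X', 'mary'))
print(reduce_term(('apply', vp, 'john')))   # ('like', 'john', 'mary')
```

Each rule application in the functional-application regime creates a redex like apply(X\like(X,mary),john) that a step like `reduce_term` must then simplify away.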
    <Paragraph position="3"> Moreover, lambda expressions and the ensuing reductions would have to be introduced at many intermediate stages if we wanted to produce simplified logical forms for the interpretations of complex constituents such as verb phrases. If we want to accommodate modal auxiliaries, as in John might like Mary, we have to make sure that the verb phrase might like Mary receives the same type of interpretation as likes Mary in order to combine properly with the interpretation of the subject. If we try to maintain functional application as the only method of semantic composition, then it seems that the simplest logical form we can come up with for might like Mary is produced by the following rule: (3) sem(vp_aux_vp,</Paragraph>
    <Paragraph position="4"> [(X\apply(Aux,apply(Vp,X)), vp:[]), (Aux, aux:[]), (Vp, vp:[])]).</Paragraph>
    <Paragraph position="6"> Applying this rule to the simplest plausible logical forms for might and like Mary would produce the following logical form for might like Mary: X\apply(might,(apply(Y\like(Y,mary),X))) which must be reduced to obtain the simpler expression X\might(like(X,mary)). When this expression is used in the sentence-level rule, another reduction is required to eliminate the remaining lambda expression. The part of the reduction step that gets rid of the apply functors is to some extent an artifact of the way we have chosen to encode these expressions as Prolog terms, but the lambda reductions are not. They are inherent in the approach, and normally each rule will introduce at least one lambda expression that needs to be reduced away.</Paragraph>
    <Paragraph position="7"> It is, of course, possible to add a lambda-reduction step to the interpreter for the semantic rules, but it is both simpler and more efficient to use the feature system and unification to do explicitly what lambda expressions and lambda reduction do implicitly--assign a value to a variable embedded in a logical-form expression. According to this approach, instead of the logical form for a verb phrase being a logical predicate, it is the same as the logical form of an entire sentence, but with a variable as the subject argument of the verb and a feature on the verb phrase having that same variable as its value. The sentence interpretation rule can thus be expressed as (4) sem(s_np_vp, [(Vp, s:[]), (Np, np:[]), (Vp, vp:[subjval=Np])]). which says that the logical form of the sentence is just the logical form of the verb phrase with the subject argument of the verb phrase unified with the logical form of the subject noun phrase. If the verb phrase likes Mary is assigned the logical-form/category-expression pair (like(X,mary), vp:[subjval=X]), then the application of this rule will unify the logical form of the subject noun phrase, say john, directly with the variable X in like(X,mary) to immediately produce a sentence constituent with the logical form like(john,mary).</Paragraph>
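By contrast, the unification route needs no reduction step. A minimal first-order unifier (an illustrative sketch of the mechanism, not the paper's interpreter) shows the subjval feature directly instantiating the subject argument of the verb:

```python
# Minimal first-order unification over tuple-encoded terms (sketch).
# The VP's logical form like(X,mary) shares the variable X with its
# subjval feature, so unifying subjval with john instantiates the body.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    """Extend substitution s so a and b are equal, or return None."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def resolve(t, s):
    """Apply substitution s throughout term t."""
    t = walk(t, s)
    return tuple(resolve(x, s) for x in t) if isinstance(t, tuple) else t

# VP pair: (like(X,mary), vp:[subjval=X]); the sentence rule unifies
# subjval with the subject's logical form, john.
vp_lf, subjval = ('like', 'X', 'mary'), 'X'
s = unify(subjval, 'john', {})
print(resolve(vp_lf, s))   # ('like', 'john', 'mary')
```

No apply functors are ever built, so there is nothing left to reduce.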
    <Paragraph position="8"> Modal auxiliaries can be handled equally easily by a rule such as (5) sem(vp_aux_vp, [(Aux, vp:[subjval=S]), (Aux, aux:[argval=Vp]), (Vp, vp:[subjval=S])]).</Paragraph>
    <Paragraph position="10"> If might is assigned the logical-form/category-expression pair (might(A), aux:[argval=A]), then applying this rule to interpret the verb phrase might like Mary will unify A in might(A) with like(X,mary) to produce a constituent with the logical-form/category-expression pair (might(like(X,mary)), vp:[subjval=X]),</Paragraph>
    <Paragraph position="11"> which functions in the sentence-interpretation rule in exactly the same way as the logical-form/category-expression pair for like Mary.</Paragraph>
  </Section>
  <Section position="5" start_page="34" end_page="35" type="metho">
    <SectionTitle>
3 Are Lambda Expressions Ever Necessary?
</SectionTitle>
    <Paragraph position="0"> The approach presented above for eliminating the explicit use of lambda expressions and lambda reductions is quite general, but it does not replace all possible uses of lambda expressions in semantic interpretation. Consider the sentence John and Bill like Mary. The simplest logical form for the distributive reading of this sentence would be and(like(john,mary),like(bill,mary)).</Paragraph>
    <Paragraph position="1"> If the verb phrase is assigned the logical-form/category-expression pair (like(X,mary), vp:[subjval=X]), as we have suggested, then we have a problem: Only one of john or bill can be directly unified with X, but to produce the desired logical form, we seem to need two instances of like(X,mary), with two different instantiations of X.</Paragraph>
    <Paragraph position="2"> Another problem arises when a constituent that normally functions as a predicate is used as an argument instead. Common nouns, for example, are normally used to make direct predications, so a noun like senator might be assigned the logical-form/category-expression pair (senator(X), nbar:[argval=X]) according to the pattern we have been following.</Paragraph>
    <Paragraph position="3"> (Note that we do not have &amp;quot;noun&amp;quot; as a syntactic category; rather, a common noun is simply treated as a lexical &amp;quot;n-bar.&amp;quot;) It is widely recognized, however, that there are &amp;quot;intensional&amp;quot; adjectives and adjective phrases, such as former, that need to be treated as higher-level predicates or operators on predicates, so that in an expression like former senator, the noun senator is not involved in directly making a predication, but instead functions as an argument to former. We can see that this must be the case from the observation that a former senator is no longer a senator. The logical form we have assigned to senator, however, is not literally that of a predicate, but rather that of a complete formula with a free variable. We therefore need some means to transform this formula with its free variable into an explicit predicate to be an argument of former. The introduction of lambda expressions provides the solution to this problem, because the transformation we require is exactly what is accomplished by lambda abstraction. The following rule shows how this can be carried out in practice:</Paragraph>
    <Paragraph position="5"> This rule requires the logical-form/category-expression pair assigned to an intensional adjective phrase to be something like (former(P,Y), adjp:[type=intensional, argval1=P, argval2=Y]), where former(P,Y) means that Y is a former P. The daughter nbar is required to be as previously supposed. The rule creates a lambda expression, by unifying the bound variable with the argument of the daughter nbar and making the logical form of the daughter nbar the body of the lambda expression, and unifies the lambda expression with the first argument of the adjp. The second argument of the adjp becomes the argument of the mother nbar. Applying this rule to former senator will thus produce a constituent with the logical-form/category-expression pair</Paragraph>
    <Paragraph position="7"> (former(X\senator(X),Y), nbar:[argval=Y]).</Paragraph>
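The abstraction step the rule performs, repackaging a formula with a free variable as an explicit lambda expression, is a one-liner in the toy tuple encoding (`abstract` is a hypothetical helper, not part of the paper's formalism):

```python
# Sketch of the abstraction step in the intensional-adjective rule:
# an nbar's logical form senator(X), together with its argval feature X,
# is repackaged as the lambda expression X\senator(X), which then fills
# the adjective's first argument slot.

def abstract(body, argval):
    """Build the lambda expression argval\\body from an nbar pair."""
    return ('lam', argval, body)

nbar_lf, nbar_argval = ('senator', 'X'), 'X'
pred = abstract(nbar_lf, nbar_argval)        # X\senator(X)

# former(P,Y) with P unified with the new lambda expression:
mother_lf = ('former', pred, 'Y')
print(mother_lf)   # ('former', ('lam', 'X', ('senator', 'X')), 'Y')
```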
    <Paragraph position="8"> This solution to the second problem also solves the first problem. Even in the standard lambda-calculus-based approach, the only way in which multiple instances of a predicate expression applied to different arguments can arise from a single source is for the predicate expression to appear as an argument to some other expression that contains multiple instances of that argument.</Paragraph>
    <Paragraph position="9"> Since our approach requires turning a predicate into an explicit lambda expression if it is used as an argument, by the time we need multiple instances of the predicate, it is already in the form of a lambda expression. We can show how this works by encoding a Montagovian (Dowty, Wall, and Peters, 1981) treatment of conjoined subject noun phrases within our approach. The major feature of this treatment is that noun phrases act as higher-order predicates of verb phrases, rather than the other way around as in the simpler rules presented in Sections 1 and 2. In the Montagovian treatment, a proper noun such as John is given an interpretation equivalent to P\P(john), so that when we apply it to a predicate like run in interpreting John runs we get something like apply(P\P(john),run) which reduces to run(john). With this in mind, consider the following two rules for the interpretation of conjoined subject noun phrases and simple declarative sentences. The first of these rules gives a Montagovian treatment of conjoined noun phrases, and the second gives a Montagovian treatment of simple declarative sentences. Both of these rules assume that a proper noun such as John would have a logical-form/category-expression pair like (apply(P,john), np:[argval=P]).</Paragraph>
    <Paragraph position="10"> In (7) it is assumed that the conjunction and would have a logical-form/category-expression pair like (and(P1,P2), conj:[argval1=P1, argval2=P2]).</Paragraph>
    <Paragraph position="12"> In (7) the logical forms of the two conjoined daughter nps are unified with the two arguments of the conjunction, and the arguments of the daughter nps are unified with each other and with the single argument of the mother np. Thus applying (7) to interpret John and Bill yields a constituent with the logical-form/category-expression pair (and(apply(P,john),apply(P,bill)), np:[argval=P]).</Paragraph>
    <Paragraph position="13"> In (8) an explicit lambda expression is constructed out of the logical form of the vp daughter in the same way a lambda expression was constructed in (6), and this lambda expression is unified with the argument of the subject np. For the sentence John and Bill like Mary, this would produce the logical form and(apply(X\like(X,mary),john), apply(X\like(X,mary),bill)), which can be reduced to and(like(john,mary),like(bill,mary)).</Paragraph>
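Under the same toy encoding (hypothetical helper names, ours rather than the paper's), the conjoined-subject derivation can be checked end to end: substituting X\like(X,mary) for P and then beta-reducing yields the desired logical form.

```python
# End-to-end sketch of the conjoined-subject example: unify P with
# X\like(X,mary) inside and(apply(P,john),apply(P,bill)), then reduce.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def subst(term, var, val):
    """Replace free occurrences of var in a tuple-encoded term."""
    if is_var(term):
        return val if term == var else term
    if isinstance(term, tuple):
        if term[0] == 'lam' and term[1] == var:   # var is rebound here
            return term
        return tuple(subst(t, var, val) for t in term)
    return term

def reduce_term(term):
    """Recursively eliminate apply(lambda, arg) redexes."""
    if isinstance(term, tuple):
        if term[0] == 'apply' and isinstance(term[1], tuple) and term[1][0] == 'lam':
            _, var, body = term[1]
            return reduce_term(subst(body, var, term[2]))
        return tuple(reduce_term(t) for t in term)
    return term

np_lf = ('and', ('apply', 'P', 'john'), ('apply', 'P', 'bill'))
vp_pred = ('lam', 'X', ('like', 'X', 'mary'))        # X\like(X,mary)
sentence = reduce_term(subst(np_lf, 'P', vp_pred))
print(sentence)   # ('and', ('like', 'john', 'mary'), ('like', 'bill', 'mary'))
```

Because the predicate enters the derivation already as a lambda expression, both conjuncts get their own instantiation of X without conflict.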
  </Section>
  <Section position="6" start_page="35" end_page="38" type="metho">
    <SectionTitle>
4 Theoretical Foundations of
Unification-Based Semantics
</SectionTitle>
    <Paragraph position="0"> The examples presented above ought to be convincing that a unification-based formalism can be a powerful tool for specifying the interpretation of natural-language expressions. What may not be clear is whether there is any reasonable theoretical foundation for this approach, or whether it is just so much unprincipled &amp;quot;feature hacking.&amp;quot; The informal explanations we have provided of how particular rules work, stated in terms of unifying the logical form for constituent X with the appropriate variable in the logical form for constituent Y, may suggest that the latter is the case. If no constraints are placed on how such a formalism is used, it is certainly possible to apply it in ways that have no basis in any well-founded semantic theory. Nevertheless, it is possible to place restrictions on the formalism to ensure that the rules we write have a sound theoretical basis, while still permitting the sorts of rules that seem to be needed to specify the semantic interpretation of natural languages.</Paragraph>
    <Paragraph position="1"> The main question that arises in this regard is whether the semantic rules specify the interpretation of a natural-language expression in a compositional fashion. That is, does every rule assign to a mother constituent a well-defined interpretation that depends solely on the interpretations of the daughter constituents? If the interpretation of a constituent is taken to be just the interpretation of its logical-form expression, the answer is clearly &amp;quot;no.&amp;quot; In our formalism the logical-form expression assigned to a mother constituent depends on both the logical-form expressions and the category expressions assigned to its daughters.</Paragraph>
    <Paragraph position="2"> As long as both category expressions and logical-form expressions have a theoretically sound basis, however, there is no reason that both should not be taken into account in a semantic theory; so, we will define the interpretation of a constituent based on both its category and its logical form.</Paragraph>
    <Paragraph position="3"> Taking the notion of interpretation in this way, we will explain how our approach can be made to preserve compositionality. First, we will show how to give a well-defined interpretation to every constituent; then, we will sketch the sort of restrictions on the formalism one needs to guarantee that any interpretation-preserving substitution for a daughter constituent also preserves the interpretation of the mother constituent.</Paragraph>
    <Paragraph position="4"> The main problem in giving a well-defined interpretation to every constituent is how to interpret a constituent whose logical-form expression contains free variables that also appear in feature values in the constituent's category expression. Recall the rule we gave for combining auxiliaries with verb phrases: (5) sem(vp_aux_vp, [(Aux, vp:[subjval=S]), (Aux, aux:[argval=Vp]), (Vp, vp:[subjval=S])]).</Paragraph>
    <Paragraph position="6"> This rule accepts daughter constituents having logical-form/category-expression pairs such as (might(A), aux:[argval=A]) and (like(X,mary), vp:[subjval=X]) to produce a mother constituent having the logical-form/category-expression pair (might(like(X,mary)), vp:[subjval=X]). Each of these pairs has a logical-form expression containing a free variable that also occurs as a feature value in its category expression. The simplest way to deal with logical-form/category-expression pairs such as these is to regard them in the way that syntactic-category expressions in unification grammar can be regarded--as abbreviations for the set of all their well-formed fully instantiated substitution instances.</Paragraph>
    <Paragraph position="7"> To establish some terminology, we will say that a logical-form/category-expression pair containing no free-variable occurrences has a &amp;quot;basic interpretation,&amp;quot; which is simply the ordered pair consisting of the interpretation of the logical-form expression and the interpretation of the category expression. Since there are no free variables involved, basic interpretations should be unproblematic. The logical-form expression will simply be a closed well-formed expression of some ordinary logical language, and its interpretation will be whatever the usual interpretation of that expression is in the relevant logic. The category expression can be taken to denote a fully instantiated grammatical category of the sort typically found in unification grammars. The only unusual property of this category is that some of its features may have logical-form interpretations as values, but, as these will always be interpretations of expressions containing no free-variable occurrences, they will always be well defined.</Paragraph>
    <Paragraph position="8"> Next, we define the interpretation of an arbitrary logical-form/category-expression pair to be the set of basic interpretations of all its well-formed substitution instances that contain no free-variable occurrences. For example, the interpretation of a constituent with the logical-form/category-expression pair (might(like(X,mary)), vp:[subjval=X]) would consist of a set containing the basic interpretations of such pairs as (might(like(john,mary)), vp:[subjval=john]),</Paragraph>
    <Paragraph position="10"> and so forth.</Paragraph>
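This definition can be made concrete with a toy enumeration (the tuple encoding and the three-individual domain are illustrative, not from the paper):

```python
# Sketch of "the interpretation of a pair is the set of basic
# interpretations of all its ground substitution instances."

DOMAIN = ['john', 'bill', 'mary']

def subst(term, var, val):
    """Replace every occurrence of var in a tuple-encoded term."""
    if term == var:
        return val
    if isinstance(term, tuple):
        return tuple(subst(t, var, val) for t in term)
    return term

# (might(like(X,mary)), vp:[subjval=X]) as a tuple-encoded pair:
pair = (('might', ('like', 'X', 'mary')), ('vp', ('subjval', 'X')))

# Its interpretation: one ground instance per individual in the domain.
interp = {subst(pair, 'X', d) for d in DOMAIN}
print(len(interp))   # 3
```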
    <Paragraph position="11"> This provides a well-defined interpretation for every constituent, so we can now consider what restrictions we can place on the formalism to guarantee that any interpretation-preserving substitution for a daughter constituent also preserves the interpretation of its mother constituent. The first restriction we need rules out constituents that would have degenerate interpretations: No semantic rule or semantic lexical specification may contain both free and bound occurrences of the same variable in a logical-form/category-expression pair.</Paragraph>
    <Paragraph position="12"> To see why this restriction is needed, consider the logical-form/category-expression pair (every(X,man(X),die(X)), np:[boundvar=X, bodyval=die(X)]),</Paragraph>
    <Paragraph position="13"> which might be the substitution instance of a daughter constituent that would be selected in a rule that combines noun phrases with verb phrases. The problem with such a pair is that it does not have any well-formed substitution instances that contain no free-variable occurrences. The variable X must be left uninstantiated in order for the logical-form expression every(X,man(X),die(X)) to be well formed, but this requires a free occurrence of X in np:[boundvar=X, bodyval=die(X)]. Thus this pair will be assigned the empty set as its interpretation. Since any logical-form/category-expression pair that contains both free and bound occurrences of the same variable will receive this degenerate interpretation, any other such pair could be substituted for this one without altering the interpretations of the daughter constituent substitution instances that determine the interpretation of the mother constituent. It is clear that this would normally lead to gross violations of compositionality, since the daughter substitution instances selected for the noun phrases every man, no woman, and some dog would all receive the same degenerate interpretation under this scheme.</Paragraph>
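A small check (a toy well-formedness test in our illustrative encoding) makes the dilemma explicit: grounding X destroys the well-formedness of the every expression, while leaving X free leaves a free variable in the category expression, so no well-formed ground instance exists.

```python
# Why the boundvar/bodyval pair receives the empty interpretation.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def subst(term, var, val):
    if term == var:
        return val
    if isinstance(term, tuple):
        return tuple(subst(t, var, val) for t in term)
    return term

def lf_well_formed(lf):
    """every's first argument slot must still be a variable."""
    if isinstance(lf, tuple) and lf[0] == 'every':
        return is_var(lf[1])
    return True

def has_free_vars(term):
    if is_var(term):
        return True
    if isinstance(term, tuple):
        return any(has_free_vars(t) for t in term)
    return False

pair = (('every', 'X', ('man', 'X'), ('die', 'X')),
        ('np', ('boundvar', 'X'), ('bodyval', ('die', 'X'))))

good = []
for d in ['john', 'bill', 'mary']:
    lf, cat = subst(pair, 'X', d)
    if lf_well_formed(lf) and not has_free_vars(cat):
        good.append((lf, cat))
print(good)   # [] -- no well-formed ground instances
```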
    <Paragraph position="14"> This restriction may appear to be so constraining as to rule out certain potentially useful ways of writing semantic rules, but in fact it is generally possible to rewrite such rules in ways that do not violate the restriction. For example, in place of the sort of logical-form/category-expression pair we have just ruled out, we can fairly easily rewrite the relevant rules to select daughter substitution instances such as (every(X,man(X),die(X)), np:[bodypred=X\die(X)]), which does not violate the constraint and has a completely straightforward interpretation.</Paragraph>
    <Paragraph position="15"> Having ruled out constituents with degenerate interpretations, the principal remaining problem is how to exclude rules that depend on properties of logical-form expressions over and above their interpretations. For example, suppose that the order of conjuncts does not affect the interpretation of a logical conjunction, according to the interpretation of the logical-form language. That is, and(p,q) would have the same interpretation as and(q,p). The potential problem that this raises is that we might write a semantic rule that contains both a logical-form expression like and(P,Q) in the specification of a daughter constituent and the variable P in the logical form of the mother constituent. This would be a violation of compositionality, because the interpretation of the mother would depend on the interpretation of the left conjunct of a conjunction, even though, according to the semantics of the logical-form language, it makes no sense to distinguish the left and right conjuncts. If order of conjunction does not affect meaning, we ought to be able to substitute a daughter with the logical form and(q,p) for one with the logical form and(p,q) without affecting the interpretation assigned to the mother, but clearly, in this case, the interpretation of the mother would be affected.</Paragraph>
    <Paragraph position="16"> It is not clear that there is any uniquely optimal set of restrictions that guarantees that such violations of compositionality cannot occur. Indeed, since unification formalisms in general have Turing machine power, it is quite likely that there is no computable characterization of all and only the sets of semantic rules that are compositional. Nevertheless, one can describe sets of restrictions that do guarantee compositionality, and which seem to provide enough power to express the sorts of semantic rules we need to use to specify the semantics of natural languages. One fairly natural way of restricting the formalism to guarantee compositionality is to set things up so that unifications involving logical-form expressions are generally made against variables, so that it is possible neither to extract subparts of logical-form expressions nor to filter on the syntactic form of logical-form expressions. The only exception to this restriction that seems to be required in practice is to allow for rules that assemble and disassemble lambda expressions with respect to their bodies and bound variables. So long as no extraction from inside the body of a lambda expression is allowed, however, compositionality is preserved.</Paragraph>
    <Paragraph position="17"> It is possible to define a set of restrictions on the form of semantic rules that guarantee that no rule extracts subparts (other than the body or bound variable of a lambda expression) of a logical-form expression or filters on the syntactic form of a logical-form expression. The statement of these restrictions is straightforward, but rather long and tedious, so we omit the details here. We will simply note that none of the sample rules presented in this paper involve any such extraction or filtering.</Paragraph>
  </Section>
  <Section position="7" start_page="38" end_page="40" type="metho">
    <SectionTitle>
5 The Semantics of Long-
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="38" end_page="40" type="sub_section">
      <SectionTitle>
Distance Dependencies
</SectionTitle>
      <Paragraph position="0"> The main difficulty that arises in formulating semantic-interpretation rules is that constituents frequently appear syntactically in places that do not directly reflect their semantic role. Semantically, the subject of a sentence is one of the arguments of the verb, so it would be much easier to produce logical forms for sentences if the subject were part of the verb phrase. The use of features such as subjval, in effect, provides a mechanism for taking the interpretation of the subject from the place where it occurs and inserting it into the verb phrase interpretation where it &amp;quot;logically&amp;quot; belongs. The way features can be manipulated to accomplish this is particularly striking in the case of long-distance dependencies, such as those in WH-questions. For the sentence Which girl might John like?, the simplest plausible logical form would be something like which(X,girl(X),might(like(john,X))), where the question-forming operator which is treated as a generalized quantifier whose &amp;quot;arguments&amp;quot; consist of a bound variable, a restriction, and a body.</Paragraph>
      <Paragraph position="1"> The problem is how to get the variable X to link the part of the logical form that comes from the fronted interrogative noun phrase with the argument of like that corresponds to the noun phrase gap at the end of the verb phrase. To solve this problem, we can use a technique called &amp;quot;gap-threading.&amp;quot; This technique was introduced in unification grammar to describe the syntax of constructions with long-distance dependencies (Karttunen, 1986) (Pereira and Shieber, 1987, pp. 125-129), but it works equally well for specifying their semantics. The basic idea is to use a pair of features, gapvalsin and gapvalsout, to encode a list of semantic &amp;quot;gap fillers&amp;quot; to be used as the semantic interpretations of syntactic gaps, and to thread that list along to the points where the gaps occur.</Paragraph>
      <Paragraph position="2"> These gap fillers are often just the bound variables introduced by the constructions that permit gaps to occur.</Paragraph>
      <Paragraph position="3"> The following semantic rules illustrate how this mechanism works:</Paragraph>
      <Paragraph position="5"> This is the semantic-interpretation rule for a WH-question with a long-distance dependency. The syntactic form of such a sentence is an interrogative noun phrase followed by a yes/no question with a noun phrase gap. This rule expects the interrogative noun phrase which girl to have a logical-form/category-expression pair such as (which(X,girl(X),Bodyval), np:[type=interrog, bodypred=X\Bodyval]).</Paragraph>
      <Paragraph position="6"> The feature bodypred holds a lambda expression whose body and bound variable are unified respectively with the body and the bound variable of the which expression. In (9) the body of this lambda expression is unified with the logical form of the embedded yes/no question, and the gapvalsin feature is set to be a list containing the bound variable of the lambda expression. This list is actually used as a stack, to accommodate multiply nested filler-gap dependencies. Since this form of question cannot be embedded in other constructions, however, we know that in this case there will be no other gap-fillers already on the list.</Paragraph>
      <Paragraph position="7"> This is the rule that provides the logical form for empty noun phrases: (10) sem(empty_np, [(Val, np:[gapvalsin=[Val|ValRest], gapvalsout=ValRest])]).</Paragraph>
      <Paragraph position="8"> Notice that it has a mother category, but no daughter categories. The rule simply says that the logical form of an empty np is the first element on its list of semantic gap-fillers, and that this element is &amp;quot;popped&amp;quot; from the gap-filler list. That is, the gapvalsout feature takes as its value the tail of the value of the gapvalsin feature.</Paragraph>
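In list terms the empty-np rule is just a pop operation (an illustrative sketch; the Prolog version does this by unifying against [Val|ValRest]):

```python
# The empty-np rule as list manipulation: the gap's logical form is
# the head of gapvalsin, and gapvalsout is the tail.

def empty_np(gapvalsin):
    """Return (logical form, gapvalsout) for an empty noun phrase."""
    val, *rest = gapvalsin          # gapvalsin = [Val|ValRest]
    return val, rest

lf, out = empty_np(['X'])
print(lf, out)   # X []
```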
      <Paragraph position="9"> We now show two rules that illustrate how a list of gap-fillers is passed along to the points where the gaps they fill occur.</Paragraph>
      <Paragraph position="10"> (11) sem(vp_aux_vp, [(Aux, vp:[subjval=S, gapvalsin=In, gapvalsout=Out]), (Aux, aux:[argval=Vp]), (Vp, vp:[subjval=S, gapvalsin=In, gapvalsout=Out])]).</Paragraph>
      <Paragraph position="11"> This semantic rule for verb phrases formed by an auxiliary followed by a verb phrase illustrates the typical use of the gap features to &amp;quot;thread&amp;quot; the list of gap fillers through the syntactic structure of the sentence to the points where they are needed. An auxiliary verb cannot be or contain a WH-type gap, so there are no gap features on the category aux. Thus the gap features on the mother vp are simply unified with the corresponding features on the daughter vp.</Paragraph>
      <Paragraph position="12"> A more complex case is illustrated by the semantic rule for verb phrases that consist of a verb phrase and a prepositional phrase. Since WH-gaps can occur in either verb phrases or prepositional phrases, the rule threads the list carried by the gapvalsin feature of the mother vp first through the daughter vp and then through the daughter pp. This is done by unifying the mother vp's gapvalsin feature with the daughter vp's gapvalsin feature, the daughter vp's gapvalsout feature with the daughter pp's gapvalsin feature, and finally the daughter pp's gapvalsout feature with the mother vp's gapvalsout feature. Since a gap-filler is removed from the list once it has been &amp;quot;consumed&amp;quot; by a gap, this way of threading ensures that fillers and gaps will be matched in a last-in-first-out fashion, which seems to be the general pattern for English sentences with multiple filler-gap dependencies. (This does not handle &amp;quot;parasitic gap&amp;quot; constructions, but these are very rare and at present there seems to be no really convincing linguistic account of when such constructions can be used.) Taken altogether, these rules push the quantified variable of the interrogative noun phrase onto the list of gap values encoded in the feature gapvalsin on the embedded yes/no question. The list of gap values gets passed along by the gap-threading mechanism, until the empty-noun-phrase rule pops the variable off the gap values list and uses it as the logical form of the noun phrase gap. Then the entire logical form for the embedded yes/no question is unified with the body of the logical form for the interrogative noun phrase, producing the desired logical form for the whole sentence.</Paragraph>
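The last-in-first-out matching that this threading produces can be simulated with ordinary list operations (a hypothetical simulation of the data flow, not the paper's rules):

```python
# Gap-threading sketch: each constituent consumes some gap fillers and
# passes the remainder on. Threading mother -> vp -> pp makes fillers
# match gaps last-in-first-out.

def consume_gaps(n_gaps, gapvalsin):
    """A constituent containing n_gaps gaps pops that many fillers."""
    fillers = gapvalsin[:n_gaps]
    return fillers, gapvalsin[n_gaps:]      # (consumed, gapvalsout)

# Two stacked fillers, with the most recently pushed filler on top:
gapvalsin = ['X2', 'X1']                    # X2 pushed most recently

# Thread the list first through the daughter vp, then the daughter pp:
vp_fillers, after_vp = consume_gaps(1, gapvalsin)   # vp contains one gap
pp_fillers, gapvalsout = consume_gaps(1, after_vp)  # pp contains one gap

print(vp_fillers, pp_fillers, gapvalsout)   # ['X2'] ['X1'] []
```

The earlier constituent consumes the most recently pushed filler, which is exactly the nesting pattern the text describes.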
      <Paragraph position="13"> This treatment of the semantics of long-distance dependencies provides us with an answer to the question of the relative expressive power of our approach compared with the conventional lambda-calculus-based approach. We know that the unification-based approach is at least as powerful as the conventional approach, because the conventional approach can be embedded directly in it, as illustrated by the examples in Section 3. What about the other way around? Many unification-based rules have direct lambda-calculus-based counterparts; for example, (2) is a counterpart of (4), and (3) is the counterpart of (5). Once we introduce gap-threading, however, the correspondence breaks down. In the conventional approach, each rule applies only to constituents whose semantic interpretation is of some particular single semantic type, say, functions from individuals to propositions. If every free variable in our approach is treated as a lambda variable in the conventional approach, then no one rule can cover two expressions whose interpretation essentially involves different numbers of variables, since these would be of different semantic types. Hence, rules like (11) and (12), which cover constituents containing any number of gaps, would have to be replaced in the conventional approach by a separate rule for each possible number of gaps. Thus, our formalism enables us to write more general rules than is possible taking the conventional approach.</Paragraph>
    </Section>
  </Section>
</Paper>