<?xml version="1.0" standalone="yes"?>
<Paper uid="P83-1009">
  <Title>AN IMPROPER TREATMENT OF QUANTIFICATION IN ORDINARY ENGLISH</Title>
  <Section position="1" start_page="0" end_page="0" type="metho">
    <SectionTitle>
AN IMPROPER TREATMENT OF QUANTIFICATION IN ORDINARY ENGLISH
</SectionTitle>
    <Paragraph position="0"/>
  </Section>
  <Section position="2" start_page="0" end_page="57" type="metho">
    <SectionTitle>
1. The Problem
</SectionTitle>
    <Paragraph position="0"> Consider the sentence In most democratic countries most politicians can fool most of the people on almost every issue most of the time.</Paragraph>
    <Paragraph position="1"> In the currently standard ways of representing quantification in logical form, this sentence has 120 different readings, or quantifier scopings. Moreover, they are truly distinct, in the sense that for any two readings, there is a model that satisfies one and not the other. With the standard logical forms produced by the syntactic and semantic translation components of current theoretical frameworks and implemented systems, it would seem that an inferencing component must process each of these 120 readings in turn in order to produce a best reading. Yet it is obvious that people do not entertain all 120 possibilities, and people really do understand the sentence. The problem is not just that inferencing is required for disambiguation. It is that people never do disambiguate completely. A single quantifier scoping is never chosen. (VanLehn \[1978\] and Bobrow and Webber \[1980\] have also made this point.) In the currently standard logical notations, it is not clear how this vagueness can be represented. 1 What is needed is a logical form for such sentences that is neutral with respect to the various scoping possibilities. It should be a notation that can be used easily by an inferencing component. That is, it should be easy to define deductive operations on it, and the logical forms of typical sentences should not be unwieldy.</Paragraph>
    <Paragraph position="2"> Moreover, when the inferencing component discovers further information about dependencies among sets of entities, it should entail only a minor modification in the logical form, such as conjoining a new proposition, rather than a major restructuring. Finally, since the notion of &amp;quot;scope&amp;quot; is a powerful tool in semantic analysis, there should be a fairly transparent relationship between dependency information in the notation and standard representations of scope.</Paragraph>
    <Paragraph position="3"> Three possible approaches are ruled out by these criteria.</Paragraph>
    <Paragraph position="4"> 1. Representing the sentence as a disjunction of the various readings. This is impossibly unwieldy.</Paragraph>
    <Paragraph position="5"> 1 Many people feel that most sentences exhibit too few quantifier scope ambiguities for much effort to be devoted to this problem, but a casual inspection of several sentences from any text should convince almost everyone otherwise.</Paragraph>
    <Paragraph position="6"> 2. Using as the logical notation a triple consisting of an expression of the propositional content of the sentence, a store of quantifier structures (e.g., as in Cooper \[1975\], Woods \[1978\]), and a set of constraints on how the quantifier structures could be unstored. This would adequately capture the vagueness, but it is difficult to imagine defining inference procedures that would work on such an object. Indeed, Cooper did no inferencing; Woods did little and chose a default reading heuristically before doing so.</Paragraph>
    <Paragraph position="7"> 3. Using a set-theoretic notation like that of (1) below, pushing all the universal quantifiers to the outside and the existential quantifiers to the inside, and replacing the existentially quantified variables by Skolem functions of all the universally quantified variables. Then when inferencing discovers a nondependency, one of the arguments is dropped from one of the Skolem functions. One difficulty with this is that it yields representations that are too general, being satisfied by models that correspond to none of the possible intended interpretations. Moreover, in sentences in which one quantified noun phrase syntactically embeds another (what Woods \[1978\] calls &amp;quot;functional nesting&amp;quot;), as in Every representative of a company arrived.</Paragraph>
    <Paragraph position="8"> no representation that is neutral between the two is immediately apparent. With wide scope, &amp;quot;a company&amp;quot; is existential, with narrow scope it is universal, and a shift in commitment from one to the other would involve significant restructuring of the logical form.</Paragraph>
    <Paragraph position="9"> The approach taken here uses the notion of the &amp;quot;typical element&amp;quot; of a set, to produce a flat logical form of conjoined atomic predications. A treatment has been worked out only for monotone increasing determiners; this is described in Section 2. In Section 3 some ideas about other determiners are discussed. An inferencing component, such as that explored in Hobbs \[1976, 1980\], capable of resolving coreference, doing coercions, and refining predicates, will be assumed (but not discussed). Thus, translating the quantifier scoping problem into one of those three processes will count as a solution for the purposes of this paper.</Paragraph>
    <Paragraph position="10"> This problem has received little attention in linguistics and computational linguistics. Those who have investigated the processes by which a rich knowledge base is used in interpreting texts have largely ignored quantifier ambiguities.</Paragraph>
    <Paragraph position="11"> Those who have studied quantifiers have generally noted that inferencing is required for  disambiguation, without attempting to provide a notation that would accommodate this inferencing. There are some exceptions. Bobrow and Webber \[1980\] discuss many of the issues involved, but it is not entirely clear what their proposals are.</Paragraph>
    <Paragraph position="12"> The work of Webber \[1978\] and Mellish \[1980\] is discussed below.</Paragraph>
  </Section>
  <Section position="3" start_page="57" end_page="60" type="metho">
    <SectionTitle>
2. Monotone Increasing Determiners
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="57" end_page="57" type="sub_section">
      <SectionTitle>
2.1. A Set-Theoretic Notation
</SectionTitle>
      <Paragraph position="0"> Let us represent the pattern of a simple intransitive sentence with a quantifier as &amp;quot;Q Ps R&amp;quot;. In &amp;quot;Most men work,&amp;quot; Q = &amp;quot;most&amp;quot;, P = &amp;quot;man&amp;quot;, and R = &amp;quot;work&amp;quot;. Q will be referred to as a determiner. A determiner Q is monotone increasing if and only if for any R1 and R2 such that the denotation of R1 is a subset of the denotation of R2, &amp;quot;Q Ps R1&amp;quot; implies &amp;quot;Q Ps R2&amp;quot; (Barwise and Cooper \[1981\]). For example, letting R1 = &amp;quot;work hard&amp;quot; and R2 = &amp;quot;work&amp;quot;, since &amp;quot;most men work hard&amp;quot; implies &amp;quot;most men work,&amp;quot; the determiner &amp;quot;most&amp;quot; is monotone increasing. Intuitively, making the verb phrase more general doesn't change the truth value. Other monotone increasing determiners are &amp;quot;every&amp;quot;, &amp;quot;some&amp;quot;, &amp;quot;many&amp;quot;, &amp;quot;several&amp;quot;, &amp;quot;any&amp;quot; and &amp;quot;a few&amp;quot;. &amp;quot;No&amp;quot; and &amp;quot;few&amp;quot; are not.</Paragraph>
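      <Paragraph> In symbols, writing $[\![\cdot]\!]$ for denotation (a restatement of the definition just given, not the paper's own formula): Q is monotone increasing if and only if
$$[\![R_1]\!] \subseteq [\![R_2]\!] \;\Longrightarrow\; \bigl(Q\ P\text{s}\ R_1 \models Q\ P\text{s}\ R_2\bigr).$$
</Paragraph>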
      <Paragraph position="1"> Any noun phrase Q Ps with a monotone increasing determiner Q involves two sets, an intensionally defined set denoted by the noun phrase minus the determiner, the set of all Ps, and a nonconstructively specified set denoted by the entire noun phrase. The determiner Q can be viewed as expressing a relation between these two sets. Thus the sentence pattern Q Ps R can be represented as follows:</Paragraph>
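      <Paragraph> In reconstructed notation (the paper's own symbols may differ slightly):
$$(1)\quad (\exists s)\bigl(Q(s,\{x \mid P(x)\}) \wedge (\forall y)(y \in s \supset R(y))\bigr)$$
</Paragraph>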
      <Paragraph position="3"> That is, there is a set s which bears the relation Q to the set of all Ps, and R is true of every element of s. (Barwise and Cooper call s a &amp;quot;witness set&amp;quot;.) &amp;quot;Most men work&amp;quot; would be</Paragraph>
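      <Paragraph> (again in reconstructed notation)
$$(\exists s)\bigl(\mathit{most}(s,\{x \mid \mathit{man}(x)\}) \wedge (\forall y)(y \in s \supset \mathit{work}(y))\bigr)$$
</Paragraph>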
      <Paragraph position="5"> For collective predicates such as &amp;quot;meet&amp;quot; and &amp;quot;agree&amp;quot;, R would apply to the set rather than to each of its elements.</Paragraph>
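      <Paragraph> That is, the collective version of the pattern would presumably be (reconstructed)
$$(\exists s)\bigl(Q(s,\{x \mid P(x)\}) \wedge R(s)\bigr).$$
</Paragraph>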
      <Paragraph position="7"> Sometimes with singular noun phrases and determiners like &amp;quot;a&amp;quot;, &amp;quot;some&amp;quot; and &amp;quot;any&amp;quot; it will be more convenient to treat the determiner as a relation between a set and one of its elements.</Paragraph>
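      <Paragraph> On that treatment the pattern would come out roughly as (reconstructed)
$$(\exists x)\bigl(Q(x,\{y \mid P(y)\}) \wedge R(x)\bigr).$$
</Paragraph>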
      <Paragraph position="9"> According to notation (1) there are two aspects to quantification. The first, which concerns a relation between two sets, is discussed in Section 2.2. The second aspect involves a predication made about the elements of one of the sets. The approach taken here to this aspect of quantification is somewhat more radical, and depends on a view of semantics that might be called &amp;quot;ontological promiscuity&amp;quot;. This is described briefly in Section 2.3. Then in Section 2.4 the scope-neutral representation is presented.</Paragraph>
    </Section>
    <Section position="2" start_page="57" end_page="58" type="sub_section">
      <SectionTitle>
2.2. Determiners as Relations between Sets
</SectionTitle>
      <Paragraph position="0"> Expressing determiners as relations between sets allows us to express as axioms in a knowledge base more refined properties of the determiners than can be captured by representing them in terms of the standard quantifiers.</Paragraph>
      <Paragraph position="1"> First let us note that, with the proper definitions of &amp;quot;every&amp;quot; and &amp;quot;some&amp;quot;,</Paragraph>
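      <Paragraph> for instance (a reconstruction; the exact definitions are not reproduced here)
$$\mathit{every}(s, s_1) \equiv s = s_1 \qquad\qquad \mathit{some}(x, s_1) \equiv x \in s_1,$$
</Paragraph>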
      <Paragraph position="3"> formula (1) reduces to the standard notation.</Paragraph>
      <Paragraph position="4"> (This can be seen as explaining why the restriction is implicative in universal quantification and conjunctive in existential quantification.) A meaning postulate for &amp;quot;most&amp;quot; that is perhaps too mathematical is</Paragraph>
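      <Paragraph> possibly a cardinality condition of roughly this shape (a reconstruction, not the paper's own formula):
$$\mathit{most}(s, s_1) \supset |s| > \tfrac{1}{2}\,|s_1|$$
</Paragraph>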
      <Paragraph position="6"> Next, consider &amp;quot;any&amp;quot;. Instead of trying to force an interpretation of &amp;quot;any&amp;quot; as a standard quantifier, let us take it to mean &amp;quot;a random  element of&amp;quot;.</Paragraph>
      <Paragraph position="7"> (2) (∀x,s) any(x,s) ≡ x = random(s), where &amp;quot;random&amp;quot; is a function that returns a random element of a set. This means that the prototypical use of &amp;quot;any&amp;quot; is in sentences like Pick any card.</Paragraph>
      <Paragraph position="8"> Let me surround this with caveats. This can't be right, if for no other reason than that &amp;quot;any&amp;quot; is surely a more &amp;quot;primitive&amp;quot; notion in language than &amp;quot;random&amp;quot;. Nevertheless, mathematics gives us firm intuitions about &amp;quot;random&amp;quot; and (2) may thus shed light on some linguistic facts.</Paragraph>
      <Paragraph position="9"> Many of the linguistic facts about &amp;quot;any&amp;quot; can be subsumed under two broad characterizations: 1. It requires a &amp;quot;modal&amp;quot; or &amp;quot;nondefinite&amp;quot; context. For example, &amp;quot;John talks to any woman&amp;quot; must be interpreted dispositionally. If we adopt (2), we can see this as deriving from the nature of randomness. It simply does not make sense to say of an actual entity that it is random.</Paragraph>
      <Paragraph position="10"> 2. It normally acts as a universal quantifier outside the scope of the most immediate modal embedder. This is usually the most natural interpretation of &amp;quot;random&amp;quot;.</Paragraph>
      <Paragraph position="11"> Moreover, since &amp;quot;any&amp;quot; extracts a single element, we can make sense out of cases in which &amp;quot;any&amp;quot; fails to act like &amp;quot;every&amp;quot;.  I'll talk to anyone but only to one person.</Paragraph>
      <Paragraph position="12"> * I'll talk to everyone but only to one person. John wants to marry any Swedish woman.</Paragraph>
      <Paragraph position="13"> * John wants to marry every Swedish woman.</Paragraph>
      <Paragraph position="14"> (The second pair is due to Moore \[1973\].) This approach does not, however, seem to offer an especially convincing explanation as to why &amp;quot;any&amp;quot; functions in questions as an existential quantifier.</Paragraph>
    </Section>
    <Section position="3" start_page="58" end_page="58" type="sub_section">
      <SectionTitle>
2.3. Ontological Promiscuity
</SectionTitle>
      <Paragraph position="0"> Davidson \[1967\] proposed a treatment of action sentences in which events are treated as individuals. This facilitated the representation of sentences with adverbials. But virtually every predication that can be made in natural language can be modified adverbially, be specified as to time, function as a cause or effect of something else, constitute a belief, be nominalized, and be referred to pronominally. It is therefore convenient to extend Davidson's approach to all predications, an approach that might be called &amp;quot;ontological promiscuity&amp;quot;. One abandons all ontological scruples. A similar approach is used in many AI systems.</Paragraph>
      <Paragraph position="1"> We will use what might be called a &amp;quot;nominalization&amp;quot; operator for predicates. Corresponding to every n-ary predicate p there will be an n+1-ary predicate p' whose first argument can be thought of as a condition of p's being true of the subsequent arguments. Thus, if &amp;quot;see(J,B)&amp;quot; means that John sees Bill, &amp;quot;see'(E,J,B)&amp;quot; will mean that E is John's seeing of Bill. For the purposes of this paper, we can consider that the primed and unprimed predicates are related by the following axiom schema:</Paragraph>
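      <Paragraph> A minimal reconstruction of such a schema (the original (3) may be formulated somewhat differently, for instance with an existence condition on e):
$$(3)\quad (\forall x)\bigl(p(x) \equiv (\exists e)\, p'(e,x)\bigr)$$
</Paragraph>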
      <Paragraph position="3"> It is beyond the scope of this paper to elaborate on the approach further, but it will be assumed, and taken to extremes, in the remainder of the paper. Let me illustrate the extremes to which it will be taken. Frequently we want to refer to the condition of two predicates p and q holding simultaneously of x. For this we will refer to the entity e such that</Paragraph>
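      <Paragraph> (in reconstructed notation, following the gloss in the next sentence)
$$\mathit{and}'(e, e_1, e_2) \wedge p'(e_1, x) \wedge q'(e_2, x)$$
</Paragraph>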
      <Paragraph position="5"> Here e1 is the condition of p being true of x, e2 is the condition of q being true of x, and e the condition of the conjunction being true.</Paragraph>
    </Section>
    <Section position="4" start_page="58" end_page="60" type="sub_section">
      <SectionTitle>
2.4. The Scope-Neutral Representation
</SectionTitle>
      <Paragraph position="0"> We will assume that a set has a typical element and that the logical form for a plural noun phrase will include reference to a set and its typical element. 2 The linguistic intuition 2 Woods \[1978\] mentions something like this approach, but rejects it because difficulties that are worked out here would have to be worked out.</Paragraph>
      <Paragraph position="1"> behind this idea is that one can use singular pronouns and definite noun phrases as anaphors for plurals. Definite and indefinite generics can also be understood as referring to the typical element of a set.</Paragraph>
      <Paragraph position="2"> In the spirit of ontological promiscuity, we simply assume that typical elements of sets are things that exist, and encode in meaning postulates the necessary relations between a set's typical element and its real elements. This move amounts to reifying the universally quantified variable. The typical element of s will be referred to as τ(s).</Paragraph>
      <Paragraph position="3"> There are two very nearly contradictory properties that typical elements must have. The first is the equivalent of universal instantiation; real elements should inherit the properties of the typical element. The second is that the typical element cannot itself be an element of the set, for that would lead to cardinality problems. The two together would imply the set has no elements. 3 We could get around this problem by positing a special set of predicates that apply to typical elements and are systematically related to the predicates that apply to real elements. This idea should be rejected as being ad hoc, if aid did not come to us from an unexpected quarter -- the notion of &amp;quot;grain size&amp;quot;.</Paragraph>
      <Paragraph position="4"> When utterances predicate, it is normally at some degree of resolution, or &amp;quot;grain&amp;quot;. At a fairly coarse grain, we might say that John is at the post office -- &amp;quot;at(J,PO)&amp;quot;. At a more refined grain, we have to say that he is at the stamp window -- &amp;quot;at(J,SW)&amp;quot;. We normally think of grain in terms of distance, but more generally we can move from entities at one grain to entities at a coarser grain by means of an arbitrary partition. Fine-grained entities in the same equivalence class are indistinguishable at the coarser grain. Given a set S, consider the partition that collapses all elements of S into one element and leaves everything else unchanged. We can view the typical element of S as the set of real elements seen at this coarser grain -- a grain at which, precisely, the elements of the set are indistinguishable. Formally, we can define an operator σ which takes a set and a predicate as its arguments and produces what will be referred to as an &amp;quot;indexed predicate&amp;quot;:</Paragraph>
      <Paragraph position="6"> We will frequently abbreviate this &amp;quot;p_s&amp;quot;. Note that predicate indexing gets us out of the above 3 An alternative approach would be to say that the typical element is in fact one of the real elements of the set, but that we will never know which one, and that furthermore, we will never know about the typical element any property that is not true of all the elements. This approach runs into technical difficulties involving the empty set.</Paragraph>
      <Paragraph position="7">  contradiction, for now &amp;quot;τ(s) ∈_s s&amp;quot; is not only true but tautologous.</Paragraph>
      <Paragraph position="8"> We are now in a position to state the properties typical elements should have. The first implements universal instantiation:  (4) (∀s,y) p_s(τ(s)) &amp; y∈s -&gt; p(y) (5) (∀s)(\[(∀x∈s) p(x)\] -&gt; p_s(τ(s)))  That is, the properties of the typical element at the coarser grain are also the properties of the real elements at the finer grain, and the typical element has those properties that all the real elements have.</Paragraph>
      <Paragraph position="9"> Note that while we can infer a property from set membership, we cannot infer set membership from a property. That is, the fact that p is true of a typical element of a set s and p is true of an entity y does not imply that y is an element of s. After all, we will want &amp;quot;three men&amp;quot; to refer to a set, and to be able to infer from y's being in the set the fact that y is a man. But we do not want to infer from y's being a man that y is in the set. Nevertheless, we will need a notation for expressing this stronger relation among a set, a typical element, and a defining condition. In particular, we need it for representing &amp;quot;every man&amp;quot;. Let us develop the notation from the standard notation for intensionally defined sets,  (6) s = {x | p(x)}, by performing a fairly straightforward, though ontologically promiscuous, syntactic translation on it. First, instead of viewing x as a universally quantified variable, let us treat it as the typical element of s. Next, as a way of getting a handle on &amp;quot;p(x)&amp;quot;, we will use the nominalization operator to reify it, and refer to the condition e of p (or p_s) being true of the typical element x of s -- &amp;quot;p_s'(e,x)&amp;quot;. Expression (6) can then be translated into the following flat predicate-argument form: (7) set(s,x,e) &amp; p_s'(e,x)  This should be read as saying that s is a set whose typical element is x and which is defined by condition e, which is the condition of p (interpreted at the level of the typical element) being true of x. The two critical properties of the predicate &amp;quot;set&amp;quot; which make (7) equivalent to  (6) are the following: (8) (∀s,x,e,y) set(s,x,e) &amp; p_s'(e,x) &amp; p(y) -&gt; y∈s (9) (∀s,x,e) set(s,x,e) -&gt; x = τ(s) Axiom schema (8) tells us that if an entity y has the defining property p of the set s, then y is an element of s. Axiom (9), along with axiom schemas (4) and (3), tells us that an element of a set has  the set's defining property.</Paragraph>
      <Paragraph position="10"> With what we have, we can represent the distinction between the distributive and collective readings of a sentence like (10) The men lifted the piano.</Paragraph>
      <Paragraph position="11"> For the collective reading the representation would include &amp;quot;lift(m)&amp;quot; where m is the set of men. For the distributive reading, the representation would have &amp;quot;lift(τ(m))&amp;quot;, where τ(m) is the typical element of the set m. To represent the ambiguity of (10), we could use the device suggested in Hobbs \[1982\] for prepositional phrase and other ambiguities, and write &amp;quot;lift(x) &amp; (x=m v x=τ(m))&amp;quot;.</Paragraph>
      <Paragraph position="12"> This approach involves a more thorough use of typical elements than two previous approaches. Webber \[1978\] admitted both set and prototype (my typical element) interpretations of phrases like &amp;quot;each man&amp;quot; in order to have antecedents for both &amp;quot;they&amp;quot; and &amp;quot;he&amp;quot;, but she maintained a distinction between the two. Essentially, she treated &amp;quot;each man&amp;quot; as ambiguous, whereas the present approach makes both the typical element and the set available for subsequent reference. Mellish \[1980\] uses typical elements strictly as an intermediate representation that must be resolved into more standard notation by the end of processing. He can do this because he is working in a task domain -- physics problems -- in which sets are not just finite but small, and vagueness as to their composition must be resolved. Webber did not attempt to use typical elements to derive a scope-neutral representation; Mellish did so only in a limited way.</Paragraph>
      <Paragraph position="13"> Scope dependencies can now be represented as relations among typical elements. Consider the sentence (11) Most men love several women, under the reading in which there is a different set of women for each man. We can define a dependency function f which for each man returns the set of women whom that man loves.</Paragraph>
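      <Paragraph> Written out (a reconstruction using the predicates of the example), such a function would be
$$f(x) = \{\, y \mid \mathit{woman}(y) \wedge \mathit{love}(x,y) \,\} \quad \text{for each man } x.$$
</Paragraph>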
      <Paragraph position="15"> The relevant parts of the initial logical form, produced by a syntactic and semantic translation component, for sentence (11) will be</Paragraph>
      <Paragraph position="17"> where m1 is the set of all men, m the set of most of them referred to by the noun phrase &amp;quot;most men&amp;quot;, and w the set referred to by the noun phrase &amp;quot;several women&amp;quot;, and where &amp;quot;man1 = σ(m1,man)&amp;quot; and &amp;quot;woman1 = σ(w,woman)&amp;quot;. When the inferencing component discovers there is a different set w for each element of the set m, w can be viewed as referring to the typical element of this set of sets:</Paragraph>
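      <Paragraph> (reconstructed)
$$w = \tau(\{\, f(x) \mid x \in m \,\})$$
</Paragraph>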
      <Paragraph position="19"> To eliminate the set notation, we can extend the definition of the dependency function to the typical element of m as follows:</Paragraph>
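      <Paragraph> (reconstructed from the gloss in the next sentence)
$$f(\tau(s)) = \tau(\{\, f(x) \mid x \in s \,\})$$
</Paragraph>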
      <Paragraph position="21"> That is, f maps the typical element of a set into the typical element of the set of images under f of the elements of the set. From here on, we will consider all dependency functions so extended to the typical elements of their domains.</Paragraph>
      <Paragraph position="22"> The identity &amp;quot;w = f(τ(m))&amp;quot; now simultaneously encodes the scoping information and involves only existentially quantified variables denoting individuals in an (admittedly ontologically promiscuous) domain. Expressions like (12) are thus the scope-neutral representation, and scoping information is added by conjoining such identities.</Paragraph>
      <Paragraph position="23"> Let us now consider several examples in which processes of interpretation result in the acquisition of scoping information. The first will involve interpretation against a small model. The second will make use of world knowledge, while the third illustrates the treatment of embedded quantifiers.</Paragraph>
      <Paragraph position="24"> First the simple, and classic, example.</Paragraph>
      <Paragraph position="25"> (13) Every man loves some woman.</Paragraph>
      <Paragraph position="26"> The initial logical form for this sentence includes the following: love1(τ(ms),w) &amp; man1(τ(ms)) &amp; woman(w) where &amp;quot;love1 = σ(ms,λx\[love(x,w)\])&amp;quot; and &amp;quot;man1 = σ(ms,man)&amp;quot;. Figure 1 illustrates two small models of this sentence. M is the set of men {A,B}, W is the set of women {X,Y}, and the arrows signify love. Let us assume that the process of interpreting this sentence is just the process of identifying the existentially quantified variables ms and w and possibly coercing the predicates, in a way that makes the sentence true. 4  In Figure 1(a), &amp;quot;love(A,X)&amp;quot; and &amp;quot;love(B,X)&amp;quot; are both true, so we can use axiom schema (5) to derive &amp;quot;love1(τ(M),X)&amp;quot;. Thus, the identifications &amp;quot;ms = M&amp;quot; and &amp;quot;w = X&amp;quot; result in the sentence being true.</Paragraph>
      <Paragraph position="27"> In Figure 1(b), &amp;quot;love(A,X)&amp;quot; and &amp;quot;love(B,Y)&amp;quot; are both true, but since these predications differ</Paragraph>
    </Section>
  </Section>
  <Section position="4" start_page="60" end_page="62" type="metho">
    <SectionTitle>
4 Bobrow and Webber \[1980\] similarly show scoping
</SectionTitle>
    <Paragraph position="0"> information acquired by interpretation against a small model.</Paragraph>
    <Paragraph position="1"> in more than one argument, we cannot apply axiom schema (5). First we define a dependency function f, mapping each man into a woman he loves, yielding &amp;quot;love(A,f(A))&amp;quot; and &amp;quot;love(B,f(B))&amp;quot;. We can now apply axiom schema (5) to derive &amp;quot;love2(τ(M),f(τ(M)))&amp;quot;, where &amp;quot;love2 = σ(M,λx\[love(x,f(x))\])&amp;quot;. Thus, we can make the sentence true by identifying ms with M and w with f(τ(M)), and by coercing &amp;quot;love&amp;quot; to &amp;quot;love2&amp;quot; and &amp;quot;woman&amp;quot; to &amp;quot;σ(W,woman)&amp;quot;. In each case we see that the identification of w is equivalent to solving the scope ambiguity problem.</Paragraph>
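    <Paragraph> The interpretation procedure just illustrated is mechanical enough to sketch in code. The sketch below is illustrative only (the paper gives no implementation); the function and variable names are invented, and coercion is reduced to computing a dependency function and checking whether it is constant.

# A toy interpreter for "Every man loves some woman" against a small model,
# in the spirit of the discussion of Figure 1.  If every man loves one and
# the same woman, w can be identified with that individual (wide-scope
# "some"); otherwise w is identified with the typical element of the set of
# images of the dependency function f (narrow-scope "some").

def interpret(men, women, loves):
    """men, women: sets of individuals; loves: set of (man, woman) pairs."""
    # Dependency function f: for each man, pick some woman he loves.
    f = {}
    for m in men:
        loved = [w for w in women if (m, w) in loves]
        if not loved:
            return None          # the sentence is false in this model
        f[m] = loved[0]          # any witness will do for this sketch

    images = set(f.values())
    if len(images) == 1:
        # All predications agree in the second argument, so axiom schema (5)
        # applies directly and w is a single woman (wide-scope reading).
        return ("w =", images.pop())
    # Otherwise w is the typical element of {f(x) | x in men}
    # (narrow-scope reading), written here as a tagged value.
    return ("w = tau of", images)

# Figure 1(a): both men love X      -- w is X
print(interpret({"A", "B"}, {"X", "Y"}, {("A", "X"), ("B", "X")}))
# Figure 1(b): A loves X, B loves Y -- w is tau({X, Y})
print(interpret({"A", "B"}, {"X", "Y"}, {("A", "X"), ("B", "Y")}))
</Paragraph>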
    <Paragraph position="2"> In our subsequent examples we will ignore the indexing on the predicates, until it must be mentioned in the case of embedded quantifiers.</Paragraph>
    <Paragraph position="3"> Next consider an example in which world knowledge leads to disambiguation: Three women had a baby.</Paragraph>
    <Paragraph position="4"> Before inferencing, the scope-neutral representation is had(τ(ws),b) &amp; |ws|=3 &amp; woman(τ(ws)) &amp; baby(b) Let us suppose the inferencing component has axioms about the functionality of having a baby -- something like (∀x,y) had(x,y) -&gt; x = mother-of(y) and that we know about cardinality the fact that for any function g and set s, |g(s)| ≤ |s| Then we know the following: 3 = |ws| = |mother-of(b)| ≤ |b| This tells us that b cannot be an individual but must be the typical element of some set. Let f be a dependency function such that w∈ws &amp; f(w) = x -&gt; had(w,x) that is, a function that maps each woman into some baby she had. Then we can identify b with f(τ(ws)), or equivalently, with τ({f(w) | w∈ws}), giving us the correct scope. Finally, let us return to interpretation with respect to small models to see how embedded quantifiers are represented. Consider (14) Every representative of a company arrived. The initial logical form includes</Paragraph>
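    <Paragraph> The form itself can be reconstructed from the gloss in the next sentence (the predicate names rep, of, and and' are assumptions):
$$\mathit{arrive}(r) \wedge \mathit{set}(rs, r, e_a) \wedge \mathit{and}'(e_a, e_1, e_2) \wedge \mathit{rep}'(e_1, r) \wedge \mathit{of}'(e_2, r, c) \wedge \mathit{company}(c)$$
</Paragraph>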
    <Paragraph position="6"> That is, r arrives, where r is the typical element of a set rs defined by the conjunction ea of r's being a representative and r's being of c, where c is a company. We will consider the two models in Figure 2, in which R is the set of representatives {A,B,(C)}, K is the set of companies {X,Y,(Z,W)}, there is an arrow from the representatives to the companies they represent, and the representatives who arrived are circled.</Paragraph>
    <Paragraph position="7">  In Figure 2(a), &amp;quot;of(A,X)&amp;quot;, &amp;quot;of(B,Y)&amp;quot; and &amp;quot;of(B,Z)&amp;quot; are true. Define a dependency function f to map A into X and B into Y. Then &amp;quot;of(A,f(A))&amp;quot; and &amp;quot;of(B,f(B))&amp;quot; are both true, so that &amp;quot;of(τ(R),f(τ(R)))&amp;quot; is also true. Thus we have the following identifications: c = f(τ(R)) = τ({X,Y}), rs = R, r = τ(R) In Figure 2(b) &amp;quot;of(B,Y)&amp;quot; and &amp;quot;of(C,Y)&amp;quot; are both true, so &amp;quot;of(τ(R1),Y)&amp;quot; is also. Thus we may let c be Y and rs be R1, giving us the wide reading for &amp;quot;a company&amp;quot;.</Paragraph>
    <Paragraph position="8"> In the case where no one represents any company and no one arrived, we can let c be anything and rs be the empty set. Since, by the definition of σ, any predicate indexed by the empty set will be true of the typical element of the empty set, &amp;quot;arrive_∅(τ(∅))&amp;quot; will be true, and the sentence will be satisfied.</Paragraph>
    <Paragraph position="9"> It is worth pointing out that this approach solves the problem of the classic &amp;quot;donkey sentences&amp;quot;. If in sentence (14) we had had the verb phrase &amp;quot;hates it&amp;quot;, then &amp;quot;it&amp;quot; would be resolved to c, and thus to whatever c was resolved to.</Paragraph>
    <Paragraph position="10"> So far the notation of typical elements and dependency functions has been introduced; it has been shown how scope information can be represented by these means; and an example of inferential processing acquiring that scope information has been given. Now the precise relation of this notation to standard notation must be specified. This can be done by means of an algorithm that takes the inferential notation, together with an indication of which proposition is asserted by the sentence, and produces in the conventional form all of the readings consistent with the known dependency information.</Paragraph>
    <Paragraph position="11"> First we must put the sentence into what will be called a &amp;quot;bracketed notation&amp;quot;. We associate with each variable v an indication of the corresponding quantifier; this is determined from such pieces of the inferential logical form as those involving the predicates &amp;quot;set&amp;quot; and &amp;quot;most&amp;quot;; in the algorithm below it is referred to as &amp;quot;Quant(v)&amp;quot;. The translation of the remainder of the inferential logical form into bracketed notation is best shown by example. For the sentence A representative of every company saw a sample the relevant parts of the inferential logical form are</Paragraph>
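    <Paragraph> A plausible reconstruction of those parts, using the predicate names that appear in the bracketed form (18) below (in particular, the marking of the universal "every company" by the set predicate is an assumption):
$$\mathit{see}(r,s) \wedge \mathit{rep}(r) \wedge \mathit{of}(r,c) \wedge \mathit{set}(cs, c, e) \wedge \mathit{co}'(e, c) \wedge \mathit{sample}(s)$$
</Paragraph>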
    <Paragraph position="13"> where &amp;quot;see(r,s)&amp;quot; is asserted. This is translated in a straightforward way into (18) see(\[r | rep(r) &amp; of(r,\[c | co(c)\])\], \[s | sample(s)\]) This may be read &amp;quot;An r such that r is a representative and r is of a c such that c is a company sees an s such that s is a sample.&amp;quot;</Paragraph>
    <Paragraph position="14"> The nondeterministic algorithm below generates all the scopings from the bracketed notation. The function TOPBVS returns a list of all the top-level bracketed variables in Form, that is, all the bracketed variables except those within the brackets of some other variable -- in (18) r and s but not c. BRANCH nondeterministically generates a separate process for each element in a list it is given as argument. A four-part notation is used for quantifiers (similar to that of Woods \[1978\]) -- &amp;quot;(quantifier variable restriction body)&amp;quot;.  In this algorithm the first BRANCH corresponds to the choice in ordering the top-level quantifiers. The variable chosen will get the narrowest scope. The second BRANCH corresponds to the decision of whether or not to give an embedded quantifier a wide reading. The choice R corresponds to a wide reading, G(R) to a narrow reading. The third BRANCH corresponds to the decision of how wide a reading to give to an embedded quantifier.</Paragraph>
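    <Paragraph> As an illustration only, the following sketch generates readings from a bracketed form of this kind. It is deliberately partial: it covers just the ordering of top-level bracketed variables, always giving quantifiers embedded in a restriction narrow scope (the wide-reading BRANCHes are omitted), and all of the names used (BVar, scopings, wrap) are invented rather than taken from the paper.

# A simplified generator of scopings from the bracketed notation.
# A form is a tuple (pred, arg, ...); an argument is either a plain variable
# name or a BVar, i.e. a bracketed variable [v | restriction] with its
# quantifier attached.

from dataclasses import dataclass
from itertools import permutations

@dataclass(frozen=True)
class BVar:
    quant: str          # e.g. "exists", "every"
    var: str            # e.g. "r"
    restriction: tuple  # a form, itself possibly containing BVars

def top_bvars(form):
    """Bracketed variables of a form not inside another variable's brackets."""
    found = []
    for a in form[1:]:
        if isinstance(a, BVar):
            found.append(a)
        elif isinstance(a, tuple):
            found.extend(top_bvars(a))
    return found

def strip(form):
    """Replace every top-level bracketed variable by its plain variable name."""
    args = []
    for a in form[1:]:
        if isinstance(a, BVar):
            args.append(a.var)
        elif isinstance(a, tuple):
            args.append(strip(a))
        else:
            args.append(a)
    return (form[0],) + tuple(args)

def scopings(form):
    """Yield readings as nested (quantifier, variable, restriction, body) tuples."""
    bvs = top_bvars(form)
    if not bvs:
        yield form
        return
    body = strip(form)
    for order in permutations(bvs):          # choice of quantifier ordering
        yield from wrap(order, body)

def wrap(order, body):
    if not order:
        yield body
        return
    bv, rest = order[0], order[1:]
    for restr in scopings(bv.restriction):   # embedded quantifiers kept narrow
        for inner in wrap(rest, body):
            yield (bv.quant, bv.var, restr, inner)

# (18): see([r | rep(r) and of(r, [c | co(c)])], [s | sample(s)])
c = BVar("every", "c", ("co", "c"))
r = BVar("exists", "r", ("and", ("rep", "r"), ("of", "r", c)))
s = BVar("exists", "s", ("sample", "s"))
for reading in scopings(("see", r, s)):
    print(reading)
</Paragraph>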
    <Paragraph position="15"> Dependency constraints can be built into this algorithm by restricting the elements of its argument that BRANCH can choose. If the variables x and y are at the same level and y is dependent on x, then the first BRANCH cannot choose x. If y is embedded under x and y is dependent on x, then the second BRANCH must choose G(R). In the third BRANCH, if any top-level bracketed variable in Form is dependent on any variable one level of recursion up, then G(Form) must be chosen.</Paragraph>
    <Paragraph position="16"> A fuller explanation of this algorithm and several further examples of the use of this notation are given in a longer version of this paper.</Paragraph>
  </Section>
  <Section position="5" start_page="62" end_page="62" type="metho">
    <SectionTitle>
3. Other Determiners
</SectionTitle>
    <Paragraph position="0"> The approach of Section 2 will not work for monotone decreasing determiners, such as &amp;quot;few&amp;quot; and &amp;quot;no&amp;quot;. Intuitively, the reason is that the sentences they occur in make statements about entities other than just those in the sets referred to by the noun phrase. Thus, Few men work.</Paragraph>
    <Paragraph position="1"> is more a negative statement about all but a few of the men than a positive statement about few of them. One possible representation would be similar to (1), but with the implication reversed.</Paragraph>
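    <Paragraph> Such a representation might look like this (a reconstruction based on the description above):
$$(\exists s)\bigl(Q(s,\{x \mid P(x)\}) \wedge (\forall y)(P(y) \wedge R(y) \supset y \in s)\bigr)$$
</Paragraph>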
    <Paragraph position="3"> This is unappealing, however, among other things, because the predicate P occurs twice, making the relation between sentences and logical forms less direct.</Paragraph>
    <Paragraph position="4"> Another approach would take advantage of the above intuition about what monotone decreasing determiners convey.</Paragraph>
    <Paragraph position="6"> That is, we convert the sentence into a negative assertion about the complement of the noun phrase, reducing this case to the monotone increasing case. For example, &amp;quot;few men work&amp;quot; would be represented as follows:</Paragraph>
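    <Paragraph> One way to spell this out (a reconstruction, not the paper's own formula) is to require that every man outside a witness set s fail to work:
$$(\exists s)\bigl(\mathit{few}(s,\{x \mid \mathit{man}(x)\}) \wedge (\forall y)(\mathit{man}(y) \wedge y \notin s \supset \neg\, \mathit{work}(y))\bigr)$$
</Paragraph>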
    <Paragraph position="8"> (This formulation is equivalent to, but not identical with, Barwise and Cooper's \[1981\] witness set condition for monotone decreasing determiners.) Some determiners are neither monotone increasing nor monotone decreasing, but Barwise and Cooper conjecture that it is a linguistic universal that all such determiners can be expressed as conjunctions of monotone determiners. For example, &amp;quot;exactly three&amp;quot; means &amp;quot;at least three and at most three&amp;quot;. If this is true, then they all yield to the approach presented here.</Paragraph>
    <Paragraph position="9"> Moreover, because of redundancy, only two new conjuncts would be introduced by this method.</Paragraph>
  </Section>
</Paper>