<?xml version="1.0" standalone="yes"?>
<Paper uid="J90-4002">
  <Title>SENTENTIAL SEMANTICS FOR PROPOSITIONAL ATTITUDES</Title>
  <Section position="3" start_page="0" end_page="0" type="metho">
    <SectionTitle>
1.1 KAPLAN'S ANALYSIS OF DE RE BELIEF
REPORTS
</SectionTitle>
    <Paragraph position="0"> Noun phrases in the scope of attitude verbs commonly have an ambiguity between de re and de dicto readings. Consider the example &amp;quot;John believes that Miss America is bald&amp;quot; (Dowty, Wall, and Peters 1981). Under the de re reading of &amp;quot;Miss America,&amp;quot; this sentence says that John has a belief about a woman who in fact is Miss America, but it doesn't imply that John realizes she is Miss America.</Paragraph>
    <Paragraph position="1"> A sentential theorist might say that the sentence tells us that John has a belief containing some name that denotes Miss America, but it doesn't tell us what name. The other reading, called de dicto, says that John believes that whoever is Miss America is bald. The de dicto reading, unlike the de re, does not imply that anyone actually is Miss America--it could be true if the Miss America pageant closed down years ago, while John falsely supposes that someone still holds that title.</Paragraph>
    <Paragraph position="2"> Kaplan (1975) considered examples like these. He said that an agent may use many names that denote the same entity, but there is a subset of those names that represent the entity to the agent (this use of &amp;quot;represent&amp;quot; is different from the common use in AI). If an agent has a de re belief about an entity x, that belief must be a sentence containing, not just any term that denotes x, but a term that represents x to the agent. Thus if &amp;quot;person0&amp;quot; is a name that represents Miss America to John, and the thought language sentence &amp;quot;bald(person0)&amp;quot; is one of John's beliefs, then the sentence &amp;quot;John thinks Miss America is bald&amp;quot; is true (under the de re reading).</Paragraph>
    <Paragraph position="3"> Kaplan said that a name represents an entity to an agent if, first, it denotes that entity; second, it is sufficiently vivid; and, finally, there is a causal connection between the entity and the agent's use of the name. A name N is vivid to an agent if that agent has a collection of beliefs that mention N and give a good deal of relevant information about the denotation of N. What is relevant may depend on the agent's interests.</Paragraph>
    <Paragraph position="4"> Other authors have accepted the idea of a distinguished subset of names while offering different proposals about how these names are distinguished. I have argued that the distinguished names must provide information that the agent needs to achieve his or her current goals (Haas 1986). Konolige (1986) proposed that for each agent and each entity, the set of distinguished names has exactly one member. In this paper, we adopt Kaplan's term &amp;quot;represent&amp;quot; without necessarily adopting his analysis of the notion. We assume that representation is a relation between an agent, a name, and the entity that the name denotes. If an agent has an attitude toward a thought-language sentence, and that sentence contains a name that represents a certain entity to the agent, then the agent has a de re attitude about that entity. Our grammar will build logical forms that are compatible with any sentential theory that includes these assumptions.</Paragraph>
    <Paragraph position="5"> One problem about the nature of representation should be mentioned. This concerns the so-called de se attitude reports. This term is attributable to Lewis (1979), but the clearest definition is from B6er and Lycan (1986). De se attitudes are &amp;quot;attitudes whose content would be formulated by the subject using the equivalent in his or her language of the first-person singular pronoun 'I' &amp;quot; (B6er and Lycan 1986). If John thinks that he is wise, and we understand this a,; a de se attitude, what name represents John to himself? One possibility is that it is his selfname. An agent's selfname is a thought-language constant that he standardly uses to denote himself. It was postulated in Haa,; (1986) in order to solve certain problems about planning to acquire information. To expound and defend this idea would take us far from the problems of compositional semantics that concern us here. We simply mention it as an example of the kinds of theories that are compatible with the logical forms built by our grammar. See also Rapal:,ort (1986) for another AI approach to de se attitudes. null</Paragraph>
  </Section>
  <Section position="4" start_page="0" end_page="0" type="metho">
    <SectionTitle>
1.2 COMPOSITIONAL SEMANTICS AND LOGICAL
FORMS
</SectionTitle>
    <Paragraph position="0"> Consider the logical form that Kaplan assigns for the de re reading of &amp;quot;John believes that some man loves Mary.&amp;quot;</Paragraph>
    <Paragraph position="2"> The notation is a slight modification of Kaplan's (Kaplan 1975). The predicate letter R denotes representation. The symbol α is a special variable ranging over names. The symbols ⌜ and ⌝ are Quine's quasi-quotes (Quine 1947). If α denotes a name t, then the expression ⌜love(α,mary)⌝ will denote the sentence &amp;quot;love(t,mary).&amp;quot; It is hard to see how a compositional semantics can build this representation from the English sentence &amp;quot;John believes some man loves Mary.&amp;quot; The difficult part is building the representation for the VP &amp;quot;believes some man loves Mary.&amp;quot; By definition, a compositional semantics must build the representation from the representations of the constituents of the VP: the verb &amp;quot;believe&amp;quot; and the embedded clause. Following Cooper's notion of quantifier storage (Cooper 1983), we assume that the representation of the embedded clause has two parts: the wff &amp;quot;love(y,mary)&amp;quot; and an existential quantifier that binds the free variable y. Informally, we can write the quantifier as &amp;quot;some(y,man(y) &amp; S),&amp;quot; where S stands for the scope of the quantifier. Applying this quantifier to the wff &amp;quot;love(y,mary)&amp;quot; gives the sentence &amp;quot;some(y,man(y) &amp; love(y,mary)).&amp;quot; In the present paper, the term &amp;quot;quantifier&amp;quot; will usually refer to this kind of object--not to the symbols ∀ and ∃ of first-order logic, nor to the generalized quantifiers of Barwise and Cooper (1981).</Paragraph>
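To make the storage step concrete, here is a minimal sketch in Python of discharging a stored quantifier against the wff it scopes over. The tuple encoding of wffs is our own illustrative assumption, not the paper's notation:

```python
# A stored quantifier is (quantifier, variable, restriction); discharging
# it against a scope wff S builds the three-place quantifier wff
# some(y, man(y), S) described in the text. Encoding is an assumption.

def discharge(stored_quantifier, scope_wff):
    quant, var, restriction = stored_quantifier
    return (quant, var, restriction, scope_wff)

store = ('some', 'y', ('man', 'y'))           # from the NP "some man"
print(discharge(store, ('love', 'y', 'mary')))
# ('some', 'y', ('man', 'y'), ('love', 'y', 'mary'))
```

The clause's semantic contribution is thus a wff with a free variable plus the stored quantifier that binds it, exactly the pair the text describes.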
    <Paragraph position="3"> Computational Linguistics Volume 16, Number 4, December 1990. Andrew R. Haas, Sentential Semantics for Propositional Attitudes. In Section 2.2 we present a more precise formulation of our representation of quantifiers.</Paragraph>
    <Paragraph position="4"> When the clause &amp;quot;some man loves Mary&amp;quot; forms an utterance by itself, the semantics will apply the quantifier to the wff &amp;quot;love(y,mary)&amp;quot; to get the sentence &amp;quot;some (y,man(y) &amp; love(y,mary)).&amp;quot; The problem is that the wff &amp;quot;love(y,mary)&amp;quot; does not appear in Kaplan's representation. In its place is the expression &amp;quot;love(a,mary),&amp;quot; containing a variable that ranges over names, not men. It might be possible to build this expression from the wff &amp;quot;love(y,mary),&amp;quot; but this sounds like a messy operation at best. Similar problems would arise if we chose another quotation device (such as the one in Haas 1986) or another scoping mechanism (as in Pereira and Shieber 1987).</Paragraph>
    <Paragraph position="5"> Konolige (1986) proposed a very different notation for quantifying in, one that would abolish the difficulty described here. His proposal depends on an ingenious non-standard logic. Unfortunately, Konolige's system has two important limitations. First, he forbids a belief operator to appear in the scope of another belief operator. Thus, he rules out beliefs about beliefs, which are common in everyday life. Second, he assumes that each agent assigns to every known entity a unique &amp;quot;id constant.&amp;quot; When an agent has a belief about an object x, that belief contains the id constant for x. Using Kaplan's terminology, Konolige is saying that for any entity x and agent y, there is a unique α such that R(α,x,y). Kaplan never suggests that representation has this property, and as Moore (1988) pointed out, the claim is hard to believe. Surely an agent can have many names for an entity, some useful for one purpose and some for another. Why should one of them be the unique id constant? We will propose a notation that has the advantages of Konolige's notation without its limitations. Section 1.3 will present the new notation. In Section 1.4, we return to the problem of building logical forms for English sentences.</Paragraph>
  </Section>
  <Section position="5" start_page="0" end_page="0" type="metho">
    <SectionTitle>
1.3 A NEW NOTATION FOR QUANTIFYING IN
</SectionTitle>
    <Paragraph position="0"> Our logical forms are sentences in a first-order logic augmented with a quotation operator. We call this language the target language. Since the grammar is a set of definite clauses, our notation is like Prolog's. The variables of the target language are u, v, w, x, y, z, etc. Constants, function letters, and atomic wffs are defined in the usual way. If p and q are wffs, then not(p), and(p,q), and or(p,q) are wffs.</Paragraph>
    <Paragraph position="1"> Ifp and q are wffs, x a variable, and t a term, the following  are wffs: (2) some(x,p,q) (3) all(x,p,q) (4) unique(x,p,q) (5) let(x,t,p)  The first wff is true iffp and q are both true for some value of x. The second is true iff q is true for all values of x that make p true. The third is true iff there is exactly one value ofx that makes p true, and q is true for that value ofx. The last wit is true iff p is true when the value of x is set to the value of t. This language should be extended to include the iota operator, forming definite descriptions, since a definite description may often represent an entity to an agent. However, we omit definite descriptions for the time being. Fcr any expression e of the target language, q(e) is a constant of the target language. Therefore we have a countable infinity of constants. The intended models of our language are all first-order models in which the domain of discourse includes every expression of the language, and each constant q(e) has the expression e as its denotation. Models of this kind are somewhat unusual, but they are perfectly consistent with standard definitions of first-order logic, which allow the universe of discourse to be any nonempty set (Enderton 1972). Our language does depart from standard logic in one way. We allow a variable to appear inside a constant--for example, since v is a variable, q(v) is a constant that denotes the variable v. Enderton explicitly forbids this: &amp;quot;no symbol is a finite sequence of other symbols&amp;quot; (p. 68). However, allowing a variable to appear inside a constant is harmless, as long as we are careful about the definition of a free occurrence of a variable. We modify Enderton's definition (p. 75) by changing his first clause, which defines free occurrences of a variable in an atomic wff. 
We say instead that a variable v appears free in variable w iff v = w; no variable occurs free in any constant; a variable v occurs free in the term f(t1 ... tn) iff it occurs free in one of t1 ... tn; and v occurs free in the atomic wff p(t1 ... tm) iff it occurs free in one of t1 ... tm. Under this definition x does not occur free in the constant q(red(x)), although it does occur free in the wff red(x).</Paragraph>
    <Paragraph position="2"> As usual in a sentential theory of attitudes, we assume that an agent's beliefs are sentences of thought language stored in the head, and that knowledge consists of a subset of those sentences. Then simple belief is a relation between an agent and a sentence of thought language. To represent de re belief reports, we introduce a predicate of three arguments, and we define its extension in terms of simple belief and the notion of representation. If [p,l,w] is a triple in the extension of the predicate &amp;quot;believe,&amp;quot; then p is the agent who has the belief, l is a list of entities x1 ... xn that the belief is about, and w is a wff of the target language. The free variables in w will stand for unspecified terms that represent the entities x1 ... xn to the agent p. These free variables are called dummy variables. If John believes of Mary that she is a fool, then, using Prolog's notation for lists we write (6) believe(john,[mary],q(fool(x))).</Paragraph>
    <Paragraph position="3"> The constant &amp;quot;mary&amp;quot; is called a de re argument of the predicate &amp;quot;believe.&amp;quot; The free occurrence of x in fool(x) stands for an unspecified term that represents Mary to John. This means that there is a term t that represents Mary to John, and John believes fool(t), x is the dummy variable for the de re argument &amp;quot;mary.&amp;quot; This notation is inspired by Quine (1975), but we give a semantics quite different from Quine's. Note that the symbol &amp;quot;believe&amp;quot; is Computational Linguistics Volume 16, Number 4, December 1990 215 Andrew R. Haas Sentential Semantics for Propositional Attitudes an ordinary predicate letter, not a special operator* This is a minor technical advantage of the sentential approach: the quotation operator eliminates the need for a variety of special propositional attitude operators* To define this notation precisely, we must have some way of associating dummy variables with the de re arguments* Suppose we have the wff believe(x,\[t 1 ... t,\],q(p)). Let v 1 *.. v, be a list of the free variables ofp in order of their first occurrence. Then v~ will be the dummy variable for t;. In other words, the dummy variable for i-th de re argument will be the i-th free variable ofp. This method of associating dummy variables with de re arguments is somewhat arbitrary--another possibility is to include an explicit list of dummy variables* Our choice will make the notation a little more compact.</Paragraph>
    <Paragraph position="4"> Then the extension of the predicate &amp;quot;believe&amp;quot; is defined as follows. Let s be an agent, [x1 ... xn] a list of entities from the domain of discourse, and p a wff of the target language. Suppose that p has exactly n free variables, and let v1 ... vn be the free variables of p in order of their first occurrence. Suppose that t1 ... tn are closed terms such that ti represents xi to s, for i from 1 to n. Suppose the simple belief relation holds between s and the sentence formed by substituting t1 ... tn for free occurrences of v1 ... vn in p. Then the extension of the predicate &amp;quot;believe&amp;quot; includes the triple containing s, [x1 ... xn], and p.</Paragraph>
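This definition can be sketched as executable Python. The tuple encoding, the variable-naming convention (strings from u..z are variables), and the sample data are all our own illustrative assumptions; the substitution and free-variable clauses follow the modified definition above, under which nothing occurs free inside a quotation constant q(e):

```python
# Sketch: the triple (agent, [x1..xn], p) is in the extension of "believe"
# iff substituting representing terms for the free variables of p, in order
# of first occurrence, yields a sentence the agent simply believes.
from itertools import product

def is_variable(s):
    return isinstance(s, str) and s[0] in 'uvwxyz'   # simplifying assumption

def free_vars(e, order=None):
    """Free variables in order of first occurrence (the dummy-variable order)."""
    order = [] if order is None else order
    if is_variable(e) and e not in order:
        order.append(e)
    elif isinstance(e, tuple) and e[0] != 'q':       # nothing is free under quotation
        for a in e[1:]:
            free_vars(a, order)
    return order

def substitute(e, env):
    if is_variable(e):
        return env.get(e, e)
    if isinstance(e, tuple) and e[0] != 'q':         # substitution stops at quotation
        return (e[0],) + tuple(substitute(a, env) for a in e[1:])
    return e

def believes_de_re(agent, entities, p, represents, simple_beliefs):
    """represents: set of (term, entity, agent) triples R(t, x, s)."""
    vs = free_vars(p)
    if len(vs) != len(entities):
        return False
    options = [[t for (t, x, a) in represents if x == xi and a == agent]
               for xi in entities]                    # candidate representing terms
    return any(substitute(p, dict(zip(vs, choice))) in simple_beliefs
               for choice in product(*options))

represents = {('person1', 'mary', 'john')}
simple_beliefs = {('fool', 'person1')}
print(believes_de_re('john', ['mary'], ('fool', 'x'), represents, simple_beliefs))  # True
```

Run on the example below the definition: since substituting &amp;quot;person1&amp;quot; for x in fool(x) produces fool(person1), which John simply believes, the triple (john, [mary], fool(x)) is in the extension.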
    <Paragraph position="5"> As an example, suppose the term &amp;quot;person1&amp;quot; represents Mary to John, and John believes &amp;quot;fool(person1).&amp;quot; Then, since substituting &amp;quot;person1&amp;quot; for &amp;quot;x&amp;quot; in &amp;quot;fool(x)&amp;quot; produces the sentence &amp;quot;fool(person1),&amp;quot; it follows that (7) believe(john,[mary],q(fool(x))) is true in every intended model where &amp;quot;believe&amp;quot; has the extension defined above. Consider an example with quantifiers: &amp;quot;John believed a prisoner escaped.&amp;quot; The reading with the quantifier inside the attitude is easy: (8) believe(john,[],q(some(x,prisoner(x),escaped(x)))).</Paragraph>
    <Paragraph position="6"> In this case the list of de re arguments is empty. For the &amp;quot;quantifying in&amp;quot; reading we have: (9) some(x,prisoner(x),believe(john,[x],q(escaped(y)))).</Paragraph>
    <Paragraph position="7"> This says that for some prisoner x, John believes of x that he escaped. The dummy variable y in the wff escaped(y) stands for an unspecified term that occurs in one of John's beliefs and represents the prisoner x to John.</Paragraph>
    <Paragraph position="8"> Let us consider nested beliefs, as in the sentence &amp;quot;John believed Bill believed Mary was wise.&amp;quot; Here the de re/de dicto ambiguity gives rise to three readings. One is a straightforward de dicto reading: (10) believe(john,[],q(believe(bill,[],q(wise(mary))))). To understand examples involving nested beliefs, it is helpful to write down the sentence that each agent believes. Since this example does not involve quantifying in, it is easy to write down John's belief--we just take the quotation mark off the last argument of &amp;quot;believe&amp;quot;:</Paragraph>
    <Paragraph position="10"> If this belief of John's is true, then Bill believes (12) wise(mary).</Paragraph>
    <Paragraph position="11"> In the next reading, the name &amp;quot;Mary&amp;quot; is de dicto for John, but de re for Bill: (13) believe(john,[],q(believe(bill,[mary],q(wise(x))))). Here, John is using the constant &amp;quot;mary&amp;quot; to denote Mary, but he does not necessarily think that Bill is using the same constant--he only thinks that some term represents Mary to Bill. The sentence that John believes is (14) believe(bill,[mary],q(wise(x))).</Paragraph>
    <Paragraph position="12"> If John is right, Bill's belief is formed by substituting for the free variable x in &amp;quot;wise(x)&amp;quot; some term that represents Mary to Bill. Suppose this term is &amp;quot;person0&amp;quot;; then Bill's belief would be (15) wise(person0).</Paragraph>
    <Paragraph position="13"> Finally, there is a reading in which &amp;quot;Mary&amp;quot; is de re for both agents: (16) believe(john,[mary],q(believe(bill,[x],q(wise(y))))). Here there is a name that represents Mary to John, and John thinks that there is a name that represents Mary to Bill. Again, John does not necessarily believe that Bill uses the same name that John uses. Suppose &amp;quot;person3&amp;quot; is the term that represents Mary to John; then John's belief would be (17) believe(bill,[person3],q(wise(y))).</Paragraph>
    <Paragraph position="14"> If &amp;quot;pe, rson4&amp;quot; is the term that represents Mary to Bill, then Bill's belief would be (18) wise(person4).</Paragraph>
    <Paragraph position="15"> One might expect a fourth reading, in which &amp;quot;Mary&amp;quot; is de re for John and de dicto for Bill, but our formalism cannot represent such a reading. To see why, let us try to construct a sentence that represents this reading. In our notation a nonempty list of de re arguments represents a de re belief, while an empty list of de re arguments represents a de dicto belief. Therefore the desired sentence should have a nonempty list of de re arguments for John's belief, and an empty list for Bill's belief. This would give (19) believe(john,[mary],q(believe(bill,[],q(wise(x))))) This sentence does not assert that John believes Bill has a de dicto belief about Mary. To see this, consider John's belief. If he uses the constant &amp;quot;person1&amp;quot; to denote Mary, the belief is (20) believe(bill,[],q(wise(x))).</Paragraph>
    <Paragraph position="16"> In forming John's belief we do not substitute &amp;quot;person1&amp;quot; for the occurrence of x under the quotation operator--because by our definitions this is not a free occurrence of x. Thus John's belief says that Bill has a belief containing a free variable, which our theory forbids.</Paragraph>
    <Paragraph position="17"> It is not clear to me whether the desired reading exists in English, so I am not certain if this property of the notation is a bug or a feature. In either case, other notations for describing attitudes have similar properties. For example, in a modal logic of attitudes we use the scope of quantifiers to represent de re/de dicto distinctions. If a quantifier appears in the scope of an attitude operator, we have a de dicto reading, and if it appears outside the scope (while binding a variable inside the scope) we get a de re reading. In a sentence like &amp;quot;John thinks Bill thinks Mary saw a lion,&amp;quot; there are three places to put the existential quantifier: in the scope of Bill's belief operator, in the scope of John's operator but outside Bill's, or outside both. These give the same three readings that our formalism allows. To make &amp;quot;a lion&amp;quot; be de re for John and de dicto for Bill, we would have to put the quantifier outside the scope of John's belief operator, but inside the scope of Bill's belief operator. Since Bill's belief operator is in the scope of John's, that is impossible.</Paragraph>
    <Paragraph position="18"> The same method applies to other attitudes--for example, knowledge. Given a simple knowledge relation, which expresses de dicto readings of sentences with &amp;quot;know,&amp;quot; one can define the predicate &amp;quot;know,&amp;quot; which expresses both de re and de dicto readings. &amp;quot;Know&amp;quot; will take three arguments just as &amp;quot;believe&amp;quot; does.</Paragraph>
    <Paragraph position="19"> Next we consider examples like &amp;quot;John knows who likes Mary,&amp;quot; in which &amp;quot;know&amp;quot; takes a wh noun phrase and a sentence containing a gap. The intuition behind our analysis is that John knows who likes Mary if there is a person s such that John knows that s likes Mary. This is of course a de re belief report, and its logical form should be (21) some(x,person(x),know(john,[x],q(like(y,mary)))).</Paragraph>
    <Paragraph position="20"> As an example, suppose the sentence (22) like(bill,mary) is one of John's beliefs, and it belongs to the subset of beliefs that constitute his knowledge. If the constant &amp;quot;bill&amp;quot; represents Bill to John, then since substituting &amp;quot;bill&amp;quot; for &amp;quot;y&amp;quot; in &amp;quot;like(y,mary)&amp;quot; gives the sentence &amp;quot;like(bill,mary),&amp;quot; we have (23) know(john,[bill],q(like(y,mary))) and therefore (24) some(x,person(x),know(john,[x],q(like(y,mary)))).</Paragraph>
    <Paragraph position="21"> This proposed analysis of &amp;quot;knowing who&amp;quot; is probably too weak. As a counterexample, suppose a night watchman catches a glimpse of a burglar and chases him. Then the night watchman has formed a mental description of the burglar--a description that he might express in English as &amp;quot;the man I just saw sneaking around the building.&amp;quot; The burglar might say to himself, &amp;quot;He knows I'm in here.&amp;quot; This is a de re belief report, so it follows that the night watchman's mental description of the burglar must represent the burglar to the watchman (by our assumption about representation). Yet the night watchman surely would not claim that he knows who is sneaking around the building. It seems that even though the watchman's mental description represents the burglar, it is not strong enough to support the claim that he knows who the burglar is.</Paragraph>
    <Paragraph position="22"> It would be easy to extend our notation to allow for a difference between &amp;quot;knowing who&amp;quot; and other cases of quantification into attitudes. It would be much harder to analyze this difference. Boër and Lycan (1986) have argued that when we say someone knows who N is, we always mean that someone knows who N is for some purpose. This purpose is not explicitly mentioned, so it must be understood from the context of the utterance in which the verb &amp;quot;know&amp;quot; appears. Then the predicate that represents &amp;quot;knowing who&amp;quot; must have an extra argument whose value is somehow supplied by context. These ideas look promising, but to represent this use of context in a grammar is a hard problem, and outside the scope of this work.</Paragraph>
    <Paragraph position="23"> Next we consider intensional transitive verbs like &amp;quot;want,&amp;quot; as in &amp;quot;John wants a Porsche.&amp;quot; The intuition behind the analysis is that this sentence is roughly synonymous with &amp;quot;John wishes that he had a Porsche&amp;quot;--under a reading in which &amp;quot;he&amp;quot; refers to John. Then the logical form would be (25) wish(john,[],q(some(x,porsche(x),have(john,x)))) for a de dicto reading, and (26) some(x,porsche(x),wish(john,[x],q(have(john,y)))) for a de re reading. The predicate letter &amp;quot;wish&amp;quot; need not be identical to the one that translates the English verb &amp;quot;wish&amp;quot;--it might only be roughly synonymous. The predicate letter &amp;quot;have&amp;quot; probably is the same one that translates the verb &amp;quot;have&amp;quot;--or rather, one of many predicates that can translate this highly ambiguous verb. For the present purpose let us assume that the predicate &amp;quot;have&amp;quot; represents a sense of the verb &amp;quot;have&amp;quot; that is roughly synonymous with &amp;quot;possess,&amp;quot; as in &amp;quot;John has a Porsche.&amp;quot; Another sense of &amp;quot;have&amp;quot; is relational, as in &amp;quot;John has a son,&amp;quot; and &amp;quot;want&amp;quot; has a corresponding sense, as in &amp;quot;John wants a son.&amp;quot; The present paper will not analyze this relational sense.</Paragraph>
    <Paragraph position="24"> This grammar will express the meanings of intensional verbs in terms of propositional attitudes. This may not work for all intensional verbs. For example, it is not clear that &amp;quot;the Greeks worshipped Zeus&amp;quot; is equivalent to any statement about propositional attitudes. Montague (1974a) represented intensional verbs more directly, as relations between agents and the intensions of NP's. A similar analysis is possible in our framework, provided we extend the target language to include typed lambda calculus. Suppose the variable p ranges over sets of individuals. Then we could represent the de dicto reading of &amp;quot;John wants a Porsche&amp;quot; as (27) want(john,q(lambda(p,some(x,porsche(x),x ∈ p)))).</Paragraph>
    <Paragraph position="25"> Here the predicate &amp;quot;want&amp;quot; describes a relation between a person and an expression of thought language, but that expression is not a wff. Instead it is a closed term denoting a set of sets of individuals. Certainly this is a natural generalization of a sentential theory of attitudes. If agents can have attitudes toward sentences of thought language, why shouldn't they have attitudes toward other expressions of the same thought language?</Paragraph>
  </Section>
  <Section position="6" start_page="0" end_page="0" type="metho">
    <SectionTitle>
1.4 COMPOSITIONAL SEMANTICS AGAIN
</SectionTitle>
    <Paragraph position="0"> We now return to the problem of building logical forms with a compositional semantics. Consider the formula (28) some(x,prisoner(x),believe(john,[x],q(escaped(y)))).</Paragraph>
    <Paragraph position="1"> Following Cooper as before, we assume that the semantic features of the clause &amp;quot;a prisoner escaped&amp;quot; are a wff containing a free variable and an existential quantifier that binds the same variable. In formula (28) the existential quantifier does not bind the variable that appears in the wff &amp;quot;escaped(y)&amp;quot;--it binds another variable instead. Therefore we have the same problem that arose for Kaplan's representation--it is not clear how to build a representation for the belief sentence from the representations of its constituents.</Paragraph>
    <Paragraph position="2"> The choice of bound variables is arbitrary, and the choice of dummy variables is equally arbitrary. Thus, there is an obvious solution: let the de re arguments and the dummy variables be the same. Thus, the wide scope reading for &amp;quot;John believes a prisoner escaped&amp;quot; is not (28), but (29) some(x,prisoner(x),believe(john,[x],q(escaped(x)))).</Paragraph>
    <Paragraph position="3"> Here the variable x serves two purposes--it is a de re argument, and also a dummy variable. When it occurs as a de re argument, it is bound by the quantifier in the usual way. When it occurs as a dummy variable, it is definitely not bound by the quantifier. In fact the dummy variable is a mention of the variable x, not a use, because it occurs under a quotation mark.</Paragraph>
    <Paragraph position="4"> Formula (29) may be a little confusing, since the same variable appears twice with very different semantics. This formula has a major advantage over formula (28), however--it contains the wff &amp;quot;escaped(x)&amp;quot; and a quantifier that binds the free variable of that wff. Since these are precisely the semantic features of the clause &amp;quot;a prisoner escaped,&amp;quot; it is fairly easy to build the logical form (29) from the sentence &amp;quot;John believed a prisoner escaped.&amp;quot; We can describe this technique as a convention governing the logical forms that our grammar assigns to English phrases. In any wff of the form believe(x,[t1 ... tn],q(p)), the nth de re argument is equal to its own dummy variable.</Paragraph>
    <Paragraph position="5"> Then the nth de re argument tn is equal to the nth free variable of p. In other words, the list [t1 ... tn] is just a list of the free variables of p in order of occurrence. The same convention holds for all predicates that represent attitudes.</Paragraph>
    <Paragraph position="6"> Finally, note that the convention holds only for the logical forms that the grammar assigns to sentences. Once the grammar has built a logical form, inference procedures can freely violate the convention. For example, consider the logical form of the sentence &amp;quot;Every man believes that Mary loves him&amp;quot;: (30) all(x,man(x),believe(x,[x],q(love(mary,x)))).</Paragraph>
    <Paragraph position="7"> From this sentence and the premise man(bill) we can infer (31) believe(bill,[bill],q(love(mary,x))) by substituting for a universal variable as usual. The occurrence of the variable under the quotation mark is naturally unaffected, because it is not a free occurrence of x.</Paragraph>
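This inference step can be sketched in Python. The tuple/list encoding of wffs is our own illustrative assumption; the point it demonstrates is the one just made: substituting for the universal variable affects the de re argument list but never the occurrence of x under the quotation operator q:

```python
# Instantiating x with "bill" in the body of formula (30) yields formula (31):
# the de re argument list changes, but the quoted wff love(mary,x) does not.

def is_variable(s):
    return isinstance(s, str) and s[0] in 'uvwxyz'   # simplifying assumption

def substitute(e, env):
    if is_variable(e):
        return env.get(e, e)
    if isinstance(e, list):                          # de re argument lists
        return [substitute(a, env) for a in e]
    if isinstance(e, tuple) and e[0] != 'q':         # q(...) is opaque to substitution
        return (e[0],) + tuple(substitute(a, env) for a in e[1:])
    return e

# body of all(x, man(x), ...) from formula (30)
body = ('believe', 'x', ['x'], ('q', ('love', 'mary', 'x')))
print(substitute(body, {'x': 'bill'}))
# ('believe', 'bill', ['bill'], ('q', ('love', 'mary', 'x')))  -- formula (31)
```

The same opacity explains why, in forming John's belief from (19) above, nothing is substituted for the x under the inner quotation mark.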
  </Section>
  <Section position="7" start_page="0" end_page="0" type="metho">
    <SectionTitle>
1.5 SELF-REFERENCE AND PARADOX
</SectionTitle>
    <Paragraph position="0"> Other writers (cited above) have already expounded and defended sentential theories of attitudes. This paper takes a sentential theory as a starting point, and aims to solve certain problems about the semantics of attitude reports in such a theory. However, one problem about sentential theories deserves discussion. The results of Montague (1974b) have been widely interpreted as proof that sentential theories of attitudes are inconsistent and therefore useless. Montague did indeed show that certain sentential theories of knowledge produce self-reference paradoxes, and are therefore inconsistent. However, he did not show that these were the only possible sentential theories. Recently des Rivières and Levesque (1986) have constructed sentential theories without self-reference and proved them consistent. Thus they showed that while Montague's theorem was true, its significance had been misunderstood.</Paragraph>
    <Paragraph position="1"> Perlis (1988) has shown that if we introduce self-reference into a modal theory, it too can become inconsistent. In short, there is no special connection between sentential theories and paradoxes of self-reference. A sentential theory may or may not include self-reference; a modal theory may or may not include self-reference; and in either case, self-reference can lead to paradoxes.</Paragraph>
    <Paragraph position="2"> Kripke (1975) has shown that even the most commonplace utterances can create self-reference if they occur in unusual circumstances. Therefore the problem is not to avoid self-reference, but to understand it. The problem for advocates of sentential theories is to find a sentential analysis of the self-reference paradoxes that is, if not wholly satisfactory, at least as good as nonsentential analyses. For the purposes of AI, a successful analysis must avoid paradoxical conclusions, without sacrificing axioms or rules of inference that have proved useful in AI programs.</Paragraph>
    <Paragraph position="3"> One idea is that ordinary human intuitions about self-reference are inconsistent. To most people, it appears that the sentence &amp;quot;This statement is false&amp;quot; must be both true and false, yet it cannot be both. The only error in the formal analyses is that having derived a contradiction, they allow us to derive any conclusion whatever. This happens because 218 Computational Linguistics Volume 16, Number 4, December 1990 Andrew R. Haas Sentential Semantics for Propositional Attitudes standard logic allows no inconsistent theories except trivial ones, containing every sentence of the language. Therefore we need a new kind of logic to describe the inconsistent intuitions of the ordinary speaker. Priest (1989) attempted this--he constructed an inconsistent but nontrivial theory of truth using a paraconsistent logic. Priest's theory includes the T-scheme, written in our notation as (32) P ↔ true(q(P)).</Paragraph>
    <Paragraph position="4"> P is a meta-variable ranging over sentences of the language. Tarski (1936) proposed this scheme as capturing an essential intuition about truth. Unfortunately, the rule of modus ponens is invalid in Priest's system, which means that most of the standard AI reasoning methods are invalid.</Paragraph>
    <Paragraph position="5"> Priest considers various remedies for this problem.</Paragraph>
    <Paragraph position="6"> Another approach is to look for a consistent theory of self-reference. Such a theory will probably disagree with speakers' intuitions for paradoxical examples like &amp;quot;This statement is false.&amp;quot; Yet these examples are rare in practice, so a natural language program using a consistent theory of self-reference might agree with speakers' intuitions in the vast majority of cases. Kripke (1975) proposed such a theory, based on a new definition of truth in a model--an alternative to Tarski's definition. Kripke's definition allows truth-value gaps: some sentences are neither true nor false.</Paragraph>
    <Paragraph position="7"> Suppose P is a sentence; then the sentence true(q(P)) is true iff P is true, and false iff P is false. Therefore if P is neither true nor false, true(q(P)) also has no truth value. In other respects, Kripke's definition of truth resembles Tarski's--it assigns the same truth values to sentences that do not contain the predicate &amp;quot;true,&amp;quot; and it never assigns two different truth values to one sentence. Suppose that a model of this kind contains a sentence that says &amp;quot;I am not true.&amp;quot; Formally, suppose the constant c denotes the sentence ¬true(c). What truth value can such a sentence have under Kripke's definition? Just as in standard logic, ¬true(c) is true iff true(c) is false. True(c) in turn is false iff c is false. Since c is the sentence ¬true(c), we have shown that c is true iff c is false. Since no sentence has two truth values, it follows that c has no truth value.</Paragraph>
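    <Paragraph> The gap for the Liar can be illustrated with a toy Python computation (our own construction, drastically simplified from Kripke's actual definition): we evaluate a tiny language with a strong-Kleene negation and a partial extension for the predicate &amp;quot;true,&amp;quot; starting from the empty extension and iterating toward a fixed point.
```python
# Sentences: the atom 'p' (true), ('not', S), and ('true', name), where each
# name maps to the sentence it denotes. The constant 'c' names the Liar.
sentences = {'c': ('not', ('true', 'c')),    # c is the sentence not(true(c))
             'd': ('true', 'e'),             # d says that e is true
             'e': 'p'}                       # e is the true atom p

def eval_s(s, ext):
    """Strong-Kleene evaluation; ext maps names to True, False, or None (gap)."""
    if s == 'p':
        return True
    op, arg = s
    if op == 'not':
        v = eval_s(arg, ext)
        return None if v is None else not v
    return ext[arg]                          # op == 'true': consult the extension

ext = {name: None for name in sentences}     # start from the empty extension
for _ in range(5):                           # iterate toward the least fixed point
    ext = {name: eval_s(sentences[name], ext) for name in sentences}

print(ext['c'], ext['d'])
# None True
```
The harmless self-involving sentence d settles to True, but the Liar c keeps the value None at the fixed point: a truth-value gap, as in Kripke's account.</Paragraph>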
    <Paragraph position="8"> Once again, problems arise because the system is too weak. If P is a sentence with no truth value, then the sentence P ∨ ¬P has no truth value, even though it is a tautology of first-order logic. One remedy for this appears in the system of Perlis (1985). Perlis considers a first-order model M containing a predicate &amp;quot;true,&amp;quot; whose extension is the set of sentences that are true in M by Kripke's definition. He accepts as theorems all sentences that are Tarski-true in every model of this kind. Thus Perlis's system uses two notions of truth: P is a theorem only if P is Tarski-true, but true(q(P)) is a theorem only if P is Kripke-true.</Paragraph>
    <Paragraph position="9"> Suppose we have P ↔ ¬true(q(P)); then Perlis's system allows us to prove both P and ¬true(q(P)). This certainly violates the intuitions of ordinary speakers, but such violations seem to be the inevitable price of a consistent theory of self-reference. Perlis devised a proof system for such models, using standard first-order proof and an axiom schema GK for the predicate &amp;quot;true.&amp;quot; Perlis proved that if L is any consistent set of first-order sentences that does not mention the predicate &amp;quot;true,&amp;quot; then the union of L and GK has a model M in which the extension of &amp;quot;true&amp;quot; is the set of sentences that are Kripke-true in M. Perlis's system has one important advantage over Kripke's: since the formalism is just a standard first-order theory, we can use all the familiar first-order inference rules. In this respect, Perlis's system is better suited to the needs of AI than either Kripke's or Priest's. However, it still excludes some inferences that are standard in everyday reasoning. For example, we have true(q(P)) → P for every P, but P → true(q(P)) is not a theorem for certain sentences P--in particular, sentences that are self-referential and paradoxical.</Paragraph>
    <Paragraph position="10"> An adequate account of self-reference must deal not only with the Liar, but also with paradoxes arising from propositional attitudes--for example, the Knower Paradox (Montague and Kaplan 1974), and Thomason's paradox about belief (Thomason 1980). Perlis (1988) has considered the treatment of attitudes within his system, and Asher and Kamp (1986) have treated both paradoxes using ideas akin to Kripke's (their treatment is not sentential, but they claim that it could be extended to a sentential treatment).</Paragraph>
    <Paragraph position="11"> Let us briefly consider the treatment of the Knower paradox within Perlis's system. To simplify the treatment, we will assume that knowledge is true belief. If we are working in Perlis's system, this naturally means that knowledge is Kripke-true belief. We write &amp;quot;the agent knows that P&amp;quot; as true(q(P)) ∧ believe(q(P)). The paradox arises from a sentence R that says &amp;quot;The agent knows ¬R.&amp;quot; Formally,  (33) R ↔ (true(q(¬R)) ∧ believe(q(¬R))).</Paragraph>
    <Paragraph position="12"> Since true(q(¬R)) → ¬R is a theorem of Perlis's system, (33) implies ¬R. Now suppose that the agent believes (33); then with modest powers of inference the agent can conclude ¬R, so we have believe(q(¬R)). Combining this with (33) gives (34) R ↔ true(q(¬R)),  which at once implies that ¬R is not Kripke-true. It follows that although ¬R is a theorem of the system, and the agent believes it, the agent does not know it--because it is not Kripke-true, and only a sentence that is Kripke-true can be known. The Knower paradox arises if we insist that the agent does know ¬R. This example brings out a counter-intuitive property of Perlis's system: a sentence may follow directly from Perlis's axioms, yet he refuses to call it true, or to allow that any agent can know it. Strange though this appears, it is a natural consequence of the use of two definitions of truth in a single theory.</Paragraph>
    <Paragraph position="13"> Belief is different from knowledge because it need not be true. This makes it surprising that Thomason's paradox involves only the notion of belief, not knowledge or truth. In fact the paradox arises exactly because Thomason's agent thinks that all his beliefs are true. This is stated as (35) a(&lt; a(&lt; φ &gt;) → φ &gt;) (Thomason 1980). The notation is as follows: φ is a variable ranging over all formulas of the language, &lt; φ &gt; is a constant denoting (the Gödel number of) φ, and a(&lt; φ &gt;) means that the agent believes φ. This axiom says that for every formula φ, the agent believes (36) a(&lt; φ &gt;) → φ.</Paragraph>
    <Paragraph position="14"> This sentence says that if the agent believes φ, φ must be true. Since φ ranges over all sentences of the language, the agent is claiming that his beliefs are infallible. This leads the agent into a paradox similar to the Knower, and his beliefs are therefore inconsistent. Asher and Kamp showed that one can avoid this conclusion by denying (35) in certain cases where φ is a self-referential sentence. Another alternative is to dismiss (35) completely. It is doubtful that human beings consider their own beliefs infallible, and Perlis (1986) has argued that a rational agent may well believe that some of his or her beliefs are false.</Paragraph>
    <Paragraph position="15"> We have looked at three sentential analyses of the self-reference paradoxes, and each one sacrifices some principle that seems useful for reasoning in an AI program. The alternative is an analysis in which propositions are not sentences. Thomason (1986) considers such analyses and finds that they have no clear advantage over the sentential approaches. The unpleasant truth is that paradoxes of self-reference create equally serious problems for all known theories of attitudes. It follows that they provide no evidence against the sentential theories.</Paragraph>
  </Section>
  <Section position="8" start_page="0" end_page="0" type="metho">
    <SectionTitle>
2 THE BASIC GRAMMAR
2.1 NOTATION
</SectionTitle>
    <Paragraph position="0"> The rules of our grammar are definite clauses, and we use the notation of definite clause grammar (Pereira and Warren 1980). This notation is now standard among computer scientists who study natural language and is explained in a textbook by Pereira and Shieber (1987). Its advantages are that it is well defined and easy to learn, because it is a notational variant of standard first-order logic. Also, it is often straightforward to parse with grammars written in this notation (although there can be no general parsing method for the notation, since it has Turing machine power). DCG notation lacks some useful devices found in linguistic formalisms like GPSG--there are no default feature values or general feature agreement principles (Gazdar et al. 1985). On the other hand, the declarative semantics of the DCG notation is quite clear--unlike the semantics of GPSG (Fisher 1989).</Paragraph>
    <Paragraph position="1"> The grammar is a set of meta-language sentences describing a correspondence between English words and sentences of the target language. Therefore, we must define a notation for talking about the target language in the meta-language. Our choice is a notation similar to that of Haas (1986). If f is a symbol of the target language, 'f is a symbol of the meta-language. Suppose f is a constant or a variable, taking no arguments. Then 'f denotes f. Thus 'john is a meta-language constant that denotes a target-language constant, while 'x is a meta-language constant that denotes a target-language variable. Suppose f is a functor of the target language and takes n arguments. Then 'f is a meta-language function letter, and it denotes the function that maps n expressions of the target language e1 ... en to the target-language expression f(e1 ... en). Thus 'not is a meta-language function letter, and it denotes the function that maps a target language wff to its negation. In the same way, 'or is a meta-language function letter, and it denotes the function that maps two target-language wffs to their disjunction.</Paragraph>
    <Paragraph position="2"> Given these denotations, it is easy to see that if p(a,b) is an atomic sentence in the target language, then 'p('a,'b) is a term in the meta-language, and it denotes the wff p(a,b) in the target language. Suppose that Wffl and Wff2 are meta-language variables ranging over wffs of the target language. Then 'or(Wffl,Wff2) is a meta-language term, and since the variables Wffl and Wff2 range over all wffs of the target language, the value of 'or(Wffl,Wff2) ranges over all disjunctions in the target language. These ideas about the relation between meta-language and target language are not new or difficult, but it is worth the time to explain them, because some influential papers about semantics in unification grammar have confused the target language and meta-language (see Section 2.4). For the sake of legibility, we omit the quotation marks--so when or(Wffl,Wff2) appears in a rule of the grammar, it is an abbreviation for 'or(Wffl,Wff2).</Paragraph>
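    <Paragraph> The quotation convention can be mimicked in Python (a sketch under our own assumptions: target-language expressions are modelled as nested tuples, and each meta-language functor 'f becomes a function that builds such a tuple):
```python
# q_or plays the role of 'or: it maps two target-language wffs (tuples)
# to their disjunction; q_p plays the role of 'p for a binary predicate p.
def q_or(wff1, wff2):
    return ('or', wff1, wff2)

def q_p(t1, t2):
    return ('p', t1, t2)

# 'p('a,'b) denotes the target-language wff p(a,b); applying q_or to two
# such terms denotes their disjunction or(p(a,b),p(b,a)).
print(q_or(q_p('a', 'b'), q_p('b', 'a')))
# ('or', ('p', 'a', 'b'), ('p', 'b', 'a'))
```
The point is that q_or is a meta-language object whose values merely denote target-language wffs; nothing in the target language is ever executed.</Paragraph>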
    <Section position="1" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
2.2 REPRESENTING QUANTIFIERS
</SectionTitle>
      <Paragraph position="0"> Noun phrases in the grammar contribute to logical form in two ways, and therefore they have two semantic features.</Paragraph>
    <Paragraph position="1"> The first feature is a variable, which becomes a logical argument of a verb. This produces a wff, in which the variable appears free. The second feature is a quantifier that binds the variable. By applying the quantifier to the wff, we eliminate free occurrences of that particular variable. After applying all the quantifiers, we have a wff without free variables--a sentence. This is the logical form of an utterance.</Paragraph>
    <Paragraph position="2"> In Montague's system (Montague 1974a), the logical form of an NP is an expression denoting a quantifier. This kind of analysis is impossible in our system, because the target language is first-order. It contains no expressions that denote quantifiers. Therefore the representation of an NP cannot be an expression of the target language. Instead of using Montague's approach, we associate with every quantifier a function that maps wffs to wffs. For the NP &amp;quot;every man,&amp;quot; we have a function that maps any wff Wffl to the wff (37) all(V,man(V),Wffl) where V is a variable of the target language. Notice that if we took Montague's representation for the quantified NP, applied it to the lambda expression lambda(V,Wffl), and then simplified, we would get an alphabetic variant of (37). We will call this function the application function for the quantified NP.</Paragraph>
    <Paragraph position="3"> To represent application functions in a unification grammar, we use a device from Pereira and Warren (1980). We assign to each NP an infinite set of readings--one for each ordered pair in the extension of the application function. The first and second elements of the ordered pair are semantic features of the NP, and the bound variable of the quantifier is a third feature. For the NP &amp;quot;every man&amp;quot; we have (38) np(V,Wffl,all(V,man(V),Wffl)) --> \[every man\].</Paragraph>
    <Paragraph position="4"> This says that for any variable V and wff Wffl, the string &amp;quot;every man&amp;quot; is an NP, and if it binds the variable V, then the pair \[Wffl,all(V,man(V),Wffl)\] is in the extension of its application function. It follows that the application function maps Wffl to the wff all(V,man(V),Wffl). When other rules fix the values of the variables V and Wffl, the result of the mapping will be fixed as well. A more complex example is (39) np(V,Wffl,and(some(V,man(V),Wffl),some(V,woman(V),Wffl))) --> \[a man and a woman\].</Paragraph>
      <Paragraph position="5"> Here the application function's output includes two copies of the input.</Paragraph>
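    <Paragraph> Application functions can be sketched directly as Python functions from wffs to wffs (our own modelling of rules (38) and (39); representing wffs as nested tuples and variables as strings is an assumption of the sketch):
```python
def every_man(v, wff1):
    """Application function of "every man": maps Wff1 to all(V,man(V),Wff1)."""
    return ('all', v, ('man', v), wff1)

def a_man_and_a_woman(v, wff1):
    """As in rule (39): the output contains two copies of the input Wff1."""
    return ('and', ('some', v, ('man', v), wff1),
                   ('some', v, ('woman', v), wff1))

print(every_man('x', ('bald', 'x')))
# ('all', 'x', ('man', 'x'), ('bald', 'x'))
```
In the grammar itself the input and output wffs are a pair of features on the NP, and unification plays the role of function application.</Paragraph>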
      <Paragraph position="6"> It is important to consider the declarative semantics of these rules. Each one states that a certain NP has an infinite set of possible readings, because there are infinitely many wffs in the target language. Thus we might say that the NP in isolation is infinitely ambiguous. This &amp;quot;ambiguity&amp;quot; is purely formal, however; in any actual utterance the value of the variable Wffl will be supplied by other rules, so that in the context of an utterance the ambiguity is resolved. In the same way, the VP &amp;quot;liked Mary&amp;quot; is ambiguous in person and number--but in the context of the utterance &amp;quot;John liked Mary,&amp;quot; its person and number are unambiguous.</Paragraph>
      <Paragraph position="7"> In one respect the declarative semantics of these rules is not quite right. The variable V is supposed to range over variables of the target language, and the variable Wffl is supposed to range over wffs of the target language. Yet we have not defined a type system to express these range restrictions. However, such a type system could be added, for example, using the methods of Walther (1987). In fact, the type hierarchy would be a tree, which allows us to use a simplified version of Walther's methods. For brevity's sake we will not develop a type system in this paper. Except for this omission, the declarative semantics of the above rules is quite clear.</Paragraph>
    <Paragraph position="8"> Typed variables have mnemonic value even if we do not use a typed logic. Therefore we adopt the following conventions. The meta-language variables V, V0, V1 ... range over target language variables. Wff, Wffl, Wff2 ... range over target language wffs. Q, Q1, Q2 ... range over quantifiers. QL, QL1, QL2 ... range over lists of quantifiers. When a wff forms the range restriction of a quantifier, we will sometimes use the variables Range, Range1 ... for that wff.</Paragraph>
    </Section>
    <Section position="2" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
2.3 SCOPING AND QUANTIFIER STORAGE
</SectionTitle>
    <Paragraph position="0"> Given a means of describing quantifiers, we must consider the order of application. Cooper (1983) has shown how to allow for different orders of application by adding to NPs, VPs, and sentences an extra semantic feature called the quantifier store. The store is a list of quantifiers that bind the free variables in the logical form of the phrase. The grammar removes quantifiers from the store and applies them nondeterministically to produce different logical forms, corresponding to different orders of application. If a sentence has a logical form p and a quantifier store l, then every free variable in p must be bound by a quantifier in l--otherwise the final logical form would contain free variables.</Paragraph>
      <Paragraph position="1"> Our treatment of quantifier storage is different from Cooper's in two ways. First, Cooper's grammar maps phrases to model-theoretic denotations, not logical forms.</Paragraph>
      <Paragraph position="2"> This sounds like a bigger difference than it is. The basic technique is to put quantifiers in a store, and use some kind of marker to link the stored quantifiers to the argument positions they must bind. Whether we work with the logical forms or with their denotations, much the same problems arise in applying this technique.</Paragraph>
    <Paragraph position="3"> A second difference is that in Cooper's grammar, each NP has two readings--one in which the NP's quantifier is in the store, and one in which it is not. The first reading leads to wide-scope readings of the sentence, while the second leads to narrow-scope readings. In our grammar only the first kind of reading for an NP exists--that is, the quantifier of an NP is always in the store. We generate both wide- and narrow-scope readings by applying the quantifiers from the store in different orders.</Paragraph>
      <Paragraph position="4"> We represent a quantifier as a pair p(Wffl,Wff2), where the application function of the quantifier maps Wffl to Wff2. We represent a quantifier store as a list of such pairs. The predicate apply_quants(QLI,Wffl,QL2,Wff2) means that QL1 is a list of quantifiers, Wffl is a wff, Wff2 is the result of applying some of the quantifiers in QL1 to Wffl, and QL2 contains the remaining quantifiers. The first axiom for the predicate says that if we apply none of the quantifiers, then QL2 = QL1 and Wff2 = Wffl: (40) apply_quants(QL,Wff, QL,Wff).</Paragraph>
      <Paragraph position="5"> The second axiom uses the predicate choose(L1,X,L2), which means that X is a member of list L1, and L2 is formed by deleting one occurrence of X from L1.</Paragraph>
    <Paragraph position="7"> Consider the first literal on the right side of this rule. It says that p(Wffl,Wff2) is a member of QL1, and deleting p(Wffl,Wff2) from QL1 leaves QL2. By definition, if the pair p(Wffl,Wff2) is in the extension of the application function for a certain quantifier, the application function maps Wffl to Wff2. The second literal says that applying a subset of the remaining quantifiers QL2 to Wff2 gives a new wff Wff3 and a list QL3 of remaining quantifiers. Then applying a subset of QL1 to Wffl gives Wff3 with remaining quantifiers QL3.</Paragraph>
    <Paragraph position="8"> Suppose that QL1 is (42) \[p(Wffl,all(V1,man(V1),Wffl)),p(Wff2,some(V2,woman(V2),Wff2))\].</Paragraph>
      <Paragraph position="9"> Then solutions for the goal</Paragraph>
      <Paragraph position="11"> These solutions will be used to build wide-scope readings for propositional attitude reports.</Paragraph>
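    <Paragraph> The behaviour of apply_quants and choose can be approximated in Python (a sketch, not the paper's Prolog: quantifiers are modelled as wff-to-wff functions, and Prolog's nondeterminism as a generator):
```python
def choose(lst):
    """Yield (x, rest) for each way of deleting one occurrence from lst."""
    for i, x in enumerate(lst):
        yield x, lst[:i] + lst[i + 1:]

def apply_quants(quants, wff):
    """Yield (wff2, remaining): apply some of the quantifiers, in any order."""
    yield wff, quants                     # axiom (40): apply none of them
    for q, rest in choose(quants):        # axiom (41): pick one, apply, recurse
        yield from apply_quants(rest, q(wff))

every_man = lambda w: ('all', 'x', ('man', 'x'), w)
some_woman = lambda w: ('some', 'y', ('woman', 'y'), w)

# Solutions with an empty remaining store are the fully scoped readings.
full = [w for w, rest in apply_quants([every_man, some_woman],
                                      ('loves', 'x', 'y')) if not rest]
for w in full:
    print(w)
```
Exactly two fully applied solutions come out, one for each relative scope of the two quantifiers; partially applied solutions (a nonempty remaining store) are also produced, just as in the Prolog definition.</Paragraph>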
    </Section>
  </Section>
  <Section position="9" start_page="0" end_page="0" type="metho">
    <SectionTitle>
2.4 THE PROBLEM OF ASSIGNING DISTINCT
VARIABLES TO QUANTIFIERS
</SectionTitle>
    <Paragraph position="0"> The rules we have given so far do not tell us which target language variables the quantifiers bind. These rules contain meta-language variables that range over target language variables, rather than meta-language constants that denote particular variables of the target language. In choosing the bound variables it is sometimes crucial to assign distinct variables to different quantifiers. The logical form of &amp;quot;Some man loves every woman&amp;quot; can be (47) some(x,man(x),all(y,woman(y),loves(x,y))) but it cannot be (48) some(y,man(y),all(y,woman(y),loves(y,y))).</Paragraph>
    <Paragraph position="1">  This reading is wrong because the inner quantifier captures the variables that are supposed to be bound by the outer quantifier. To be more precise: the outer quantifier binds the variable y, but not all occurrences of y in the scope of the outer quantifier are bound by the outer quantifier. Some of them are bound instead by the inner quantifier. In this situation, we say that the inner quantifier shadows the outer one. We require that no quantifier ever shadows another in any logical form built by the grammar. This requirement will not prevent us from finding logical forms for English sentences, because any first-order sentence is logically equivalent to a sentence without shadowing.</Paragraph>
    <Paragraph position="2"> The same problem arises in cases of quantification into the scope of attitudes. Consider the sentence &amp;quot;John thinks some man loves every woman,&amp;quot; and suppose that &amp;quot;some man&amp;quot; has wide scope and &amp;quot;every woman&amp;quot; has narrow scope. The logical form can be</Paragraph>
    <Paragraph position="4"> thinks(john,\[y\],q(all(y,woman(y),loves(y,y))))).</Paragraph>
    <Paragraph position="5"> In this formula, the inner quantifier captures a variable that is supposed to be a dummy variable. In this case also, we say that the inner quantifier shadows the outer one.</Paragraph>
    <Paragraph position="6"> Pereira and Warren (1980) prevented shadowing by using Prolog variables to represent variables of the object language. Thus, their translation for &amp;quot;Some man loves every woman&amp;quot; is</Paragraph>
    <Paragraph position="8"> where X and Y are Prolog variables. This works, but it violates the declarative semantics of Prolog. According to that semantics every variable in an answer is universally quantified. Thus if Prolog returns (51) as a description of the logical form of a sentence, this means that for all values of X and Y the expression (51) denotes a possible logical form for that sentence. This means that if v is a variable of the object language, then</Paragraph>
    <Paragraph position="10"> is a possible translation, which is clearly false. Thus, according to the declarative interpretation, Pereira and Warren's grammar does not express the requirement that no quantifier can shadow another quantifier. Pereira and Shieber (1987) pointed out this problem and said that while formally incorrect the technique was &amp;quot;unlikely to cause problems.&amp;quot; Yet on p. 101 they describe the structures built by their grammar as &amp;quot;unintuitive&amp;quot; and even &amp;quot;bizarre.&amp;quot; This confirms the conventional wisdom: violating the declarative semantics makes logic programs hard to understand.</Paragraph>
    <Paragraph position="11"> Therefore, let us look for a solution that is formally correct.</Paragraph>
    <Paragraph position="12"> Warren (1983) suggested one possible solution. We can use a global counter to keep track of all the variables used in the logical form of a sentence, and assign a new variable to every quantifier. Then no two quantifiers would bind the same variable, and certainly no quantifier would shadow another. This solution would make it easier to implement our treatment of de re attitude reports, but it would also create serious problems in the treatment of NP conjunction and disjunction (see Section 2.5). Therefore we consider another possibility.</Paragraph>
    <Paragraph position="13"> Let us rewrite the definition of &amp;quot;apply_quants,&amp;quot; adding the requirement that each quantifier binds a variable that is not bound in the scope of that quantifier. For each integer N, let v(N) be a variable of the target language. If N is not equal to M, then v(M) and v(N) are distinct variables. We represent the integers using the constant 0 and the function &amp;quot;s&amp;quot; for &amp;quot;successor&amp;quot; in the usual way. The predicate highest_bound_var(Wffl,N) means that N is the largest number such that v(N) is bound in Wffl. To define this predicate, we need one axiom for each quantifier, connective, and predicate letter of the target language. These axioms are obvious and are therefore omitted.</Paragraph>
    <Paragraph position="14"> We also need the predicate binds(Wffl,V), which means that the outermost quantifier of Wffl binds the variable V.</Paragraph>
    <Paragraph position="15"> To define this predicate we need an axiom for each quantifier and connective. Typical axioms are: (53) binds(all(V,Wffl,Wff2),V).</Paragraph>
    <Paragraph position="16"> (54) binds(and(Wffl,Wff2),V) :- binds(Wffl,V).</Paragraph>
    <Paragraph position="17">  The second axiom applies to complex quantifiers arising from conjoined NPs. In this case there are two branches, but each branch binds the same variable (the rules for NP conjunction ensure that this is so). Therefore, we recursively check the first branch to find the bound variable. Given these predicates, we can rewrite the second axiom for &amp;quot;apply_quants&amp;quot;:</Paragraph>
    <Paragraph position="19"> Wffl is the scope of the quantifier, and v(N) is the highest bound variable of Wffl. The new quantifier binds the variable v(s(N)), which is different from every bound variable in the scope Wffl. Therefore, the new quantifier is not shadowed by any lower quantifier.</Paragraph>
    <Paragraph position="20"> As an example, suppose that QL1 is (56) \[p(Wff2,all(V2,woman(V2),Wff2))\].</Paragraph>
    <Paragraph position="21"> Then solutions for the goal</Paragraph>
    <Paragraph position="23"> (We have reverted to standard notation for integers.) Suppose that QL1 is</Paragraph>
    <Paragraph position="25"> Then solutions for the goal</Paragraph>
    <Paragraph position="27"> The inner quantifier binds the variable v(1), and the outer quantifier binds the variable v(2). This notation for variables is very hard to read, so in the rest of the paper we will use the constants x, y, and z to represent variables of the target language.</Paragraph>
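    <Paragraph> The fresh-variable bookkeeping can be sketched in Python (our own reconstruction: target-language variables v(N) are modelled as tuples ('v', N), and placeholder strings stand in for the not-yet-unified meta-variables of the Prolog version):
```python
QUANTS = {'all', 'some'}

def subst(wff, name, value):
    """Replace every occurrence of the placeholder `name` by `value`."""
    if wff == name:
        return value
    if isinstance(wff, tuple):
        return tuple(subst(a, name, value) for a in wff)
    return wff

def highest_bound_var(wff):
    """Largest N such that v(N) is bound in wff; 0 if there is none."""
    if not isinstance(wff, tuple):
        return 0
    functor, *args = wff
    if functor == 'v':                    # a variable occurrence binds nothing
        return 0
    n = args[0][1] if functor in QUANTS else 0
    return max([n] + [highest_bound_var(a) for a in args])

def apply_fresh(det, restriction, placeholder, wff):
    """Apply a quantifier binding a variable fresh for its scope."""
    v = ('v', highest_bound_var(wff) + 1)
    return (det, v, (restriction, v), subst(wff, placeholder, v))

wff = ('loves', 'X', 'Y')                    # X, Y mark the argument positions
wff = apply_fresh('all', 'woman', 'Y', wff)  # inner quantifier binds v(1)
wff = apply_fresh('some', 'man', 'X', wff)   # outer quantifier binds v(2)
print(wff)
```
The inner quantifier binds v(1) and the outer binds v(2), which is fresh for its scope, so no quantifier shadows another in the resulting wide-scope-some reading of &amp;quot;Some man loves every woman.&amp;quot;</Paragraph>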
  </Section>
  <Section position="10" start_page="0" end_page="0" type="metho">
    <SectionTitle>
2.5 RULES FOR NOUN PHRASES
</SectionTitle>
    <Paragraph position="0"> The following grammar is very similar to the work of Pereira and Shieber (1987, Sections 4.1 and 4.2). There are two major differences, however. First, the treatment of quantifiers and scoping uses a version of Cooper's quantifier storage, instead of the &amp;quot;quantifier tree&amp;quot; of Pereira and Shieber. Second, Pereira and Shieber started with a semantics using lambda calculus, which they &amp;quot;encoded&amp;quot; in Prolog. In the present grammar, unification semantics stands on its own--it is not a way of encoding some other formalism. Formula numbering uses the following conventions. The rules of the grammar are numbered (R1), (R2), etc. Entries in the lexicon are numbered (L1), (L2), etc. Formulas built in the course of a derivation get numbers without a prefix. Groups of related rules are marked by lower case letters: (L1a), (L1b), and so forth.</Paragraph>
    <Paragraph position="1"> Every noun phrase has a quantifier store as one of its semantic features. If the NP is a gap, the store is empty; if the NP is not a gap, the first element of the store is the quantifier generated by the NP (in the present grammar, the quantifier store of an NP has at most one quantifier).</Paragraph>
    <Paragraph position="2"> We represent the quantifier store as a difference list, using the infix operator &amp;quot;-&amp;quot;. Thus if L2 is a tail of L1, L1-L2 is the list difference of L1 and L2: the list formed by removing L2 from the end of L1. Therefore a noun phrase has the form np(V,QL1-QL2,Fx-Fy,VL). V is the bound variable of the NP. QL1-QL2 is the quantifier store of the NP. We describe wh-movement using the standard gap-threading technique (Pereira and Shieber 1987), and Fx-Fy is the filler list. Finally, VL is a list of target-language variables representing NPs that are available for reference by a pronoun, which we will call the pronoun reference list.</Paragraph>
    <Paragraph position="3"> Consider an NP consisting of a determiner and a head noun: &amp;quot;every man,&amp;quot; &amp;quot;no woman,&amp;quot; and so forth. The head noun supplies the range restriction of the NP's quantifier, and the determiner builds the quantifier given the range restriction. The bound variable of the NP is a feature of both the determiner and the head noun. Then the following rule generates NPs consisting of a determiner and a head  Thus &amp;quot;every man&amp;quot; is an NP that binds the variable V and maps Wffl to all(V,man(V),Wffl).</Paragraph>
    <Paragraph position="4"> Following Moore (1988), we interpret the proper name &amp;quot;John&amp;quot; as equivalent to the definite description &amp;quot;the one</Paragraph>
    <Paragraph position="6"> These axioms use the constant &amp;quot;john&amp;quot; to denote both a terminal symbol of the grammar and a constant of the target language--a convenient abuse of notation. Using (L1a) we get (64) np(V,\[p(Wffl,unique(V,name(V,john),Wffl))|QL\]-QL,Fx-Fx,VL) --> \[john\].</Paragraph>
    <Paragraph position="7"> That is, &amp;quot;john&amp;quot; is an NP that binds the variable V and maps Wffl to the wff unique(V,name(V,john),Wffl).</Paragraph>
    <Paragraph position="8"> Pronouns use the &amp;quot;let&amp;quot; quantifier. We have</Paragraph>
    <Paragraph position="10"> ---- \[he\], {member(V,VL)}.</Paragraph>
    <Paragraph position="11"> If V is a variable chosen from the pronoun reference list VL, then &amp;quot;he&amp;quot; is an NP that binds the variable V2 and maps Wffl to let(V2,V,Wffl). Thus, the pronoun refers back to a noun phrase whose bound variable is V. Later, we will see the rules that put variables into the list VL. As an example, we have</Paragraph>
    <Paragraph position="13"> The &amp;quot;let&amp;quot; quantifier in pronouns looks redundant, but it is useful because it makes the semantics of NPs uniform--every NP (except gaps) has a quantifier. This is helpful in describing conjoined NPs. Suppose that NP1 binds variable V and maps Wffl to Wff2. Suppose NP2 also binds variable V and maps the same Wffl to Wff3. Then the conjunction of NP1 and NP2 binds V and maps Wffl to  where V1 is chosen from the pronoun reference list. Thus the conjunction rule works for pronouns and proper nouns exactly as it does for NPs with determiners. There is a similar rule for disjunction of NPs.</Paragraph>
    <Paragraph position="14"> In conjoining two NPs we combine their quantifiers, which are the first elements of their quantifier stores. We must also collect the remaining elements of both quantifier stores. The above rule achieves this result by concatenating difference lists in the usual way: if QL1-QL2 is the tail of the first NP's quantifier list, and QL2-QL3 is the tail of the second NP's quantifier list, then QL1-QL3 is the concatenation of the tails. In the present grammar both tails are empty, because the quantifier store of an NP contains at most one quantifier, but in a more general grammar the tails might contain quantifiers--for example, quantifiers from prepositional phrases modifying the NP. Thus the Montague-style semantics for NP conjunction and disjunction requires an extension of standard Cooper storage. When the quantifier store of an NP contains several quantifiers, we must be able to identify the one that represents the NP itself (as opposed to quantifiers that arise from attached PPs, for example). We must then be able to remove this quantifier from the store, build a new quantifier, and put the new quantifier back into the store.</Paragraph>
    <Paragraph position="15"> Rule (R6) requires that the two NPs being conjoined should have quantifiers that bind the same variable. Suppose we had chosen the bound variables in the logical forms by using a global counter to ensure that no two quantifiers ever bind the same variable (as suggested in Section 2.4).</Paragraph>
    <Paragraph position="16"> Then (R6) could never apply. Thus our treatment of NP conjunction forces us to choose the bound variables after the quantifiers from conjoined NPs have been combined into a single quantifier, as described in Section 2.4. This choice in turn creates difficulties in implementing our treatment of de re attitude reports, as we will see in Section 3.1.</Paragraph>
    <Paragraph position="17"> In this grammar, a conjunction of quantified NPs produces a logical form in which the two quantifiers are in separate wffs, and these wffs are joined by the connective and. Thus, neither quantifier is in the scope of the other.</Paragraph>
    <Paragraph position="18"> This gives the desired reading for a sentence such as &amp;quot;John has no house and no car&amp;quot;: (74) and(not(some(x,house(x),has(john,x))),not(some (x,car(x), has(john,x)))).</Paragraph>
    <Paragraph position="19"> However, consider the sentence &amp;quot;John met a farmer and his wife&amp;quot; and suppose the pronoun &amp;quot;his&amp;quot; refers to &amp;quot;a farmer.&amp;quot; Under our analysis, the quantifier from &amp;quot;a farmer&amp;quot; cannot bind a variable in the range restriction of the other quantifier--because its scope does not include the other quantifier. Thus, the Montagovian analysis of NP conjunction is certainly correct in some cases, but it cannot be the whole story.</Paragraph>
  </Section>
  <Section position="11" start_page="0" end_page="0" type="metho">
    <SectionTitle>
2.6 VERB PHRASE AND SENTENCE RULES
</SectionTitle>
    <Paragraph position="0"> Our grammar includes two kinds of transitive verbs: ordinary verbs like &amp;quot;eat&amp;quot; and &amp;quot;buy,&amp;quot; and propositional attitude verbs like &amp;quot;want&amp;quot; and &amp;quot;seek.&amp;quot; Only verbs of the second kind have de dicto readings. There is a de dicto reading for &amp;quot;John wants a Ferrari,&amp;quot; which does not imply that there is any particular Ferrari he wants. There is no such reading for &amp;quot;John bought a Ferrari.&amp;quot; To build a de dicto reading, a verb like &amp;quot;want&amp;quot; must have access to the quantifier of its direct object. Verbs like &amp;quot;buy&amp;quot; do not need this access. This leads to a problem that has been well known since Montague. The two kinds of verbs, although very different in their semantics, seem to be identical in their syntax. We would like to avoid duplication in our syntax by writing a single rule for VPs with transitive verbs. This rule must allow for both kinds of semantics.</Paragraph>
    <Paragraph position="1"> Montague's solution was to build a general semantic representation, which handles both cases. When the verb is &amp;quot;eat&amp;quot; or &amp;quot;buy,&amp;quot; one uses a meaning postulate to simplify the representation. Our solution is similar: we allow every transitive verb to have access to the quantifier of its direct object, and then assert that some verbs don't actually use the quantifier. However, our solution improves on Montague and Cooper by avoiding the simplification step. Instead, we build a simple representation in the first place.</Paragraph>
    <Paragraph position="2"> A verb has one feature, the subcategorization frame, which determines what arguments it will accept and what logical form it builds. The rule for verbs says that if a terminal symbol has a subcategorization frame Subcat, then it is a verb:</Paragraph>
    <Paragraph position="4"> A subcategorization frame for a transitive verb has the form (75) trans(V1,V2,QL1,QL2,Wffl).</Paragraph>
    <Paragraph position="5"> V1 is a variable representing the subject, and V2 is a variable representing the object. QL1 is the quantifier store of the object. QL2 is a list of quantifiers remaining after the verb has built its logical form. For an ordinary transitive verb, QL2 equals QL1. Wffl is the logical form of the verb. In the case of ordinary transitive verbs, we would like to assert once and for all that QL1 = QL2. Therefore, we</Paragraph>
    <Paragraph position="7"> This axiom says that for an ordinary transitive verb, the two lists of quantifiers are equal, and the values of the other features are fixed by the predicate &amp;quot;ordinary_trans.&amp;quot; We</Paragraph>
    <Paragraph position="9"> From (R7), (L2), and (L3a) we get (76) v(trans(V1,V2,QL1,QL1,saw(V1,V2))) --~ \[saw\]. Computational Linguistics Volume 16, Number 4, December 1990 225 Andrew R. Haas Sentential Semantics for Propositional Attitudes The features of a verb phrase are a variable (representing the subject), a wff (the logical form of the VP), a quantifier store, a list of fillers, and a pronoun reference list. The rule for a verb phrase with a transitive verb is</Paragraph>
    <Paragraph position="11"> np(V2,QL1,Fx-Fy,L).</Paragraph>
    <Paragraph position="12"> If the verb is an ordinary transitive verb, then QL1 = QL2, so the quantifier store of the VP is equal to the quantifier store of the direct object. From (R1), (R3a), and (R2b) we have (77) np(V,\[p(Wffl,some(V,man(V),Wffl))| QL\]-QL,Fx-Fx,L) --~ \[a man\].</Paragraph>
    <Paragraph position="13"> Resolving (76) and (77) against the right side of R8 gives</Paragraph>
    <Paragraph position="15"> \[saw a man\].</Paragraph>
    <Paragraph position="16"> The quantifier store contains the quantifier of the NP &amp;quot;a man.&amp;quot; A sentence has four features: a wff, a quantifier store, a list of fillers, and a pronoun reference list. The rule for a declarative sentence is</Paragraph>
    <Paragraph position="18"> The variable V represents the subject, so it becomes the first argument of the VP. QL1-QL3 is the concatenation of the quantifier stores from the subject and the VP.</Paragraph>
    <Paragraph position="19"> &amp;quot;Apply_quants&amp;quot; will apply some of these quantifiers to the logical form of the VP to produce the logical form Wff2 of the sentence. The list QL4 of remaining quantifiers becomes the quantifier store of the sentence. We have  The., &amp;quot;apply_quants&amp;quot; subgoal has several solutions. Choosing the one in which &amp;quot;every woman&amp;quot; outscopes &amp;quot;a man,&amp;quot; we get (81) s(all(x,woman(x),some(y,man(y),saw(x,y))),QL3-QL3,Fx-Fx,L) ~ \[every woman saw a man\]. The; derivation is not yet complete, because &amp;quot;s&amp;quot; is not the start symbol of our grammar. Instead we use a special symbol &amp;quot;start,&amp;quot; which never appears on the right side of a rule. Thus, the start symbol derives only top-level sentenees--it cannot derive an embedded sentence. This is useful because top-level sentences have a unique semantic property: their logical forms must not contain free variables. It might seem that one can eliminate free variables simp\]',y by applying all the quantifiers in the store. Hobbs and Shieber (1987) pointed out that this is not so--it is essential to apply the quantifiers in a proper order. Consider the sentence &amp;quot;every man knows a woman who loves him,&amp;quot; with &amp;quot;him&amp;quot; referring to the subject. The subject quantifier binds a variable that occurs free in the range restriction of the object quantifier, so one must apply the object quantifier first in order to eliminate all free variables. null Therefore our grammar includes a filter that eliminates readings of top-level sentences containing free variables.</Paragraph>
    <Paragraph position="20"> Let free_vars(Wffl,L) mean that L is a list of the free variables of Wffl in order of first occurrence. We omit the easy definition of this predicate. The rule for top-level sentences is:</Paragraph>
    <Paragraph position="22"> The goal free_vars(Wffl,\[\]) filters out readings with free variables. The above rule allows us to complete the derivation for &amp;quot;every woman saw a man&amp;quot;: (82) start(all(x,woman(x),some(y,man(y),saw(x,y)))) \[every woman saw a man\].</Paragraph>
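The free_vars predicate whose definition is omitted above is easy to sketch. In this illustration (an assumed tuple encoding, not the paper's), wffs are nested Python tuples, a variable occurrence is written ('var', name), all/some/unique bind a variable over their range and body, and let(V2,V,Wff) binds V2 while leaving V free:

```python
BINDERS = ("all", "some", "unique")

def free_vars(wff):
    # Free variables of a wff, in order of first occurrence.
    out = []
    def walk(t, bound):
        if not isinstance(t, tuple):
            return
        if t[0] == "var":
            if t[1] not in bound and t[1] not in out:
                out.append(t[1])
        elif t[0] in BINDERS:        # all/some/unique(V, Range, Body)
            walk(t[2], bound | {t[1]})
            walk(t[3], bound | {t[1]})
        elif t[0] == "let":          # let(V2, V, Body): V occurs free
            walk(t[2], bound)
            walk(t[3], bound | {t[1]})
        else:                        # ordinary predicate or function
            for sub in t[1:]:
                walk(sub, bound)
    walk(wff, frozenset())
    return out

# Object quantifier of "every man knows a woman who loves him":
# x, bound by the subject quantifier, is free in the range restriction.
obj = ("some", "y",
       ("and", ("woman", ("var", "y")),
        ("loves", ("var", "y"), ("var", "x"))),
       ("knows", ("var", "x"), ("var", "y")))
```

The start rule's goal free_vars(Wffl,\[\]) then discards any reading whose free-variable list is nonempty.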
    <Paragraph position="23"> Having treated sentences, we can now consider gaps and relative clauses. The rule for gaps follows Pereira and Shieber (1987):</Paragraph>
    <Paragraph position="25"> This rule removes the marker gap(V) from the filler list, and makes the associated variable V the variable of the empty NP. The list difference QL-QL is the empty list, so the quantifier store of the gap is empty.</Paragraph>
    <Paragraph position="26"> The rule that generates NPs with relative clauses is</Paragraph>
    <Paragraph position="28"> --~ det(V,and(Range1,Range2),Q), n(V,Range1), \[that\], s(Range2,QL1-QL1,\[gap(V)\]-\[\],\[\]). The relative clause is a sentence containing a gap, and the logical form of the gap is the variable V--the same variable that the quantifier binds. The logical form of the relative clause becomes part of the range restriction of the quantifier. We have (83) s(some(x,pizza(x),ate(V,x)),QL-QL,\[gap(V)| Fx\]-Fx,\[\]) --~ \[ate a pizza\].</Paragraph>
    <Paragraph position="29"> The derivation of this sentence is much like the one for &amp;quot;every woman saw a man&amp;quot; above, except that in place of the subject &amp;quot;every woman&amp;quot; we have a gap as a subject. The rule for gaps above ensures that the variable of the gap is V, the only variable in the filler list, and its quantifier store is empty. Therefore, V appears as the first argument of the predicate &amp;quot;ate.&amp;quot; Continuing the derivation we get (84) np(V,\[p(Wffl,some(V,and(man(V),some(x,pizza(x),ate(V,x))),Wffl))| QL\]-QL,Fx-Fx,\[\]) --~ \[a man that ate a pizza\].</Paragraph>
    <Paragraph position="30"> The string &amp;quot;a man that ate a pizza&amp;quot; is an NP that binds V and maps Wffl to the wff (85) some(V,and(man(V),some(x,pizza(x),ate(V,x))), Wffl).</Paragraph>
    <Paragraph position="31"> Notice that in the rule for NPs with relative clauses, the quantifier store of the relative clause is empty. This means that no quantifier can be raised out of a relative clause. Thus there is no scope ambiguity in &amp;quot;I saw a man that loves every woman.&amp;quot; According to Cooper (1979), this is correct. The restriction is easy to state because in our grammar, quantifier raising is combined with syntax and semantics in a single set of rules. It would be harder to state the same facts in a grammar like Pereira and Shieber's (1987), because quantifier raising there operates on a separate representation called a quantifier tree. This tree leaves out syntactic information that is needed for determining scopes--for example, the difference between a relative clause and a prepositional phrase.</Paragraph>
  </Section>
  <Section position="12" start_page="0" end_page="0" type="metho">
    <SectionTitle>
3 PROPOSITIONAL ATTITUDES IN THE
GRAMMAR
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
3.1 ATTITUDE VERBS TAKING CLAUSES
</SectionTitle>
      <Paragraph position="0"> The following rule introduces verbs such as &amp;quot;believe&amp;quot; and &amp;quot;know,&amp;quot; which take clauses as their objects.</Paragraph>
      <Paragraph position="2"> s(Wff2,QL1,Fx,\[V1 | L\]).</Paragraph>
      <Paragraph position="3"> The verb takes the logical form Wff2 of the object clause and the subject variable V1, and builds the wff Wffl representing the VP. This rule also adds the subject variable to the pronoun reference list of the object clause. For the verb &amp;quot;thought,&amp;quot; we have the following subcategorization frame:</Paragraph>
      <Paragraph position="5"> :- free_vars(Wffl,Varsl).</Paragraph>
      <Paragraph position="6"> The subject variable becomes the first argument of the predicate &amp;quot;thought.&amp;quot; The logical form of the object clause is Wffl, and it appears under a quotation mark as the third argument of &amp;quot;thought.&amp;quot; The second argument of &amp;quot;thought&amp;quot; is the de re argument list, and the predicate &amp;quot;free_vars&amp;quot; ensures that the de re argument list is a list of the free variables in Wffl, as required by our convention. From the rule (R8) for verbs and (L4), we get</Paragraph>
      <Paragraph position="8"> The &amp;quot;free_vars&amp;quot; subgoal has not been solved--it has been postponed. Indeed it must be postponed, because as long as its first argument is a variable, it has an infinite number of solutions--one for each wff of our language.</Paragraph>
      <Paragraph position="9"> Consider the example &amp;quot;John thought Mary ate a pizza.&amp;quot; We consider two readings. &amp;quot;A pizza&amp;quot; is understood de dicto in both readings, but &amp;quot;Mary&amp;quot; is de re in one reading and de dicto in the other. The ambiguity arises from the embedded sentence, because the predicate &amp;quot;apply_quants&amp;quot; can either apply the quantifiers or leave them in the store. If it applies the quantifiers from &amp;quot;Mary&amp;quot; and &amp;quot;a pizza&amp;quot; in their surface order, we get  --~ \[thought Mary ate a pizza\].</Paragraph>
      <Paragraph position="10"> If we combine this VP with the subject &amp;quot;John&amp;quot; we get a sentence whose logical form is (90) unique(z,name(z,john), thought(z,\[\],q(unique(y,name(y,mary),some(x,pizza(x),ate(y,x)))))).</Paragraph>
      <Paragraph position="11"> Now for the reading in which &amp;quot;mary&amp;quot; is de re. Once again, consider the embedded sentence &amp;quot;Mary ate a pizza.&amp;quot; Suppose that the predicate &amp;quot;apply_quants&amp;quot; applies the quantifier from &amp;quot;a pizza&amp;quot; and leaves the one from &amp;quot;Mary&amp;quot; in the store. We get  In this case, the first argument of &amp;quot;free_vars&amp;quot; contains the meta-language variable V1. Then the &amp;quot;free_vars&amp;quot; subgoal has an infinity of solutions---one in which V 1 = x and there are no free variables, and an infinite number in which V1 = y, for some y not equal to x, and the list of free variables is \[y\]. Therefore, it is necessary to postpone the &amp;quot;free_vars&amp;quot; subgoal once more. The standard technique for parsing DCGs does not allow for this postponing of subgoals, and this will create a problem for our implementation.</Paragraph>
      <Paragraph position="12"> This problem would be greatly simplified if we had chosen to assign a different variable to every quantifier by using a global counter. The DCG parser would work from left to right and assign a target-language variable to each NP as soon as it parsed that NP. In the above example, &amp;quot;Mary&amp;quot; and &amp;quot;a pizza&amp;quot; would both have their variables assigned by the time we reached the right end of the VP.</Paragraph>
      <Paragraph position="13"> Then we could handle the &amp;quot;free_vars&amp;quot; subgoals by rewriting the grammar as follows: remove the &amp;quot;free_vars&amp;quot; subgoals from the lexical entries for the attitude verbs, and place a &amp;quot;free_vars&amp;quot; subgoal at the right end of each VP rule that introduces an attitude verb ((R13), (R15), and (R8)). This would ensure that when the parser attempted to solve the &amp;quot;free_vars&amp;quot; subgoal, its first argument would be a ground term. However, this solution would make it impossible to use the rule (R6) for NP conjunction (see Section 2.5). If we pick one solution for the problem of choosing bound variables, we have problems with NP conjunction; if we pick the other solution we get problems in implementing our analysis of de re attitude reports. This is the kind of difficulty that we cannot even notice, let alone solve, until we write formal grammars that cover a reasonable variety of phenomena.</Paragraph>
      <Paragraph position="14"> Continuing our derivation, we combine the VP with the subject &amp;quot;John,&amp;quot; apply the quantifier from &amp;quot;Mary,&amp;quot; and get</Paragraph>
      <Paragraph position="16"> Now the first argument of &amp;quot;free_vars&amp;quot; is a ground term, because applying the quantifier that arose from &amp;quot;Mary&amp;quot; includes choosing the target language variable that the quantifier binds. The &amp;quot;free_vars&amp;quot; subgoal now has only one solution, Varsl = \[y\]. Then the logical form of the sentence is (94) unique(z,name(z,john), unique(y,name(y,mary), thought(z,\[y\],q(some(x,pizza(x), ate(y,x)))))).</Paragraph>
      <Paragraph position="17"> This means that there is a term T that represents Mary to John, and John believes the sentence (95) some(x,pizza(x),ate(T,x)).</Paragraph>
      <Paragraph position="18"> For the sentence &amp;quot;John thought he saw Mary,&amp;quot; our limited treatment of pronouns allows only one reading, in which &amp;quot;he&amp;quot; refers to John. Using (R9), we get the following reading for the embedded clause:</Paragraph>
      <Paragraph position="20"> --~ \[he saw Mary\].</Paragraph>
      <Paragraph position="21"> The pronoun &amp;quot;he&amp;quot; gives rise to a &amp;quot;let&amp;quot; quantifier, which binds the variable y to V1, the first member of the pronoun reference list. From (R13), (86), and (96) we get  The VP rule (R13) unifies the subject variable V2 with the first element of the pronoun reference list of the embedded clause, so the &amp;quot;let&amp;quot; quantifier now binds the variable y to the: subject variable. Once again, we postpone the &amp;quot;free_vars&amp;quot; 228 Computational Linguistics Volume 16, Number 4, December 1990 Andrew R. Haas Sentential Semantics for Propositional Attitudes goal until its first argument is a ground term. Combining this VP with the subject &amp;quot;John&amp;quot; gives  The first argument of&amp;quot;free_vars&amp;quot; is now a ground term, and solving the &amp;quot;free_vars&amp;quot; subgoal gives Varsl = \[z\]. The logical form of the sentence is (99) unique(z,name(z,john), thought(z,\[z\],q(let(y,z, unique(x,name(x,mary), saw(y,x)))))).</Paragraph>
      <Paragraph position="22"> The dummy variable z stands for a term T that represents John to himself. Then John's belief looks like this: (100) let(y,T, unique(x,name(x,mary), saw(y,x))).</Paragraph>
      <Paragraph position="23"> If John simplifies this belief, he will infer (101 ) unique(x,name(x,mary),saw(T,x)).</Paragraph>
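The simplification John performs on his belief is substitution for the let-bound variable: let(y,T,Wff) reduces to Wff with T in place of y. A capture-naive sketch over an assumed tuple encoding of wffs (not the paper's machinery; 'T' stands in for the representing term):

```python
def subst(wff, v, term):
    # Replace occurrences of ('var', v) by term (capture-naive sketch).
    if wff == ("var", v):
        return term
    if isinstance(wff, tuple):
        return tuple(subst(t, v, term) for t in wff)
    return wff

def simplify_let(wff):
    # Rewrite let(V, Term, Body) to Body with V replaced by Term,
    # recursing so that nested lets are also reduced.
    if not isinstance(wff, tuple):
        return wff
    if wff[0] == "let":
        _, v, term, body = wff
        return simplify_let(subst(body, v, term))
    return tuple(simplify_let(t) for t in wff)

# Belief (100): let(y, T, unique(x, name(x,mary), saw(y,x))),
# where T is a term representing John to himself.
belief = ("let", "y", "T",
          ("unique", "x", ("name", ("var", "x"), "mary"),
           ("saw", ("var", "y"), ("var", "x"))))
```

Applied to the belief above, the reduction yields the shape of (101): unique(x,name(x,mary),saw(T,x)).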
    </Section>
  </Section>
  <Section position="13" start_page="0" end_page="0" type="metho">
    <SectionTitle>
3.2 ATTITUDE VERBS TAKING A CLAUSE
WITH A GAP
</SectionTitle>
    <Paragraph position="0"> We proposed the following logical form for &amp;quot;John knows who Mary likes&amp;quot;: (102) some(x,person(x),know(john, \[x\],q(like(mary,x)))). The grammar will generate a similar logical form, except for the translations of the proper nouns. The existential quantifier comes from the word &amp;quot;who.&amp;quot; The rules for &amp;quot;who&amp;quot; and &amp;quot;what&amp;quot; are</Paragraph>
    <Paragraph position="2"> The semantic features of a wh word are a variable, and a list containing a quantifier that binds that variable.</Paragraph>
    <Paragraph position="3"> The following rule builds VPs in which the verb takes a wh word and a clause as its objects:</Paragraph>
    <Paragraph position="5"> The embedded S contains a gap, and the variable of that gap is the one bound by the quantifier from the wh word.</Paragraph>
    <Paragraph position="6"> The main verb takes the subject variable and the logical form of the embedded S and builds a wff Wff2. The rule finally calls &amp;quot;apply_quants&amp;quot; to apply the quantifier from the wh word to Wff2. &amp;quot;Apply_quants&amp;quot; can apply any subset of the quantifiers in its first argument, but the rule requires the output list of quantifiers to be empty, and this guarantees that the quantifier from the wh word will actually be applied. The resulting wff becomes the logical form of the VP.</Paragraph>
    <Paragraph position="7"> The rule requires a verb whose subcategorization frame has the form takes_wh(Vl,Wffl,Wff2). &amp;quot;Know&amp;quot; is such a</Paragraph>
    <Section position="1" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
3.3 ATTITUDE VERBS TAKING A NOUN PHRASE
</SectionTitle>
      <Paragraph position="0"> Finally, we consider an example with &amp;quot;want.&amp;quot; This verb is semantically very different from most transitive verbs, but syntactically it is an ordinary transitive verb, introduced by the rule already given:</Paragraph>
      <Paragraph position="2"> np(V2,QL1,Fx-Fy,L).</Paragraph>
      <Paragraph position="3"> The difference between &amp;quot;want&amp;quot; and other transitive verbs is in its subcategorization frame:</Paragraph>
      <Paragraph position="5"> free_vars(Wffl,Varsl).</Paragraph>
      <Paragraph position="6"> Resolving this rule against the verb rule (R7) gives the following rule for the verb &amp;quot;wants&amp;quot;:</Paragraph>
      <Paragraph position="8"> The quantifier list QL1 contains the quantifier from the object NP. The predicate &amp;quot;apply_quants&amp;quot; may or may not apply this quantifier to the wff have(VI,V2), and this nondeterminism gives rise to a de re~de dicto ambiguity. If &amp;quot;apply_quants&amp;quot; does not apply the object quantifier, then QL2 = QL1, so the object quantifier is passed up for later application. Otherwise, QL2 is the empty list. As usual, the predicate &amp;quot;free_vars&amp;quot; ensures that the de re arguments obey our convention.</Paragraph>
      <Paragraph position="9"> Consider the VP &amp;quot;wants a Porsche.&amp;quot; The object &amp;quot;a Porsche&amp;quot; has the following interpretation:</Paragraph>
      <Paragraph position="11"> Given this solution, the logical form of the VP is</Paragraph>
      <Paragraph position="13"> where V1 is the subject variable and the &amp;quot;free_vars&amp;quot; subgoal has been postponed. We can combine this VP with the subject &amp;quot;John&amp;quot; to get a sentence whose logical form is (115) unique(y,name(y,john), wish(y,Varsl,q(some(x,porsche(x),have(y,x))))).</Paragraph>
      <Paragraph position="14"> Solving the &amp;quot;free_vars&amp;quot; subgoal will then give Varsl = \[y\], so the final logical form is (116) unique(y,name(y,john), wish(y,\[y\],q(some(x,porsche(x),have(y,x))))). This means that there is a term T that represents John to himself, and the sentence that John wishes to be true is (117) some(x,porsche(x),have(T,x)).</Paragraph>
      <Paragraph position="15"> This is a de dicto reading--there is not any particular Porsche that John wants.</Paragraph>
      <Paragraph position="16"> The other solution for the &amp;quot;apply_quants&amp;quot; subgoal is  In this case, the logical form of the VP is (119) wish(V1,Varsl,q(have(V1,V2))) and its quantifier store is equal to QL2. Combining this VP with the subject &amp;quot;John&amp;quot; and applying the quantifiers gives a sentence whose logical form is (120) unique(y,name(y,john), some(x,porsche(x), wish(y,Varsl,q(have(y,x))))).</Paragraph>
      <Paragraph position="17"> Solving the &amp;quot;free_vars&amp;quot; subgoal gives Varsl = \[y,x\] so the final logical form is (121) unique(y,name(y,john), some(x,porsche(x), wish (y,\[y,x\],q(have(y,x))))).</Paragraph>
      <Paragraph position="18"> This means that there exist terms T1 and T2 such that T1 represents John to himself, T2 represents some Porsche to John, and the sentence John wishes to be true is (122) have(T1,T2). This is a de re reading, in which John wants some particular Porsche.</Paragraph>
      <Paragraph position="19"> The rules for verbs that take clauses as complements did not need to call &amp;quot;apply_quants,&amp;quot; because the rules that build the clauses will call &amp;quot;apply_quants&amp;quot; and so create the desired ambiguity. In Cooper's grammar, all NPs have the option of applying their quantifiers, and so there is no need for verbs like &amp;quot;want&amp;quot; to apply quantifiers--they can rely on the rule that built the verb's object, just as other intensional verbs do. This is a minor advantage of Cooper's grammar.</Paragraph>
    </Section>
  </Section>
  <Section position="14" start_page="0" end_page="0" type="metho">
    <SectionTitle>
4 IMPLEMENTATION AND CONCLUSIONS
4.1 IMPLEMENTATION
</SectionTitle>
    <Paragraph position="0"> The implementation uses the standard Prolog facility for parsing definite clause grammars. This facility translates the grammar into a top-down, left-to-right parser. This order of parsing leads to problems with the predicates &amp;quot;apply_quants&amp;quot; and &amp;quot;free_vars.&amp;quot; We cannot run &amp;quot;free_vars&amp;quot; until its first argument is a ground term---otherwise we might get an infinite number of solutions. In our exposition, we solved this problem by delaying the execution of &amp;quot;free_vars.&amp;quot; The standard DCG parser has no built-in facility for such delaying. As usual in such situations, there are two options: rewrite the predicates so that the existing interpreter works efficiently, or define a more general interpreter that allows the desired order of execution. The second approach is more desirable in the long run, because it achieves a central goal of logic programming: to use logical sentences that express our understanding of the problem in the clearest way. However, defining new interpreters is hard. The present implementation takes the low road--that is, the author rewrote the predicates so that the standard parser becomes efficient. In particular, the rule for top-level clauses calls a Prolog predicate that finds all de re argument lists in the final logical form and calls &amp;quot;free_vars&amp;quot; for each one.</Paragraph>
    <Paragraph position="1"> There is a similar problem about the predicate &amp;quot;apply_quants&amp;quot; in the rule for &amp;quot;want.&amp;quot; Since the parser works left to right, the quantifier from the object of &amp;quot;want&amp;quot; is not available when the logical form for the verb is being constructed. This means that the first argument of &amp;quot;apply_quants&amp;quot; is a free variable--so it has an infinite number of solutions. Here the implementation takes advantage of Prolog's &amp;quot;call&amp;quot; predicate, which allows us to delay the solution of a subgoal. The &amp;quot;apply_quants&amp;quot; subgoal is an extra feature of the verb &amp;quot;want&amp;quot; (in the case of an ordinary transitive verb, this feature is set to the empty list of goals).</Paragraph>
    <Paragraph position="2"> The rule for VPs with transitive verbs uses the &amp;quot;call&amp;quot; predicate to solve the subgoal--after the object of the verb has been parsed. At this point the first argument is properly instantiated and the call produces a finite set of solutions.</Paragraph>
    <Paragraph position="3"> The grammar given above contains the rule NP --~ NP \[and\] NP, which is left recursive and cannot be parsed by the standard DCG parser. The implementation avoids this problem by adding a flag that indicates whether an NP is conjunctive. This gives the rule (123) NP(+conj) --~ NP(-conj) \[and\] NP(Conj), which is not left recursive--it assigns a right-branching structure to all conjunctions of NPs. These are the only differences between the grammar presented here and the Prolog code. The implementation was easy to write and modify, and it supports the claim that Prolog allows us to turn formal definitions into running programs with a minimum of effort.</Paragraph>
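The effect of the conj flag can be sketched as a recursive-descent parser: since the left conjunct must be a non-conjunctive NP, the parser never re-enters the rule on its leftmost symbol, and conjunctions come out right-branching. (The single-word NPs and all names below are a toy stand-in for the real grammar.)

```python
def parse_np(tokens, i):
    # NP(Conj) -> word | word "and" NP(Conj).  The left conjunct is a
    # bare NP(-conj), so the rule is no longer left recursive, and
    # conjunctions receive a right-branching structure.
    if not tokens[i:]:
        return None
    left = tokens[i]                      # the NP(-conj) conjunct
    if tokens[i + 1:i + 2] == ["and"]:
        rest = parse_np(tokens, i + 2)    # the NP(Conj) remainder
        if rest is not None:
            tree, j = rest
            return ("and", left, tree), j
    return left, i + 1

tree, _ = parse_np(["john", "and", "mary", "and", "bill"], 0)
# right-branching: ("and", "john", ("and", "mary", "bill"))
```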
  </Section>
</Paper>