<?xml version="1.0" standalone="yes"?>
<Paper uid="C88-1003">
  <Title>Functional Constraints in Knowledge-Based Natural Language Understanding</Title>
  <Section position="1" start_page="0" end_page="14" type="abstr">
    <SectionTitle>
Abstract
</SectionTitle>
    <Paragraph position="0"> Many knowledge-based systems of semantic interpretation rely explicitly or implicitly on an assumption of structural isomorphy between syntaotic and semantic objects, handling exceptions by ad hoc measures. In this paper I argue that constraint equations of the kind used in the LFG- (or PATR-)formalisms provide a more general, and yet restricted formalism :in which not only isomorphic correspondences are expressible, but also many cases of non-isomorphic correspondences. I illustrate with treatments of idioms, speech act interpretation and discourse pragmatics.</Paragraph>
    <Paragraph position="1"> 1. Background and purpose In knowledge-based natural language understanding systems the role of syntax is by no means self-evident. In the Yalean tradition /,~tchank &amp; Riesbeck, 1981/ syntax has only played a minor role and whatever little syntactic information there is has been expressed in simple terms. Consequently, there is no grammar as such and syntactic conditions are freely intermixed with semantic conditions in the requests that drive the system forward/13irnbaum &amp; Selfridge, 1981/. Similarly, in frame-based systems such as /Hayes, 1984/ the syntactic information is stated in ~:onjunction with all other information relevant for instances ot a frame. A justification for this approach, apart from transparency, is that it makes sense to say that part of our knowledge of a concept is knowledge about how it is communicated. null A major disadvantage of this approach is of course its lack of generality. To overcome this problem we may extract general syntactic knowledge' and make use of it in a syntactic parser which works alongside with the semantic analyser. Examples of such systems are PSI-KLONE /Bobrow &amp; Webber, 1980; Sondheimer et ah, 1984/ and MOPTRANS /Lytinen, 1986; 1987/. The promise of these systems is that you get both modularity and integration, although there are many open questions about how the integration can best be achieved.</Paragraph>
    <Paragraph position="2"> Moreover, one would I!ke to put the integration of syntax and semantics, not just syntax and semantics per se, on a principled basis, i.e. we need a theory of how syntactic and semantic objects correspond. Linguistics and philosophy offer some guidelines here, such as compoeitionaiity, and a number of different theories, but a problem is that the semantic objects considered are usually not knowledge structures. /Hirst, 1987/, though, is an attempt at a principled, modular and integrated knowledge-based system where compositionality and a principle of strong typing provide the theoretical underpinnings. These principles teem to provide a tighter straight-jacket than one would really want, however, as indicated by the many structures that Hirst shows are problematic for his system.</Paragraph>
    <Paragraph position="3"> Another~ more recent approach is to capture correspondences between syntactic and semantic objects through constraints /Halvorsen~ 1983; 1987; Fenstad etal., 1985; Kaplan, 1987/. An essential feature of constraints is that they simultaneously characterize properties of a structural level and account for a correspondence between those properties and properties of another level, i.e. the level to which the constraint is attached. The correspondence may be between two different levels of syntactic structure, as in LFG, or between a syntactic structure and a semantic structure or conceivably between any two structural aspects that constrain each other. So far it seems that constraints have primarily been stated in the direction from form to meaning, where meaning has been regarded as inherent in linguistic expressions and thus derivable from an expression, given a grammar and a lexicon.</Paragraph>
    <Paragraph position="4"> In a working system, however, we are not merely interested in a decontextualised meaning of an expression, but in the content communicated in an utterance of an expression, which, as we know, depend on world knowledge and context in more or less subtle ways. A rather trivial fact is that we need to have an understanding of the context in order to find a referent for a referring expression. A more interesting fact is that we often need an understanding of context in order to get at the information which is relevant for determining the referent /Moore, 1981; Ahrenberg a 1987a,b; Pulman, 1987/.</Paragraph>
    <Paragraph position="5"> In a knowledge-based system, the knowledge-base provides an encoding of general world knowledge as well as a basis for keeping track of focal information in discourse. It seems a natural move to combine a knowledge-based semantics with the descriptive elegance and power of constraints, but as far as I know, not much work has been done in this area. /Tomita &amp; Carbonell, 1986/presents a knowledge-based machine-translation system based on functional grammar and entity-oriented parsing.</Paragraph>
    <Paragraph position="6"> In this paper I discuss the role of syntax in three general and related aspects of utterance interpretation: referent determination, classification, and role identification. A joint solution to these problems will fall out if we assume, as is often done, a simple, one-to-one structural (or categorial) correspondence between syntactic and semantic objects. This is done explicitly e.g. by /Danleli et ah, 1987/ and /Hirst, 1987/ and, so far as I can judge, implicitly in many other systems.</Paragraph>
    <Paragraph position="7"> However, the assumption is much too simplified and must be amended. I will illustrate some cases where the correspondences are more involved and argue that local constraints of the kind used in the LFG-formaiism /Kaplan &amp; Bresnan, 1982/ are able to handle them in a fairly straight-forward way. Thus, instead of ad hoc-solutions the isomorphic cases will in this framework fall out as particularly simple instances of the general principles.</Paragraph>
    <Paragraph position="8"> 2. A framework and a system I regard the process of interpretation as a process in which a given object, the utterance, is assigned a description, the analyMs. The description has different aspects, primary among them being  - a constituent structure, (c-structure) . a functional structure, (f-structure), a semantic structure, (d-structure) and * a content structure.</Paragraph>
    <Paragraph position="9"> I refer to these structural levels as aspects in order to emphasize the idea that they are all part of one and the same interpretation of the utterance. The c-structure and the f-structure are roughly as in LFG /Kaplan &amp; Bresnan, 1982/, but with some important deviations. The functional structure is strictly syntactic. There are no semantic forms and hence no grammatical notions of coherence and completeness. Instead of the PRED-attribute, there is an attribute LEX whose value is a &amp;quot;lexeme&amp;quot;, an abstract grammatical unit which in turn is associated with semantic objects: object types, semantic attributes, and so on.</Paragraph>
    <Paragraph position="10"> The semantic structure is a descriptor structure ('dag') just as the functional structure, but with descriptors pertaining to the discourse referents accessed or made available by the utterance. Thus, a constituent of the semantic structure consists of a description that potentially applies to an object in the universe of discourse. The content structure differs from the semantic structure mainly in that referents for descriptions have been identified (where possible).</Paragraph>
    <Paragraph position="11"> If a c-structure, an f-structure and a d-structure apply to an expression under a given interpretation they are said to correspond. If, similarly, a sub-expression of the input is associated with constituents at all three levels, these constituents are said to correspond.</Paragraph>
    <Paragraph position="12"> Correspondences between c-structure and f-structure are defined by an LFG-style grammar and a dictionary of stems and ~. affixes. Correspondences between f-structure and d-structure are defined by the lexeme dictionary and information in the knowledge-base. Primary among the knowledge structures are types, attributes, and instances. Every type is associated with a prototype, a frame-like structure whlch'defines what attributes apply to instances of that type, as well as restrictions on their values. Prototypes are also associated with functional constraints, thus defining possible correspondences between d-structures and f-structures. For example, the attribute AGENT, beside other restrictions on its occurrence and values, may be assigned the canonical constraint (~ SUB J) = ~. The arrows in this schema have the same interpretation as in the grammar rules: i&amp;quot; points to the f-structure node corresponding to the description of which the attribute is part, ~ points to the f-structure node corresponding to its value.</Paragraph>
    <Paragraph position="13"> Semantic attributes may also be associated with contextual constraints. The context is represented by a special object, the discourse state (DS), the description of which encodes the contextual information that the system currently has available. In particular, this will include information about who is speaker and who is addressee. A simple contextual constraint can be stated as =(DS SPEAKER), which when associated with an attribute asserts, the identity between its value and the current speaker.</Paragraph>
    <Paragraph position="14"> The relations between different structural aspects and the knowledge sources that define and constrain them are illustrated in figure 1.</Paragraph>
    <Paragraph position="15"> In the process of interpretation the analysis is ideally constructed incrementally. When information is added to one structural aspect and there is a constraint associated with this information, we are justified in adding the information stated in the constraint to the structural aspect to which it relates. If this is not possible, e.g. due to the presence of contradicting information, the initial information can be rejected.</Paragraph>
    <Paragraph position="16">  The ideas presented here have been partially implemented in a system called FALIN, which is developed as a precursor to an understanding component of a dialog system (Ahrenberg, 1987a). 1 The world of FALIN is a drawing-pad and its knowledge cover simple geometrical objects that the user can draw and manipulate using natural language. FALIN's parser (a chart parser) is constructing a c-~tructure and an f-structure in tandem, hut hands f-structures over to another module which attempts to construct corresponding d-structures and content structures. The content structure is then evaluated against the knowledge-base.</Paragraph>
  </Section>
class="xml-element"></Paper>