<?xml version="1.0" standalone="yes"?>
<Paper uid="A83-1012">
  <Title>Hendrix, G. G., Sacerdoti, E. D., Sagalowicz, D., and Slocum, J., &amp;quot;Developing a Natural Language Interface to Complex Data.&amp;quot; Association for Computing Machinery Transactions on Database</Title>
  <Section position="6" start_page="74" end_page="75" type="metho">
    <SectionTitle>
TREATED LIKE A WORD WITH MULTIPLE SENSES. The
</SectionTitle>
    <Paragraph position="0"> definitions of the words &amp;quot;fly&amp;quot;, &amp;quot;eat&amp;quot; and &amp;quot;A/C&amp;quot; are shown in Fig. 2.</Paragraph>
    <Paragraph position="1"> The definition of &amp;quot;A/C&amp;quot; states that it means AIRCRAFT or AIR-CONDITIONER. APE-If uses selectional restrictions to choose the proper sense of &amp;quot;A/C&amp;quot; in the question &amp;quot;What A/C can fly from Hahn?&amp;quot;. On the other hand, in the sentence &amp;quot;Send 4 A/C to BE70701.&amp;quot;, APE-II utilizes the facts that the OCA script is active, and that sending aircraft to a target is a scene of that script, Co determine that &amp;quot;A/C&amp;quot; means AIRCRAFT. In the question &amp;quot;What is an A/C?&amp;quot;, APE-II uses a weaker argument to resolve the potential ambiguity. It utilizes the fact that AIRCRAFT is an object that can perform a role in the OCA script, while an AIR-CONDITIONER cannot.</Paragraph>
    <Paragraph position="2"> The definition of &amp;quot;fly&amp;quot; states that it means FLY which is a kind of physical transfer. The expectations associated with fly state the actor of the sentence (i.e., a concept which precedes the action in a d~clarative sentence, follows &amp;quot;by&amp;quot; in a passive sentence, or appears in various places in questions, etc.) is expected to be an AIRCRAFT in which case it is the OBJECT of FLY or is expected to be a BIRD in which case it is both the ACTOR and the OBJECT of the physical transfer. This is the expectation which can select the intended sense of &amp;quot;A/C&amp;quot;. If the word &amp;quot;~o&amp;quot;  appears, it might serve the function of indicating the filler of the TO case of FLY. The word &amp;quot;from&amp;quot; is given a similar definition, which would fill the FROM case with the object of the preposition which :should be a PICTURE-PRODUCER but is preferred to be a LOCATION.</Paragraph>
    <Paragraph position="3"> The definition of &amp;quot;eat&amp;quot; contains an expectation with s selectional preference which indicates that the object is preferred to be food.</Paragraph>
    <Paragraph position="4"> This preference serves another purpose also. The object will be converted to a food if possible.</Paragraph>
    <Paragraph position="5"> For example, if the object were &amp;quot;chicken&amp;quot; then this conversion would assert that it is a dead and cooked chicken.</Paragraph>
    <Paragraph position="6"> We vili first discuss the parsing process as if sentences could be parsed in isolation and then explain how it is augmented to account for context.</Paragraph>
    <Paragraph position="7"> The simplified parsing process consists of adding the senses of each word to an active memory, considering the expectations, and removin E concepts (senses) which are not connected to other concepts.</Paragraph>
    <Paragraph position="8"> Word sense disambiguation and the resolution of pronominal references are achieved by several mechanisms. Selectional restrictions can be helpful to resolve m-biguities. For example, many actions require an animate actor. If there are several choices for the actor, the inanimate ones will be weeded out. Conversely, if there are several choices for the main action, and the actor has been established as animate, then ~hose actions which require an inanimate actor will be discarded.</Paragraph>
    <Paragraph position="9"> Selectional preferences are used in addition to selectioual restrictions. For example, if &amp;quot;eat&amp;quot; has an object which is a pronoun whose possible referents are a food and a coin, the food will be preferred and the coin discarded as a possible referent.</Paragraph>
    <Paragraph position="10"> A conflict resolution mechanism is invoked if more than one concept satisfies the restrictions and preferences. This consists of using &amp;quot;conceptual constraints&amp;quot; to determine if the CD structure which would be built is plausible. These constraints are predicates associated with CD primitives. For example, the locational specifier INSIDE has a constraint which states that the contents must be smaller than the container.</Paragraph>
    <Paragraph position="11"> The disnmbiguation process can make use of the knowledge structures which represent stereotypical domain information. The conflict resolution algorithm also determines if the CD structure which would be built refers to a scene in an active script and prefers to build this type of conceptualization. At the end of the parse, if there is an ambiguous nominal, the possibilities are matched against the roles of the active scripts. Nominals which can be a script role are preferred.</Paragraph>
    <Paragraph position="12"> A planned extension to the parsing algorithm consists of augmenting the definition of a word sense with information about whether it is an uncommonly used sense, and the contexts in which iC/ could be used (see \[Charniak, 1981\]). Only some senses will be added to the active memory and if  none of those concepts can be connected, other senses will be added. A similar mechanism can be used for potential pronoun referents, organizing concepts according to implicit or explicit focus in addition to their location in active or open focus spaces (see \[Grosz, 1977\]).</Paragraph>
    <Paragraph position="13"> Another extension to APE-II will be the incorporation of a mechanism similar to the named requests of APE. However, because the expectations of APE-II are in a declarative format, it is hoped that these requests can be generated from the causally linked scenes of the script.</Paragraph>
  </Section>
  <Section position="7" start_page="75" end_page="76" type="metho">
    <SectionTitle>
QUESTION ANSWERING
</SectionTitle>
    <Paragraph position="0"> After the meaning of a question has been represented, the question is answered by means of pattern-invoked rules. Typically, the pattern matching process binds variables to the major nominals in a question conceptualization. The referents of these nominals are used in executing a database query which finds the answer to the user's question. Although the question conceptualization and the answer could be used to generate a natural language response \[Goldman, 1975\], the current response facility merely substitutes the answer and referents in a canned response procedure associated with each question answering rule.</Paragraph>
    <Paragraph position="1"> The question answering rules are organized according to the context in which they are appropriate, i.e., the conversational script \[Lehnert, 1978\], and according to the primitive of the conceptualization and the &amp;quot;path to the focus&amp;quot; of the question. The path to the focus of a question is considered to be the path of conceptual cases which leads to the subconcept in question.</Paragraph>
    <Paragraph position="2"> A question answering production is displayed in Fig. 3. It is a default pattern designed to answer questions about which objects are at a location. This pattern is used to answer the question &amp;quot;~hat fighters do the airbasee in West Gerlmny have?&amp;quot;. In this example, the pattern variables &amp;LOC is bound to the meaning representation of &amp;quot;the airbases in West Germany&amp;quot; and &amp;OBJECT is bound to the meaning representation of &amp;quot;fighters&amp;quot;. The action is then executed and the referent of &amp;OBJECT is found to be (FIGHTER) and the referent of &amp;LOC is found to be (HAHN SEMBACH BITBURG). The fighters at each of these locations is found and the variable ANSWER is bound to the value of MAPPAIR:</Paragraph>
    <Paragraph position="4"> The response facet of the question answering production reformats the results of the action to merse locations with the same set of objects. The answer &amp;quot;There are none at Sembach. Hahn and Bitburg have F-4Cs and F-15s.&amp;quot; is printed on successive iteratione of PMAPC.</Paragraph>
    <Paragraph position="5"> The production in Fig. 3 is used to answer most questions about objects aC a location. It invokes a general function which finds the subset of ~he parts of a location which belong to a certain class. The OCA (offensive counter air) script used by the KNOBS system contains a more specific pattern for answering question about the defenses of a location. This production is used to answer the question &amp;quot;What SAMe are at BE70701?&amp;quot;. The action of this production executes a procedure which finds the subset of the surface to air missiles whose range is greater than the distance to the location.</Paragraph>
    <Paragraph position="6">  In addition to executing a database query, the action of a rule can racureively invoke other queJCion answering rules. For example, to answer the question '*Row many airbasaJ have F-At'e?&amp;quot;, a general rule converts the conceptualization of the question to that of '~hich airbaees have F-Atdege? &amp;quot; and counts the result of answering the larger. The question answering rules can also be used to find the referent of complex nominals such as &amp;quot;the airbases which have F-AC'e&amp;quot;. The path to the focus of the &amp;quot;question&amp;quot; is indicated by the conceptual case of the relative pronoun.</Paragraph>
  </Section>
  <Section position="8" start_page="76" end_page="77" type="metho">
    <SectionTitle>
INFERENCE
</SectionTitle>
    <Paragraph position="0"> when important roles are not filled in a concept, &amp;quot;conceptual completion&amp;quot; inferences are required to infer the fillers of conceptual cases.</Paragraph>
    <Paragraph position="1"> Our conceptual completion inferences are expressed as rules represented and organized in a manner analogous to question answering rules. The path to the focus of a conceptual completion inference ie the conceptual case which it is intended co explioate. Conceptual completion inferences are run only when necessary, i.e., when required by the pattern m4tcher to enable a question answering pattern (or even another inference pattern) to match successfully, An example conceptual completion inference is illustrated in FiE. 4. It is designed to infer the missing source of a physical transfer. The pattern binds the variable &amp;OBJECT co the filler of the OBJECT role and thq action executes a function which looks at the LOCATION case of &amp;OBJECT or checks the database for the known location of the referent of &amp;OBJECT. This inference would not be used in processin E the question &amp;quot;Which aircraft at Ramstein could reach the target from Hahn?&amp;quot; because the source has been explicitly stated. It would be used, on the other hand, in processing the question, &amp;quot;Which aircraft at Ramstein can reach the target?&amp;quot;. Its effect would be to fill the FROM slot of the question conceptualization with RAMSTEIN.</Paragraph>
  </Section>
  <Section position="9" start_page="77" end_page="77" type="metho">
    <SectionTitle>
(DEF-INFERENCE PAT (*PTRANS* OBJECT &amp;OBJECT)
ACTION (FIND-LOCATION &amp;OBJECT)
</SectionTitle>
    <Paragraph position="0"/>
    <Paragraph position="2"> If a question answering production cannot be found to respond to a question, and the question refers Co a scene in an active script, causal inferences are used CO find an answerable question vhich can be constructed as a state or action ~upliad by the original question. These inferences are represented by causal links \[CullinKford, 1978\] which connect the lCltel and actions of a stereotypical situation. The causal links used for this type of inference are RESULT (actions can result in state changes), ENABLE (states can enable action), and EESULT-ENA3LE (an action results in a state which enables an action). This last inference is so coumon that it is given a special link. In soma cases, the intermediate state is unimportant or unknown. In addition to causal links, temporal links are also represented to reason about the sequencing of actions.</Paragraph>
    <Paragraph position="3"> The causal inference process consists of locating a script paCtern of an active script which represents the scene of the script referred to by a question. The pattern matchfnE algorithm assures that the constants ~n the pattern are a super-class of the constants in the conceptual hierarchy of FRL frames. The variables in script patterns are the script roles which represent the common objects and actors of the script. The binding of script roles to subconcepts of a question conceptualization is subject to the recursive matching of patterns which indicate the common features of the roles. (This will be explained in more detail in the section on interactive script instantiation.) After the scene referenced by the user question is identified, a new question concept is constructed by substituting role bindings into patterus representing states or actions linked to the identified scene.</Paragraph>
    <Paragraph position="4"> Two script patterns from the OCA script are illustrated in Fig. 5. The script pattern named  AC-FLY-TO-TARCET matches the meaning of sentences which refer to the aircraft flying to the target from an airbase. It results in the aircraft being over the target which enables the aircraft to attack the target. The script pattern At-HIT-TARGET represents the propelling of a weapon toward the target. It results in the destruction of the target, and is followed by the aircraft flying back Co the airbase.</Paragraph>
    <Paragraph position="5"> The knowledge represented by these script patterns is needed to answer the question &amp;quot;What aircraft at Hahn can strike BE70701?&amp;quot;. The answer produced by KNOBS, &amp;quot;Y-15s can reach BE70701 from Hahn.&amp;quot;, requires a causal inference and a concept completion inference. The first step in producing this answer is to represent the meaning of the sentence. The conceptualization produced by APE-If is shown in Fig. 6a. A search for a question answering pattern to answer this fails, so causal inferences are tried. The question concept is identified Co he the AC-HIT-TARGET scene of the 0CA script, and the scene which RESULT-ENABLEs it, AC-FLY-TO-TARGET is instantiafied. This new question conceptualization is displayed in Fig 6b.</Paragraph>
    <Paragraph position="6"> A question answering pattern whose focus is (OBJECT IS-A) is found which could match the inferred question (Fig. 6c). To enable this pattern to match the inferred question, the FROM case must be inferred. This is accomplished by a concept completion inference which produces the complete conceptualization shown in Fig. 6d. Finally, the action and response of the question answering are executed to calculate and print ~n answer.</Paragraph>
  </Section>
  <Section position="10" start_page="77" end_page="77" type="metho">
    <SectionTitle>
INTERACTIVE SCRIPT INSTANTIATION
</SectionTitle>
    <Paragraph position="0"> The script patterns which describe the relationships among the scenes of a situation are also used by the KNOBS system to guide a conversation about that domain. The conversation with KNOBS in Fig. 7 illustrates the entering of plan components by interactively insCantiating script patterns.</Paragraph>
    <Paragraph position="1"> The first user sentence instantiaces two script patterns (the flying of aircraft, and the striking of a target) and binds the script roles:</Paragraph>
  </Section>
  <Section position="11" start_page="77" end_page="78" type="metho">
    <SectionTitle>
TARGET to BE70501, WING to 109TFW, AIRCRAFT-NUMBER
</SectionTitle>
    <Paragraph position="0"> to 4, and TIME-OVER-TARGET to 0900. KNOB~ asks the user to select the AIRCRAFT. Because the user replied with a question whose answer is an aircraft, KNOBS asks if the user would like would like to use chat aircraft am a component of the developing plan. This is accomplished by a rule that is activated when KNOBS asks the user to specify a plan component. The interpretation of the user s negative answer is handled by s rule activated when KNOBS asks a yes-no question. KNOBS checks the consistency of the user's answer and explains a constrainc which has failed. Then, the user corrects this problem, and KNOBS processes the extra information supplied by matching the meaning of the user's input to a script pattern.</Paragraph>
    <Paragraph position="1">  Send 4 aircraft from the Logcfv co sc:iks SE7050L at 0900. Whac aircraft do you vane to use7 What alrcrafc are in the I09TI~T The I09TFW has F-4Cs.</Paragraph>
    <Paragraph position="2"> WouLd you Like to use F-4Cs for the aircraft? NO, F-4Gs.</Paragraph>
    <Paragraph position="3"> The 10~r~ does nOC co~tain F-4Gs.</Paragraph>
    <Paragraph position="4">  A script role can be bound by matching against patterns associated with other script roles in addition to matching against script patterns. Fig. 8 shows a role pattern associated with the script role AIRCL~YT. This pattern serves two purposes: to prevent bindings to the script role vhichwould not make sense (i.e., the object which plays the AIRCRAFT role ~st be an aircraft) and to recursively bind other script roles to attached concepts. In this exemple, the AIRBASE or the ~NC could be attached to the AIRCRAFT concept, e.g., &amp;quot;F-4Cs from Hahn&amp;quot; or &amp;quot;F-dCa in the 126TFW&amp;quot;. The interactive script interpreter is an alternative to the menu system provided by KNOBS for the entering of important components of a plan Co be checked for consistency. KNOBS also provides a means of automatically finishing the creation of a consistent plan. This can allow an experienced mission planner to enter a plan by typing one or two sentences and hitting a key which tells KNOBS co choose the unspecified components.</Paragraph>
  </Section>
  <Section position="12" start_page="78" end_page="78" type="metho">
    <SectionTitle>
TRANSFERRING DOMAINS
</SectionTitle>
    <Paragraph position="0"> To demonstrate their domain independence, the KNOBS System and APE-II have been provided with knowledge bases to plan and answer questions about naval &amp;quot;show of flag&amp;quot; missions. This version of KNOBS also uses FRL as a database language.</Paragraph>
    <Paragraph position="1"> A large portion of the question answering capability was directly applicable for a number of reasons. First of all, dictionary entries for frames are constructed automatically when they appear in a user query. The definitions of the attributes (slots) of a frame which are represented as RELATIONs are also constructed when needed. The definitions of many common words such as &amp;quot;be&amp;quot;, &amp;quot;have&amp;quot;, &amp;quot;a&amp;quot;, &amp;quot;of&amp;quot;, etc., would be useful in understanding questions in any domain. The question answering productions and concept completion inferences are separated into default and domain specific categories. Many of the simple but common queries are handled by default patterns.</Paragraph>
    <Paragraph position="2"> For example, &amp;quot;Which airbases have fighters?&amp;quot; and &amp;quot;What ports have cruisers?&amp;quot; are answered by the same default pattern. Currently, the Navy version of KNOBS has 3 domain specific question answering patterns, compared to 22 in the Air Force version.</Paragraph>
    <Paragraph position="3"> (There are 46 default patterns.) The most important knowledge structure missing in the Navy domain is the scripts which are needed to perform causal inferences and dialog directed planning.</Paragraph>
    <Paragraph position="4"> Therefore, the system can answer the question &amp;quot;What weapons does the Nimitz have?&amp;quot;, but can't answer '~ihat weapons does the NimiCz carry?&amp;quot;.</Paragraph>
  </Section>
class="xml-element"></Paper>
Download Original XML