FREE ADJUNCTS IN NATURAL LANGUAGE INSTRUCTIONS*

2 Instructions

Our view of instructions derives from a view of plans variously advocated in Pollack [7, 8], Suchman [11], and Agre and Chapman [1].

Pollack contrasts two views of plan: plan as data structure and plan as mental phenomenon. (The former appears to be the same view of plans that Agre and Chapman have called plan as program.) Plans produced by Sacerdoti's NOAH system [9] are a clear example of this plan as data structure view. Given a goal to achieve (i.e., a partial state description), NOAH uses its knowledge of actions to create a data structure (a directed acyclic graph) whose nodes represent goals or actions and whose arcs represent temporal ordering, elaboration, or entailment relations between nodes. This data structure represents NOAH's plan to achieve the given goal.

As Suchman points out [11], NOAH's original intent was to provide support for novice human agents in carrying out their tasks. Given a goal that an apprentice was tasked with achieving, NOAH was meant to form a plan and then use it to direct the apprentice in what to do next. To do this, it was meant to generate a Natural Language instruction corresponding to the action associated with the "current" node of the graph. If the apprentice indicated that he didn't understand the instruction or couldn't perform the prescribed action, NOAH was meant to "move down" the graph to direct the apprentice through the more basic actions whose performance would entail that of the original. The result is a sequence of instructions that corresponds directly to the sequence of nodes encountered on a particular graph traversal.

Pollack contrasts the above with a plan as mental phenomenon view, in which having a plan to do some action β corresponds roughly to:

* a constellation of beliefs about actions and their relationships;
* beliefs that their performance, possibly in some constrained order, both entails the performance of β and plays some role in its performance;
* an intention on the part of the agent to act in accordance with those beliefs in order to perform β.

With respect to such beliefs, Pollack draws a three-way distinction between act-types, actions (or acts), and occurrences. Act-types are, intuitively, types of actions like playing a chord, playing a D-major chord, playing a chord on a guitar, etc. Act-types, as these examples show, can be more or less abstract. Actions can be thought of as triples of act-types, agents, and times (relative or absolute intervals), like Mark playing a D-major chord last Sunday afternoon on his Epiphone. Because it is useful to distinguish an action from its occurrence in order to talk about intentions to act that may never be realized, Pollack introduces a separate ontological type, occurrence, that corresponds to the realization of an action. (Pollack represents an occurrence as OCCUR(β), where β is an action. Thus an occurrence inherits its time from the associated time of its argument.)
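To make the plan as data structure view concrete, the following is a minimal sketch in the spirit of NOAH's graphs, not a reconstruction of its actual representation: nodes carry goal or action labels, one kind of arc encodes temporal ordering, another encodes elaboration into more basic actions, and a traversal issues one instruction per node, descending the elaboration arcs when the apprentice balks. All names and the example plan fragment are ours.

```python
# Sketch of a NOAH-style plan DAG (our rendering, not NOAH's code).
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                                              # goal or action description
    after: list["Node"] = field(default_factory=list)       # temporal-ordering arcs
    expansion: list["Node"] = field(default_factory=list)   # elaboration arcs: more basic
                                                            # actions that entail this one

def direct(node: Node, can_perform) -> None:
    """Issue one instruction per node encountered on the traversal; if the
    apprentice can't carry it out, 'move down' to the more basic actions."""
    if can_perform(node.label):
        print("Instruction:", node.label)
    else:
        for sub in node.expansion:
            direct(sub, can_perform)
    for nxt in node.after:
        direct(nxt, can_perform)

# Hypothetical plan fragment: one action elaborated into basic steps.
fasten = Node("fasten the panel",
              expansion=[Node("position the panel"), Node("nail the panel")])
direct(fasten, can_perform=lambda a: a != "fasten the panel")
```

The sequence of instructions printed corresponds directly to the sequence of nodes visited, which is exactly the property the text attributes to this view.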
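Pollack's three-way distinction can likewise be written down directly; here is a sketch in our notation, not Pollack's, with actions as act-type/agent/time triples and occurrences inheriting their time from their argument, as described above.

```python
# Act-types, actions, and occurrences (our rendering of Pollack's distinction).
from dataclasses import dataclass

@dataclass(frozen=True)
class ActType:
    description: str            # e.g. "play a D-major chord on a guitar"

@dataclass(frozen=True)
class Action:                   # a triple of act-type, agent, and time
    act_type: ActType
    agent: str
    time: str                   # a relative or absolute interval

@dataclass(frozen=True)
class Occurrence:               # OCCUR(beta), for beta an action
    action: Action

    @property
    def time(self) -> str:
        return self.action.time  # an occurrence inherits its argument's time

chord = ActType("play a D-major chord")
act = Action(chord, agent="Mark", time="last Sunday afternoon")
occ = Occurrence(act)           # distinct from the action, which may go unrealized
assert occ.time == act.time
```

Keeping Occurrence a separate type is what lets one talk about intentions to act that are never realized: an Action value can exist with no corresponding Occurrence.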
Agents can hold beliefs about entities of any of these three types:

* act-types - An agent may believe that playing a D-major chord involves playing three notes (D, F# and A) simultaneously, or that s/he does not know how to perform the act-type playing a D-major chord on a guitar, etc. Any or all of these beliefs can, of course, be wrong.

* actions - An agent may believe that some action α1 must be performed before some other action α2 in order to do action β1, or that α2 must be performed before α1 in order to do β2. Here too, the agent's beliefs can be wrong. (It was to allow for such errors in belief, and the Natural Language questions they can lead to, that Pollack was led to this plan as mental phenomenon approach.)

* occurrences - An agent may believe that what put the cat to sleep last Sunday afternoon was an overdose of catnip. S/he may also have misconceptions about what has happened.

One can therefore take the view that instructions are given to an agent in order that s/he develop appropriate beliefs, which s/he may then draw upon in attempting to "do β". Depending on the evolving circumstances, different beliefs may become salient. This appears to be involved in what Agre and Chapman [1] and Suchman [11] mean by using plans as a resource: beliefs are a resource an agent can draw upon in deciding what to do next.

Given this view of plan as mental phenomenon, we can now consider possible relationships between instructions and behavior. At one extreme is a direct relationship, as in the game "Simon Says", where each command ("Simon says put your hands on your ears") is meant to evoke particular behavior on the part of the player. That is,

    Instruction ⇒ Behavior

The fact that such instructions are given in Natural Language is almost irrelevant. We have already demonstrated [4] that they can be used to drive animated simulations. Key frames from such a demonstration of two agents (John and Jane) at a control panel, following instructions that begin

    John, look at switch twf-1.
    John, turn twf-1 to state 4.
    Jane, look at twf-3.
    Jane, look at tglJ-1.
    Jane, turn tglJ-1 on.

are shown in Figure 1.

[Figure 1: Control Panel Animation]

In contrast, instructions can depart from this simple direct relation in many ways (a toy sketch of the contrast follows the list below):

1. Multiple clauses may be involved in specifying the scope or manner of an intended action. For example, the intended culmination of an action may not be what is intrinsic to that action, but rather what is taken to be the start of the action prescribed next.[2] Consider the following instructions that Agre [1] gave to several friends for getting to the Washington Street Subway Station:

    Left out the door, down to the end of the street, cross straight over Essex then left up the hill, take the first right and it'll be on your left.

While the action description "[go] left up the hill" has an intrinsic culmination (i.e., when the agent gets to the top of the hill), it is not the intended termination of the action in the context of these instructions.
Its intended termination is the point at which the action of "taking the first right" commences - that is, when the agent recognizes that s/he has reached the first right. In Section 3, we will provide many more examples of this feature of instructions.

2. Instructions may describe a range of behavior appropriate under different circumstances. The agent is only meant to do that which s/he recognizes the situation as demanding during its performance. For example, the following are part of instructions for installing a diverter spout:

    Diverter spout is provided with insert for 1/2" pipe threads. If supply pipe is larger (3/4"), unscrew insert and use spout without it.

Here, the relevant situational features can be determined prior to installing the spout. In other cases, they may only be evident during performance. For example, the following are part of instructions for filling holes in plaster over wood lath:

    If a third coat is necessary, use prepared joint compound from a hardware store.

Here, the agent will not know if a third coat is necessary until s/he sees whether the first two coats have produced a smooth level surface.

3. As in the plan as data structure model, instructions may delineate actions at several levels of detail or in several ways. For example, the following are part of instructions for filling holes in plaster where the lath has disintegrated as well as the plaster:

    Clear away loose plaster. Make a new lath backing with metal lath, hardware cloth, or, for small holes, screen. Cut the mesh in a rectangle or square larger than the hole. Thread a 4- to 5-inch length of heavy twine through the center of the mesh. Knot the ends together. Slip the new lath patch into the hole ...

Here the second utterance prescribes an action at a gross level, with subsequent utterances specifying it in more detail.

4. Instructions may only provide circumstantial constraints on behavior, without specifying when those circumstances will arise. For example, the following comes from instructions for installing wood paneling:

    When you have to cut a sheet [of paneling], try to produce as smooth an edge as possible. If you're using a handsaw, saw from the face side; if you're using a power saw, saw from the back side. Otherwise you'll produce ragged edges on the face, because a handsaw cuts down and a power saw cuts up.

Such cases as these illustrate an indirect relation between instructions and behavior, through the intermediary of an agent's beliefs and evolving plan. That is,

    Instruction ⇒ Beliefs ⇒ Behavior

[2] This is not the case in "Simon Says" type instructions, where each action description contains an intrinsic culmination [6].
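To make the two relations concrete, here is a toy sketch of our own, not the animation system of [4]: simon_says maps each instruction directly to behavior, while follow first turns instructions into beliefs, simplified here to (condition, action) pairs, which the agent draws on only as the situation demands, as in the diverter-spout case above. All names and the pipe-size encoding are hypothetical.

```python
# Toy contrast (ours) between the two instruction-behavior relations.

def simon_says(instructions, perform):
    # Direct: Instruction => Behavior. Each command immediately evokes
    # a particular behavior on the part of the agent.
    for instr in instructions:
        perform(instr)

def follow(instructions, situation, perform):
    # Indirect: Instruction => Beliefs => Behavior. Instructions deposit
    # beliefs; which belief is acted on depends on the circumstances the
    # agent recognizes during performance.
    beliefs = list(instructions)        # beliefs as (condition, action) pairs
    for condition, action in beliefs:   # beliefs are a resource, drawn on
        if condition(situation):        # only when the situation demands it
            perform(action)

simon_says(["put your hands on your ears"], perform=print)

# The diverter-spout instruction rendered as conditional beliefs.
follow(
    [(lambda s: s["pipe"] == '3/4"', "unscrew insert and use spout without it"),
     (lambda s: s["pipe"] == '1/2"', "use spout with insert")],
    situation={"pipe": '3/4"'},
    perform=print,
)
```

The point of the contrast is that in the second interpreter nothing about the instructions themselves fixes the behavior; the same beliefs yield different actions in different situations.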