<?xml version="1.0" standalone="yes"?>
<Paper uid="H86-1017">
  <Title>LIVING UP TO EXPECTATIONS: COMPUTING EXPERT RESPONSES</Title>
  <Section position="3" start_page="0" end_page="185" type="metho">
    <SectionTitle>
1. Introduction
</SectionTitle>
    <Paragraph position="0"> In cooperative man-machine interaction, it is necessary but not sufficient for a system to respond truthfully and informatively to a user's question. In particular, if the system has reason to believe that its planned response might mislead the user to draw a false conclusion, then it must block that conclusion by modifying or adding to its response.</Paragraph>
    <Paragraph position="1"> Such cooperative behavior was investigated in \[5\], in which a modification of Grice% Mazim of Quality - &amp;quot;Be truthful&amp;quot; - is proposed: If you, the speaker, plan to say anything which may imply for the hearer something that you believe to be false, then provide further information to block it.</Paragraph>
    <Paragraph position="2"> This behavior was studied in the context of interpreting certain definite noun phrases. In this paper, we investigate this revised principle as applied to responding to users' plan-related questions. Our overall aim  is to: 1. characterize tractable cases in which the system as respondent (R) can anticipate the possibility of the user/questioner (Q) drawing false conclusions from its response and hence alter it so as to prevent this happening; 2. develop a formal method for computing the projected inferences that Q may draw from a  particular response, identifying those factors whose presence or absence catalyzes the inferences; 3. enable the .system to generate modifications of its response that can defuse possible false inferences and that may provide additional useful information as well.</Paragraph>
    <Paragraph position="3"> In responding to any question, including those related to plans, a respondent (R) must conform to Grice's first Maxim of Quantitlt as well as the revised Maxim of Quality stated above: Make your contribution as informative as is required (for the current purposes of the exchange).</Paragraph>
    <Paragraph position="4"> At best, if R's response is not so informative, it may be seen as uncooperative. At worst, it may end up violating the revised Maxim of Quality, causing Q to conclude something R either believes to be false or does not know to be true: the consequences could be dreadful. Our task is to characterize more precisely what this expected informativeness consists of. la question answering, there seem to be several quite different types of information, over and beyond the simple answer to a question, that are nevertheless expected. For example, 1. When a task-related question is posed to an expert {R), R is expected to provide additional information that he recognizes as necessary to the performance of the task, of which the qt:estioner (Q) may be unaware. Such response behavior was discussed and implemented by Alien \[1\] in a system to simulate a train information booth attendant responding to requests for schedule and track information. In this case, not providing the expected additional information is simply uncooperative: Q won't conclude the train doesn't depart at any time if ?~ fails to volunteer one.</Paragraph>
    <Paragraph position="5"> : ~=~ i~:~ respect to discussions ald/or arguments, a speaker contradicting another is expected to supp~'~ his contrary contention. Again, failing to provide support would simply be viewed as u.~c~,o~erative \[2, 3\].</Paragraph>
    <Paragraph position="6"> . With respect to an expert's responses to questions, if Q expects that R would inform him of P !f P were true, then Q may interpre t R's silence regarding P as implying P is not true. s Thus if I:~ k.qows P to be true, his silence may lead to Q's being misled. This third type of expected informativeness is the basis for the potentially misleading responses that we are trying to avoid a.~d that constitute the subject of this paper.</Paragraph>
    <Paragraph position="7"> What is of interest to us is characterizing the Ps that Q would expect an expert R to inform him of, if they hold. Notice that these Ps differ from script-based expectations \[8\], which are based on what is taken to be the ordinary course of events in a situation. In describing such a situation, if the speaker d,,esn't explicitly reference some element P of the script, the listener simply assumes it is true. On the other hand, the Ps of interest here are based on normal cooperative discourse behavior, as set out in Grice's maxims. If the speaker doesn't make explicit some information P that the listener believes he would possess and inform the listener of, the listener assumes it is false.</Paragraph>
    <Paragraph position="8"> In this paper, we attempt to give a formal account of a subclass of Ps that should be included (in addition to the simple answer) in response to questions involving Q's achieving some goal 4 - e.g., * Can I  Related work \[4\] discusses providing indirect or modified responsel to yes/no questions where a direct response, while truthful, might mislead Q.</Paragraph>
    <Paragraph position="9">  drop CIS5777&amp;quot;, &amp;quot;I want to euroi in CIS5777&amp;quot;, 'How do I get to/ViarGh Creek on the Exl~ressway?', etc., lest that rzsponse otherwise mislead Q. In this endeavor, our first step is to specify that knowledge that an expert R must have in order to identify the Ps that Q would expect to be informed of, in response to his question. Our second step is to formalize that knowledge and show how the system can use it. Our third step is to show how the system can modify its planned response so as to convey those Ps. In this paper, Section 2 addresses the first step of this process and Sections 3 and 4 address the second. The third step we mention here only in passing.</Paragraph>
    <Paragraph position="10"> 2. Factors in Computing Likely Informing Behavior\] Before discussing the factors involved in computing this desired system behavior, we want to call attention to the distinction we are drawing between actions and events, and between the stated goal of a question and its intended goal. We limit the term action to things that Q has some control over. Things beyond Q's control we will call events, even if performed by other agents. While events may be likel3/or even necessary,, Q and R nevertheless can do nothing more than wait for ~hem to happen. This distinction between ~ctions and events shows up in R's response behavior: if an action is needed, R can suggest that Q perform it. If an event is, R can do no more than inform Q.</Paragraph>
    <Paragraph position="11"> Our second distinction is between the stated goal or &amp;quot;S-goal = of a request and its intended goal or =I-goal =. The former is the goal most directly associated with Q's request, beyond that Q know the information. That is, we take the S-goal of a request to be the goal directly achieved by using the information.</Paragraph>
    <Paragraph position="12"> Underlying the stated goal of a request though may be another goal that the speaker wants to achieve. This intended goal or degl-go3!&amp;quot; may be related to the S-goal of the request in any of a number of ways: * The l-goal may be the same as the S-goal.</Paragraph>
    <Paragraph position="13"> * The l-goal may be more abstract than the S-goal, which addresses only part of the I-goal. (This is the standard goal/sub-goal relation found in hierarchical planning \[14\].) For example, Q's S-goal may be to delete some files (e.g., *How can I delete all but the last version of FOO.MSS?deg), while his l-goal may be to bring his file usage under quota. This more abstract goal may also involve archiving some other files, moving some into another person's directory, etc.</Paragraph>
    <Paragraph position="14"> * The S-goai may be an enablinK condition for the I-goal. For example, Q's S-goal may be to get read/write access to a file, while his I-goal may be to alter it.</Paragraph>
    <Paragraph position="15"> The l-goal may be more ~eneral than the S-goal. For example, Q's S-goal may be to know how to repeat a control-N, while his l-goal may be to know how to effect multiple sequential instances of a control character.</Paragraph>
    <Paragraph position="16"> Conversely, the l-goal may be more specific than the S-goal - for example, Q's S-goal may be to know how to send files to someone on another machine, while his I-goal is just to send a particular file to a local network user, which may allow for a specialized procedure.</Paragraph>
    <Paragraph position="17"> Inferring the l-goal corresponding to an S-goal is an active area.of research \[1, Carberry83, 10, 11\]. We assume for the purposes of this paper that R can successfuUy do so. One problem is that the relationship that Q believes to hold between his S-goal and his I-goal may not actually hold: for example, the S-goal  may not fulfill part of the bgoal, or it may not instantiate it, or it may not be a pre-condition for it. In fact, the S-goal may not even be possible to effect! This failure, under the rubric &amp;quot;relaxing the appropriate-query assumption', is discussed in more detail in \[10, nl. It is also reason for augmenting R's response with appropriate Ps, as we note informally in this section and more formally in the next. Having drawn these distinctions, we now claim that in order for the system to compute both a direct  answer to Q's request and such Ps as he would expect to be informed of, were they true, the system must be able to draw upon knowledge/beliefs about * the events or actions, if any, that can bring about a goal * their enabling conditions * the likelihood of an event occuring or the enabling conditions for an action holding, with respect to a state * ways of evaluating methods of achieving goals - for example, with respect to simplicity, other consequences (side effects), likelihood of success, etc.</Paragraph>
    <Paragraph position="18"> * general characteristics of cooperative expert behavior  The roles played by these different types of knowledge (as well as specific examples of them) are well illustrated in the next section.</Paragraph>
    <Paragraph position="19"> 3. Formalizing Knowledge for Expert Response In this section we give examples of how a formal model of user beliefs about cooperative expert behavior can be used to avoid misleading responses to task-related questions - in particular, what is a very representative set of questions, those of the form =How do I do X? =. Although we use logic for the model because it is clear and precise, we are not proposing theorem proving as the means of computing cooperative behavior. In Section 4 we suggest a computational mechanism. The examples are from a domain of advising students and involve responding to the request degI want to drop CIS577&amp;quot;. The set of individuals includes not only change states, we represent corresponding to events or convenient:</Paragraph>
    <Paragraph position="21"> students, instructors, courses, etc. but also states. Since events and actions them as (possibly parameterized) functions from states to states. All terms actions will be underlined. For these examples, the following notation is the user the expert the current state of the student R believes proposition P R believes that Q believes P event/action e can apply in state S a is a likely event/action in state S P, a proposition, is true in S x wants P to be true To encode the preconditions and consequences of performing 2n action, we adopt an axiomatization of STRIPS operators due to \[Chester83, 7, 15\]. The preconditions on an action being applicable are encoded using &amp;quot;holds&amp;quot; and &amp;quot;admissible&amp;quot; (essentially defining &amp;quot;admissible'). Namely, if cl ..... ca are precondltions on an action a,</Paragraph>
    <Paragraph position="23"> a's immediate consequences pl ..... pm can be ~tated as admissible(a(s)) =, holds(pl, a(s)) a ... &amp; holds(pm, a(s)) A frame axiom states that only pl ..... pm have changed.</Paragraph>
    <Paragraph position="25"> In particular, we can state the preconditions and consequences of dropping CIS577. (h acd n are variables, while C stands for CIS577.) RB(holds(enrolled(h, C, fall), n) &amp; holds(date(n)&lt;Novl6, n) admissible( drop (h, CX . ) ) ) RB( admissible( drop(h, CX n ) ) =~ holds(-~enrolled( h,C,fall),drop(h, CX n ) ) ) RBl-(p=enrolled(h,C,fall)) admissible(drop(h,C)(n)) holds(p,.) holds(p,drop(h,C)(n))) Of course, this only partially solves the frame problem, since there will be implications of pl ..... pm in general. For instance, it is likely that one might have an axiom stating that one receives a grade in a course only if the individual is enrolled in the course.</Paragraph>
    <Paragraph position="27"> What we claim is: (1) R must give a truthful response addressing at least Q's S-goal; (2) in addition, R may have to provide information in order not to mislead Q; and (3) R may give additional information to be cooperative in other ways. In the subsections below, we enumerate the cases that R must check in effecting (2). In each case, we give both a formal representation of the additional information to be conveyed and a possible English gloss. In that gloss, the part addressing Q's S-goal wiil appear in normal type, while the additional information will be underlined.</Paragraph>
    <Paragraph position="28"> For each case, we give two formulae: a statement of R's beliefs about the current situation and an axiom stating R's beliefs about Q's expectations. Formulae of the first type have the form RB(P).</Paragraph>
    <Paragraph position="29"> Formulae of the second type relate such beliefs to performing an informing action. They involve a statement of the form ~lPl =~ likely(i, Se), where i is an informing act. For example, if R believes there is a better way to achieve Q's goal, R is likely to inform Q of that better way. Since it is assumed that Q has this belief, we have QB( RB\[P\] = likely(i, Sc)).</Paragraph>
    <Paragraph position="30"> Sit will also be the ease that RBQB(admlssible(drop(Q,C~SC/))) if Q's asks &amp;quot;How can ! drop CIS5777&amp;quot;, but not if he asks &amp;quot;Can i drop CIS577f'. in the latter era, Q must of course believe that it may be admissible, or why ask the question. !a either ease, R's subsequent behavior dot~a't seem contingent on hil beliefs ab'~'~ beliefs about admissibility.  where we can equate oQ believes i is likely&amp;quot; with &amp;quot;Q expects i.&amp;quot; Since R has no direct access to Q's beliefs, this must be embedded in R's model of Q's belief space. Therefore, the axioms have the form (modulo quantifier placement) RBQB( RB\[P l =, likely{i, So) ).</Paragraph>
    <Paragraph position="31"> An informing act is meant to serve as a command to a natural language generator which selects appropriate iexical items, phrasing, etc. for a natural language utterance. Such an act has the form inform-that(R,Q,P) R informs Q that P istrue.</Paragraph>
    <Paragraph position="32"> 3.1. Failure of enabling eondli~lona Suppose that it is past the November 15th deadline or that the official records don't show Q enrolled in CIS577. Then the enabling conditions for dropping it are not met. That is, R believes Q's S-goal cannot be achieved from So.</Paragraph>
    <Paragraph position="34"> Thus R initially plans to answer &amp;quot;You can't drop CIS577&amp;quot;. Beyond this, there are two possibilities.</Paragraph>
    <Paragraph position="35"> 3.1.1. A way If R knows another action b that would achieve Q's goals (cf. formula \[2\]), Q would expect to be informed about it. If not so informed, Q may mistakenly conclude that there is no other way. Formula \[3\] states this belief that R has about Q's expectations.</Paragraph>
    <Paragraph position="37"> R's full response is therefore &amp;quot;You can't drop 577; you can b.&amp;quot; For instance, b could be changing status to auditor, which may be performed until December I.</Paragraph>
    <Paragraph position="38"> 3.1.2. No way If R doesn't know of any action or event that could achieve Q's goal (cf. \[4\]), Q would expect to be so informed. Formula \[5\] states this belief about Q's expectations.</Paragraph>
    <Paragraph position="40"> To say only that Q cannot drop the course does not exhibit expert cooperative behavior, since Q would be uncertain as to whether R had considered other alternatives. Therefore, R's full response is &amp;quot;You can't drop 577; there isn ~ anything you can do to prevent failing.= Notice that R's analysis of the situation may turn up additional information which a cooperative expert  could provide that does not involve avoiding misleading Q. For instance, R could indicate enabling conditions that prevent there being a solution: suppose the request to drop the course is made after the November 15th deadline. Then R would believe the following, in addition to \[1\]</Paragraph>
    <Paragraph position="42"> &amp; (-~tholds(Pi, S), for some Pi above)\] =~ iik ely ( in f orm-t hat (R, Q,-,hol ds(Pi, S )),S ) ) In this ease the response should be &amp;quot;'You can't drop 577; Pi isn~ true.&amp;quot; Alternatively, the language generator might paraphrase the whole response as, &amp;quot;if Pi were true, you could drop.&amp;quot; Of course there are potentially many ways to try to achieve a goal: by a single action, by a single event, or by an event and an action .... In fact, the search for a sequence of events or actions that would achieve the goal may consider many alternatives. If all fail, it is far from obvious which blocked condition to notify Q of, and knowledge is needed to guide the choice. Some heuristics for dealing with that problem ~ .. given in \[12\].</Paragraph>
    <Paragraph position="43"> 3.2. An nonproductive act Suppose the proposed action does not achieve Q's l-goal, cL \[6\]. For example, dropping the course may still mean that failing status would be recorded as a WF (withdrawal while failing). R may initially plan to answer &amp;quot;You can drop 577 by ...'. However, Q would expect to be told that his proposed action does not achieve his l-goal. Formula \[7\] states R's belief about this expectation.</Paragraph>
    <Paragraph position="45"> I~ admissible( drop (Q, C\]( Sc ) )\] likely( in f orm-t h at (Fl, Q , -hold~(-/ail(Q,C),drop(Q,C)(Sc))),Sc)) R's full response is, &amp;quot;You can drop 577 by .... However, you will still fail.&amp;quot; Furthermore, given the reasoning in section 3.1.1 above, R's full response would also inform Q if there is an action b that the user can take instead.</Paragraph>
    <Paragraph position="46"> 3.3. A better way Suppose R believes that there is a better way to achieve Q's 1-goal, cf. \[8\] - for example, taking an incomplete to have additional time to perform the work, and thereby not losing all the effort Q has already expended. Q would expect that R, as a cooperative expert, would inform him of such a better way, ef. \[9 I. If R doesn't, R risks misleading Q that there isn't one.</Paragraph>
    <Paragraph position="48"> R's direct response is to indicate how f can be done. R's full response includes, in addition, &amp;quot;b is a better way. ~ Notice that if R doesn't explicitly tell Q that he is presenting a better way (i.e., he just presents the method), Q may be misled that the response addresses his S-goal: i.e., he may falsely conclude that he is being told how to drop the course. (The possibility shows up clearer in other examples - e.g., if R omits the first sentence of the response below Q: How do I get to Marsh Creek on the Expressway? R: It's faster and shorter to take Route 30. Go out Lancaster Ave until ....</Paragraph>
    <Paragraph position="49"> Thus even when adhering to expert response behavior in terms of addressing an I-goal, we must keep the system aware of potentially misleading aspects of its modified response as well. Note that R may believe that Q expects to be told the best way. This would change the second axiom to include within the scope of the existential quantifier (Va){-,(a=b) =~ \[holds(-,fail(Q,C), a(Sc)) ,.% admissible(a(Sc)) &amp; better(b,a)\]} 3.4. The only way Suppose there is nothing inconsistent about what the user has proposed - i.e., all preconditions are met and it will achieve the user's goal. R's direct response would simply be to tell Q how. However, if R notices that that is the only way to achieve the goal (of. \[10\]), it could optionally notify Q of that, el. \[111.</Paragraph>
    <Paragraph position="51"> R's full response is &amp;quot;You can drop 577 by .... That is the only way to prevent failing.&amp;quot;</Paragraph>
    <Section position="1" start_page="185" end_page="185" type="sub_section">
      <SectionTitle>
3.5. Something Turning Up
</SectionTitle>
      <Paragraph position="0"> Suppose there is no appropriate action that Q can take to achieve his I-goal. That is, RB( ~(3 a)\[admissible(a(Se)) &amp; holds(g, a.\[Sc))\]) There may still be some event e out of Q's control that could bring about the intended goal. This gives several more cases of R's modifying his response.</Paragraph>
      <Paragraph position="1">  If event e brings about a state in which the enabling conditions of an effective action a are true, cf. \[15\]</Paragraph>
      <Paragraph position="3"> then the same principles about informing q of the likelihood or unlikelihood of e apply as they did before.</Paragraph>
      <Paragraph position="4"> In addition, R must inform Q of a, cf. \[16\]. Thus R's full response would be &amp;quot;You can't drop 577. If e were to occur, which is (un)likely, you could a and thus not fail 577.&amp;quot;</Paragraph>
    </Section>
  </Section>
  <Section position="4" start_page="185" end_page="185" type="metho">
    <SectionTitle>
4. Reasoning
</SectionTitle>
    <Paragraph position="0"> Our intent in using logic has been to have a precise representation language whose syntax informs R's reasoning about Q's beliefs. Having computed a full response that conforms to all these expectations, R may go on to 'trim' it according to principles of brevity that we do not discuss here.</Paragraph>
    <Paragraph position="1"> Our proposal is that the informing behavior is &amp;quot;pre-compiled'. That is, R does not reason explicitly about Q's expectations, but rather has compiled the conditions into a case analysis similar to a discrimination net. For instance, we can represent informally several of the cases in section 3.</Paragraph>
    <Paragraph position="2">  Note that we are assuming that R assumes the most demanding expectations by Q. Therefore, R can reason solely within its own space without missing things.</Paragraph>
  </Section>
class="xml-element"></Paper>