<?xml version="1.0" standalone="yes"?>
<Paper uid="C92-3136">
  <Title>ARGUING ABOUT PLANNING ALTERNATIVES</Title>
  <Section position="3" start_page="0" end_page="0" type="metho">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> In discourse processing, two major problems are understanding the underlying connections between successive dialog responses and deciding on the content of a coherent dialog response. This paper presents an initial model that accomplinhes these tasks for one class of argumentative dialogs. In this class, each dialog respouse presents a belief that justifies or contradicts a belief provided earlier in the dialog.</Paragraph>
    <Paragraph position="1"> The following dialog fragment is an example:  (1) TIDY: The members of the AI lab should clean it themselves.</Paragraph>
    <Paragraph position="2"> (2) ScguPPY: But that interferes with doing research. null (3) TIDY: There's no other way to keep it clean. (4) SCRUFf'Y: We can pay a janitor to keep it clean.</Paragraph>
    <Paragraph position="3"> (5) TIDY: We need money to pay a janitor.</Paragraph>
    <Paragraph position="4"> (6) SCRUFFY: We can transfer the money from the salary fund.</Paragraph>
    <Paragraph position="5"> (7) TIDY: But doing that interferes with paying the lab members.</Paragraph>
    <Paragraph position="6"> (8) SCRUVFY: It's more desirable to have a clean  lab than to pay the lab members.</Paragraph>
    <Paragraph position="7"> Each response states one or more plan-oriented beliefs, usually as part of a short chain of reeanning justifying or contradicting a belief provided earlier in the dialog.</Paragraph>
    <Paragraph position="8"> In (1), TIDY begins by stating a belief: the lab members should execute the plan of cleaning the lab. In (2), SCRUFFY responds with a belief that the lab members executing this plan interferes with their doing research. This belief justifies SCRUFFY~s unstated belief that the lab members should not execute the plan of cleaning the lab, which contradicts TIvY's stated belief in (1). SCRUFPY's underlying reasoning is that the lab members shouldn't clean the lab because it interferes with their executing the more desirable plan of doing research.</Paragraph>
    <Paragraph position="9"> In (3), TIDY presents s belief that there's no alternative plan for keeping the lab clean. This belief justifies TIDY's belief in (1). TIDY's underlying reasoning is that the lab members should clean the lab because it's the best plan for the goal of keeping the lab clean, and it's the best plan because it's the only plan that achieves the goal.</Paragraph>
    <Paragraph position="10"> Finally, in (4), Scs.uFta'y states a belief that paying a janitor achieves the goal of keeping the lab clean. This contradicts TIDY's stated belief in (3). It also justifies a belief that the lab members cleaning the lab isn't the best plan for keeping the lab clean, which contradict~ one of the beliefs inferred from (3). SCRUFFY's reasoning is that paying a janitor is a more desirable plan that achieves thin goal. The remaining responses follow the same pattern.</Paragraph>
    <Paragraph position="11"> Understanding responses like these involves relating a stated belief to beliefs appearing earlier in the dialog. That requires inferring the participant's underlying reasoning chain and the beliefs it justifies. Producing these responses involves selecting a belief to justify and deciding upon the set of beliefs to provide as its justification. That requires constructing an appropriate reasoning chain that justifies holding any unshared beliefs.</Paragraph>
    <Paragraph position="12"> Our focus in this paper is on an initial method for representing, recognising, and producing the belief justifications underlying dialog responses that provide coherent defenses of why beliefs are held. ACRES DE COLING-92, NANTES, 23-28 AOl~'r 1992 9 0 6 PROC. OF COLING-92, NANTES, AUG. 23-28. 1992 The behavior modeled it limited in several significant ways. FirJt, we do not try to recognite when an trguer's response contradicts one of his earlier responses, such as the contradiction between (2) and (8), nor do we try to avoid producing such responses. Second, we do not try to recngnise or make use of high-level arguing strategies, such as reductio ed ab*urdum. Third, we restrict ourselves to a small clam of beliefs involving planning. Finally, we start with representatin~ of beliefs and ignore the linguistic issues involved in turning responses into be\]ida. Clearly, all these limitations must eventually be exidressed in order to produce a more realistic model of debate. Our belief, however, it that an initial model of the process of rccognising and producing belief justifications is a useful and necessary first step.</Paragraph>
  </Section>
  <Section position="4" start_page="0" end_page="0" type="metho">
    <SectionTitle>
2 Our Approach
</SectionTitle>
    <Paragraph position="0"> Our approach to these tasks rests on a simple assumption: Dialog participants jusLif~ beliefs with instantialions of general, common-sense justification ra/es. For plan-oriented beliefs, a justification rule corresponds to a planning heuristic that's based solely on structural features of plans in general, not on characteristics of specifc plans themselves.</Paragraph>
    <Paragraph position="1"> The first few responses in this dialog illustrate several justification rules. In (2), SCRUI~F'C/ uses the rule: O~e re.on wh~ a plan shouldn'~ be ezecuted is that it conflicts with assenting a more desirable plan.</Paragraph>
    <Paragraph position="2"> Similarly, in (3), TXDY chains together a pair of these rules: One reason why a plan should be ezecuted is that it's the be,t plan/or achieving a goal, and One reason why a plan il the be,t plan for a goal is that if'# the onl~ plan that achieves the goal.</Paragraph>
    <Paragraph position="3"> Given our assumption, understanding a response it equivalent to recogniting which justification rules were chained together and instantiated to form it, determining which belief to address in a response it equivalent to determining which beliefs in a chain of instantiated justification rules axe not shared, and producing a justification is equivalent to selecting and instantiating justification rules with beliefs from memory.</Paragraph>
    <Paragraph position="4"> We make this assumption for two reasons. First, dialog participants should be able to understand and respond to never before seen belief justifications.</Paragraph>
    <Paragraph position="5"> That suggests applying general knowledge, such as our jtmtification rules, to analyse and produce specific juJtifications, as that knowledge is likely to be shared by different participants, even if they hold different beliefs about specific courses of action. And second, dialog parlieipants should abo be able io use the same knowledge for different foJks. That suggests that arguments about planning should use the Msne knowledge as planning itselPS The justifiestion rules for plan-oriented belief1 describe knowledge that a planner would aim find nsdul in welectlng or constructing new plans.</Paragraph>
    <Paragraph position="6"> Our approach diffem in two ways fzom previons modeh of participating in dialogs. First, the*C/ models emphe~ised plan recognition: the task of recognising and inferring the underlying plans and goalJ of a dialog paxtlcipant \[4, 10, 17, 18, 2\]. They view utternnces as providing steps in plans (typically by describing goals or actions) and tie them together by inferring an underlying plan. But in an argument not only must the participant's plans and goals be inferred, but alto their underlying belie/s about those plans and goals. Our approach suggests a model that infers these beliefs as a natural consequence of trying to understand connections between successive diMog utterances. In contrast, existing approaches to inferring participant beliefs take a stated belief and try to reason about possible justifications for it \[12, 9\]. Previous models have also tended to view providing a dialog response solely as a part of the question answering process. In contrast, our approach suggests that responses arise as a natural consequence of trying to integrate newly-encountered beliefs with current beliefs in memory, and trying to understand any contradictions that result.</Paragraph>
  </Section>
  <Section position="5" start_page="0" end_page="0" type="metho">
    <SectionTitle>
3 Justification Rules
</SectionTitle>
    <Paragraph position="0"> The argumentative dialogs we've examined have two types of plan-oriented beliefs: facts61 and evalusflee \[1\]. Factual beliefs are objective judgements about planning relationships, such as whether a plan has a particular effect or enablement. They represent the planning knowledge held by moat previous plan-understanding and plan-constructing systems.</Paragraph>
    <Paragraph position="1"> Evaluative beliefs, on the other hand, are subjective judgements about planning relationJhipe, such as whether or not a plan should be executed. Although these beliefs have generally been ignored by previous systems, they are crucial to participating in arguments involving plan-oriented beliefs.</Paragraph>
    <Paragraph position="2"> Our assumption is there exists a small set of justification rules for each planning relationship. Each rule is represented as an abstract configuration of planning relationships that, when instantiated, provides a reason for holding a particular belief. For example, the rule that a plan shouldn't be executed if it conflicts with a preferred plan is represented as:  evaluative beliefs (~ee \[13\] tbr representational details and criteria for dedding what is a reasonable justification rule). These rulC/~ were abstracted from examining a variety of different plan-oriented argumentative dialogs.</Paragraph>
    <Paragraph position="3"> The power of these justification rules comes from their generality: A single rule can be instantiated in different ways to provide justifications for different beliefs. In (2), SCRUFFY USes the above rule to justify a belief that the lab members shouldn't clean the lab themselves. In (7), TIDY uses the same rule to justify a belief that the lab members shouldn't transfer money front the salary fnnd. Here, TIDY's justification is that tranderring the money interferes with the more desirable plan of paying researchers.</Paragraph>
  </Section>
  <Section position="6" start_page="0" end_page="0" type="metho">
    <SectionTitle>
4 Recognizing Justifications
</SectionTitle>
    <Paragraph position="0"> The proee~ of understanding a dialog response is modeled as a forwar&amp;chaining search for a chain of instantiated justification rules that (1) contains the user~s stated belief, and (2) jastifies an earlier dialog belief or its negation.</Paragraph>
    <Paragraph position="1"> We briefly illustrate this proce~ by showing how SCRUt'FY understands TIDe's response in (3). The input belief is that the lab members denning the lab is the only plan that achieves the goal of keeping the lab clean. This belief matches an antecedent in a pair of justification rules, so the process begins by inetantiating these rules, resulting in pair of possible justification chains that contain TIDY's stated belief: (1) the lab members cleaning is the beef plast for ~eplag the lab clean becalst it's the only pianist keeping the lab clean, and (2) the lab shonldntl ~ kept c/cart because the only plan for that goal is the wades~ble plan of having the lab members cleaning iL Neither justification directly relates to the dialog, so the next step is to determine which one to pursue further, and whether either can be eliminated from further consideration. Here, the second justification contains a belief that the lab members cleaning the lab is undesirable, which contradicts TIDY's stated belief in (1). Applying the heuristic &amp;quot;D/aeard any potential justification containing beliefs that contradict the speaker's earlier beliefs&amp;quot; leaves only the first justification to pursue further. It's consequent in the antecedent of a single justification rule, and instantinting tiffs rule leads to this justification chain: the lab members should clean the lab because their elear~.</Paragraph>
    <Paragraph position="2"> lag the lab is the best plan for the goal of keeping the lab clear* because it's the only plan for keeping tlAe lab clean. The justified belief is TIDY's belief in (1), so the process stops.</Paragraph>
    <Paragraph position="3"> In general, the understanding proceu it more complex, since justification rules may not be completely instantiated by a single antecedent, and may therefore need to be further iastantiated from beliefs in the dialog context and memory. There ahm may be many possible chains to pursue even e~ter heuristically discarding some of them, requiring the tree of other heuristics to determine which path to follow, such as &amp;quot;Pursue the reasoning chain whidt eoltains the most beliefs found in the dialog eontea~. ~</Paragraph>
  </Section>
  <Section position="7" start_page="0" end_page="0" type="metho">
    <SectionTitle>
5 Selecting A Belief To Justify
</SectionTitle>
    <Paragraph position="0"> After recognizing a participant's reasoning chain, it's necessary to select a belief to justify as a response.</Paragraph>
    <Paragraph position="1"> This task involves determining which beliefs are not shared, and selecting the negation of one of tho~ beliefs to justify.</Paragraph>
    <Paragraph position="2"> An intuitive notion of agreement is that a belief is shared if it it's found in memory or can be justified, and it's not shared if its negation it found in memory or can be justified. But this notion is computationally expensive, since it could conceivably in. volvo trying to justify all the beliefs in the lmrtieipant'a reasoning chain, as well as their negatinas. As ml alternative, our model determines whether a belief is shared by searching memory for the belief and its negation and, if that fails, applying a small Acrl~s DE COLING-gZ NarcH~s, 23-28 Ao(rf 1992 9 0 8 PROC. OF COLING-92. NANTES. AUG. 23-28, 1992 set of agreement heuristics. One such heuristic is &amp;quot;Assume a belief is sassed if a justil~ling geaera//zalion is found in tattooer. So, for exanlpie, if the belief &amp;quot;keep everything clean&amp;quot; is found in memory, the belief *keep the AI lab clean ~ is considered to he shared. If no agreement heuristic applies, the belief is simply marked as Uunknown&amp;quot;.</Paragraph>
    <Paragraph position="3"> After determining whether each belief in the participant's reasoning chain is shared, the model first searches for an existing justification for an unshared belief's negation. If that fails, it then tries to create a new justification for an unshared belief's negation.</Paragraph>
    <Paragraph position="4"> And if that fails, it tries to create a new justification for the negation of one of the unknown beliefs. This way existing justifications are presented before an attempt is made to construct new ones. If none of these steps succeed, the assumption is that the rea-Boning chain is shared, and an attempt is made to form a new justification for the belief it contradicts.</Paragraph>
    <Paragraph position="5"> Thus, the belief our model addresses in a response arises from trying to discover whether or not it agrees with another participant's reasoning.</Paragraph>
  </Section>
  <Section position="8" start_page="0" end_page="0" type="metho">
    <SectionTitle>
6 Forming Justifications
</SectionTitle>
    <Paragraph position="0"> To form a new justification for a belief, our model performs a backward chaining search fo~&amp;quot; a chain of justification rules that justify the given belief and that can be iustantiated with beliefs from memory.</Paragraph>
    <Paragraph position="1"> We briefly illustrate this process by showing how SCRU~'Fy forms the response in (2). The belief to justify is that it's not desirable to have the lab members clean the lab. The first step is to instantiate the justification rules that have this belief as their consequent. That results in several possible justifications: (1) there's an undesirable enablement of cleaning the lab, (2) there's an undesirable effecf of cleaning the lab, or (3) the lab members cleaning the lab conflicts with a more desirable action.</Paragraph>
    <Paragraph position="2"> The next step is to try to fully iastantiate one of these rules. Applying the heuristic &amp;quot;Pursue the most instantiafed justification rule&amp;quot; suggests working on the last rule. Here, SCRUFFY instantiates it with a belief from memory that research is more desirable than cleaning. Once a rule is instantiated, it's necessary to verify that the beliefs it contains are shared. Here, that involves verifying that cleaning conflicts with research. It does, so the instantiated rule can be presented an the response.</Paragraph>
    <Paragraph position="3"> In general, the process is more complex than outlined here, since not all of the belief in an iustantiated justification rule may be shared, and there may be several ways to instantiate a particular rule. Those rules containing unknown beliefs require further justification, while those rules containing unshared be~ lids can be discarded.</Paragraph>
  </Section>
  <Section position="9" start_page="0" end_page="0" type="metho">
    <SectionTitle>
7 Background
</SectionTitle>
    <Paragraph position="0"> The closest related system is ABDUL/ILANA \[8\], which debated the responsibility mad cause for hlstotical events. It focused on the complementary problem of recogniling and providing episodic justifiC/~ tions, rather than justifications b~ed on the rel~.</Paragraph>
    <Paragraph position="1"> tionships between different plans.</Paragraph>
    <Paragraph position="2"> There are several models for recognising the r(c)lationship between argument propositions. Cobea's \[5\] taken each new belief and checks it for a justification relationship with a subset of the previnuslystated belief~ determined through the use of dip sing structure and clue words. That model tureen the existence of an evidence oracle capable of determining whether a justification relationship holds between may pair of beliefs. Our model ira.</Paragraph>
    <Paragraph position="3"> plements this oracle for a particular clam of plan-oriented belief justifications. OpFkt \[3\] recogniset bo. lief justifications in editorials almut economic planning through the use of argument units, a knowb edge structure that can be viewed as complex cow figurations of justification rules. The approaches are complementary, just as scripts \[7\] and plans \[6, 18\] are both useful methods for recognising the cam nections between events in a narrative.</Paragraph>
    <Paragraph position="4"> Several systems have concentrated on producing belief justifications. Our own earlier work \[14, 15, 16\] used a primitive form of j~tstification rules for factual beliefs as a template for producing corre~ tive responses for user misconceptions. Our current model extends this work to use these rules in both understanding and responding, and provides additional rules for evaluative beliefs.</Paragraph>
    <Paragraph position="5"> ROMPER \[11\] providas justifications for belidk about an object's class or attributes. But it profides these justifications purely by template matching, not by constructing more general reasoning chains.</Paragraph>
  </Section>
  <Section position="10" start_page="0" end_page="0" type="metho">
    <SectionTitle>
8 Current Status
</SectionTitle>
    <Paragraph position="0"> We've completely implemented the model di~umed in this paper. The program is written in Quintu~ Prolog and runs on an lIP/APOLLO workstation.</Paragraph>
    <Paragraph position="1"> Its input is a representation for a stated participant belief, and its output is a representation for m, up.</Paragraph>
    <Paragraph position="2"> propriate response. It currently includes 30 justitka~ tips rules and over 400 beliefs about various plans.</Paragraph>
    <Paragraph position="3"> We've used the program to participate in short ar-.</Paragraph>
    <Paragraph position="4"> gumentative dialogs in two disparate domains: dayto~day planning in the A! lab, and removing and recovering files in UNIX. We're currently using it to experiment with different heuristics for controlling the search process involved in rer.ognisbtg and c~u.strutting these reasoning ch~in~.</Paragraph>
    <Paragraph position="5"> Our xxmdcl he~ eevt.L'~l /~ey Ib~dt~tion~ ~e e~e c,~_dy AcrEs DE COLING-92, NAN1q~, 23-28 hofrr 1992 9 0 9 P toct 1: COLING.9&amp;quot;~ ~qt, l,~'n!s, Aut;. 23-28, 1992 now starting to addrem. First, it views plans as atomic units and comiders only a small set of &amp;quot;all or nothing&amp;quot; plan-oriented beliefs. This means it can't produce or understand justifications involving atel~ in a plan, conditional planning relationships, or beliefs not directly involving plans. Second, our model can understand only those responses that jnstify an earlier belief. It can't, for example, understand a response that contradicts an inferred justification for an earlier belief. These more complex relationships can be represented using juetificntinn rules, but our model must be extended to recognise them. Third, our model is reactive rather than initiatory: it produces respon~ only when there's n perceived din-.</Paragraph>
    <Paragraph position="6"> agreement. It needs to be extended to know why its in an argument, and to be aware of the underlying goals of the other argument participants.</Paragraph>
  </Section>
class="xml-element"></Paper>