<?xml version="1.0" standalone="yes"?> <Paper uid="W00-1406"> <Title>Towards the Generation of Rebuttals in a Bayesian Argumentation System</Title> <Section position="4" start_page="0" end_page="39" type="metho"> <SectionTitle> 2 Knowledge Representation </SectionTitle> <Paragraph position="0"> During the argumentation process, BIAS maintains two models of belief: a normative model and a user model, each of which is represented as a BN. The normative model contains information gathered directly by BIAS from the murder scenario, while the user model stores propositions that are presumed to Preamble: Mr. Body was found dead in his bedroom, which is in the second story of his house. Bullet wounds were found in Mr. Body's body. The bedroom window was broken and broken glass was found inside the window. A gun was found on the premises, and some fingerprints were found on the gun. In addition, inspection of the grounds revealed footprints in the garden and circular indentations in the ground outside the bedroom window.</Paragraph> <Paragraph position="1"> BIAS' argument: Bullets being found in Mr Body's body implies Mr Body was almost certainly shot. This implies he was almost certainly murdered.</Paragraph> <Paragraph position="2"> Forensics matching the bullets with the found gun implies the gun is almost certainly the murder weapon. Forensics matching the fingerprints with Mr Green implies Mr Green probably fired the gun. This together with the gun almost certainly being the murder weapon implies Mr Green probably fired the murder weapon, which implies he very probably had the means to murder Mr Body. The Bayesian Times reporting Mr Body took Mr Green's girlfriend implies Mr Green and Mr Body possibly were enemies, which implies Mr Green possibly had a motive to murder Mr Body. 
The neighbour reporting Mr Green not being in the garden at 11 implies Mr Green very probably wasn't in the garden at 11.</Paragraph> <Paragraph position="3"> Forensics reporting the time of death being 11 and the forensic analysis of the time of death being reliable implies the time of death was probably 11, which together with Mr Green very probably not being in the garden at 11 implies he probably wasn't in the garden at the time of death. This implies he probably didn't have the opportunity to murder Mr Body.</Paragraph> <Paragraph position="4"> Even though Mr Green very probably had the means to murder Mr Body and he possibly had a motive to murder Mr Body, Mr Green probably not having the opportunity to murder Mr Body implies he probably didn't murder Mr Body.</Paragraph> <Paragraph position="5"> User's rejoinder: Consider that the found gun is available only to Mr Green. BIAS' rebuttal: Actually, it is very improbable that the found gun is available only to Mr Green. However, even if it was available only to Mr Green, this would have only a small effect on the likelihood that Mr Green murdered Mr Body. This is for the following reason.</Paragraph> <Paragraph position="6"> The found gun being available only to Mr Green implies it is more likely that Mr Green fired the gun, making it almost certain. This implies it is more likely that he fired the murder weapon, making it almost certain, which implies it is even more likely that he had the means to murder Mr Body. This implies it is only slightly more likely that he murdered Mr Body.</Paragraph> <Paragraph position="7"> be believed by the user. 
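The two models of belief described above can be pictured with a miniature stand-in. The paper does not specify BIAS' internal encoding, so the network layout, node names, and probabilities below are illustrative assumptions; the sketch only shows a normative BN and a user BN over the same propositions yielding different beliefs.

```python
from itertools import product

# Hypothetical miniature belief network: each node maps to
# (parents, CPT), where the CPT gives P(node=True | parent values).
# This is only an illustrative stand-in for "a model represented as a BN".
def joint(net, assignment):
    """Probability of one full True/False assignment under the BN."""
    p = 1.0
    for node, (parents, cpt) in net.items():
        key = tuple(assignment[pa] for pa in parents)
        p_true = cpt[key]
        p *= p_true if assignment[node] else 1.0 - p_true
    return p

def belief(net, query, evidence=None):
    """P(query=True | evidence) by brute-force enumeration."""
    evidence = evidence or {}
    free = [n for n in net if n not in evidence]
    num = den = 0.0
    for values in product([False, True], repeat=len(free)):
        a = dict(zip(free, values), **evidence)
        p = joint(net, a)
        den += p
        if a[query]:
            num += p
    return num / den

# Normative model: bullets found strongly suggest a shooting.
normative = {
    "bullets_found": ((), {(): 0.98}),
    "body_was_shot": (("bullets_found",), {(True,): 0.95, (False,): 0.05}),
}
# User model: same structure, weaker belief in the implication.
user = {
    "bullets_found": ((), {(): 0.98}),
    "body_was_shot": (("bullets_found",), {(True,): 0.6, (False,): 0.05}),
}
print(belief(normative, "body_was_shot"))  # close to 0.93
print(belief(user, "body_was_shot"))
```

The point of the two parallel dictionaries is only that argument generation can query both models for the same proposition and compare the resulting beliefs.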
These propositions may be obtained from a variety of sources, e.g., they may have been inspected by the user in the murder scenario (by means of a WWW interface), or appear in BIAS' previous arguments or the user's rejoinders.</Paragraph> <Paragraph position="8"> Arguments generated by BIAS are represented by means of an Argument Graph - a sub-network of the normative model BN which ideally also contains nodes from the user model BN.</Paragraph> <Paragraph position="9"> The interpretation process, where BIAS identifies the reasoning path intended by the user, takes place in the user model, since BIAS tries to &quot;make sense&quot; of what the user is saying according to the system's view of the user's beliefs (Zukerman et al., 2000).</Paragraph> <Paragraph position="10"> In contrast, the processes for generating the initial argument and the rebuttals consult both the user model and the normative model to produce arguments that rely on beliefs held by both BIAS and the user if possible. Further, during rebuttal generation, the choice of a rebuttal strategy depends on the intended effect of the user's argument (according to the user model) and its actual effect (according to the normative model).</Paragraph> </Section> <Section position="5" start_page="39" end_page="40" type="metho"> <SectionTitle> 3 Determining a User's Line of </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="39" end_page="40" type="sub_section"> <SectionTitle> Reasoning </SectionTitle> <Paragraph position="0"> Our procedure for recognizing a user's intended line of reasoning from his/her rejoinder receives two inputs: a linguistic clue (&quot;but&quot; or &quot;consider&quot;) and a rejoinder proposition (R), e.g., &quot;but Mr Green was in the garden&quot;. It then finds paths in the user model BN that connect R to the goal proposition (Zukerman et al., 2000). 
During this process, BIAS copes with inference patterns that are different from its own by allowing inferred paths to contain a small &quot;gap&quot; composed of propositions that did not exist previously in the user model. Figure 2(a) illustrates an Argument Graph, a rejoinder R, and path R-I-M-E-A-G between them (composed of grey nodes). This path, called userPath, represents the line of reasoning intended by the user. The gap in this path contains nodes I and M (in italics), which means that the user inferred E directly from R.</Paragraph> <Paragraph position="1"> Each path is assigned a score based on the following factors: the impact of R on BIAS' argument along this path, whether path nodes are in the user's attentional focus, and BIAS' confidence in this path (determined from the information source of the nodes in this path, e.g., whether the user has seen the propositions in the path, asserted a belief about them or read them in BIAS' arguments).</Paragraph> <Paragraph position="2"> BIAS then selects the highest-scoring path. If several paths have a high score, the user is asked to choose one of them. 
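The gap-tolerant path search and path scoring just described can be sketched roughly as follows. The graph, the node names, the scoring weights, and max_gap are assumptions for illustration; the paper does not give the exact scoring function.

```python
# Hedged sketch of the path-recognition step: find paths from the
# rejoinder node R to the goal G in the user model's graph, allowing
# a small "gap" of nodes the user model did not previously contain.
def find_paths(edges, known, r, goal, max_gap=2):
    """All simple r..goal paths with at most max_gap unknown nodes."""
    paths = []
    def dfs(node, path, gap):
        if node == goal:
            paths.append(path)
            return
        for nxt in edges.get(node, ()):
            if nxt in path:
                continue
            g = gap + (0 if nxt in known else 1)
            if g > max_gap:
                continue
            dfs(nxt, path + [nxt], g)
    dfs(r, [r], 0)
    return paths

def score(path, impact, in_focus, confidence):
    """Combine impact on the argument, attentional focus and source
    confidence (the weights are assumptions, not from the paper)."""
    focus = sum(in_focus.get(n, 0.0) for n in path) / len(path)
    conf = sum(confidence.get(n, 0.5) for n in path) / len(path)
    return 0.5 * impact(path) + 0.25 * focus + 0.25 * conf

# Mirrors the Figure 2(a) example: nodes I and M form the gap.
edges = {"R": ["I"], "I": ["M"], "M": ["E"], "E": ["A"], "A": ["G"]}
known = {"R", "E", "A", "G"}
paths = find_paths(edges, known, "R", "G")
best = max(paths, key=lambda p: score(p, lambda p: 1.0, {}, {}))
print(best)  # ['R', 'I', 'M', 'E', 'A', 'G']
```

In the real system several candidate paths may tie with high scores, which is exactly the case where BIAS asks the user to choose.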
Typically, BIAS returns a single path, and sometimes it returns two or three paths.</Paragraph> <Paragraph position="3"> Hence, presenting them to the user for selection is a reasonable course of action.</Paragraph> </Section> </Section> <Section position="6" start_page="40" end_page="44" type="metho"> <SectionTitle> 4 Rebuttal Generation </SectionTitle> <Paragraph position="0"> Given a user's rejoinder proposition R, we consider three main types of rebuttals: (1) refute R, (2) dismiss the line of reasoning intended by the user (userPath), and (3) strengthen the argument goal G. [Figure 2: (b) Refute R: R = userVal has large effect on G; BIAS and the user disagree on R. (d) Strengthen G: R = userVal has large effect on G; BIAS and the user agree on R.]</Paragraph> <Section position="1" start_page="40" end_page="40" type="sub_section"> <SectionTitle> Graph and Rejoinder Strategies </SectionTitle> <Paragraph position="0"> Diagrammatic representations of these rebuttal strategies and abridged versions of their applicability conditions appear in Figure 2(b-d). These conditions, which are specified in the following sections, depend on (1) whether the rejoinder affects the system's argument directly or indirectly, (2) the beliefs in R in the normative and user models, and (3) the impact of R on the goal proposition along userPath in the normative and user models.</Paragraph> </Section> <Section position="2" start_page="40" end_page="41" type="sub_section"> <SectionTitle> 4.1 Refute the rejoinder </SectionTitle> <Paragraph position="0"> This strategy consists of generating an argument against the user's belief in the rejoinder proposition R. 
This strategy is applicable under the following conditions:</Paragraph> <Paragraph position="1"> (R1) The beliefs in R in the user model and the normative model differ significantly (the user's belief in R contradicts BIAS' belief); and (R2) Either (a) R was stated or implied in BIAS' argument (R appears in the Argument Graph), or (b) The belief in R stated by the user has a significant effect on the goal G in the normative model in the same direction as its effect on G in the user model.</Paragraph> <Paragraph position="2"> For example, if the user's rejoinder to the argument in Figure 1 was &quot;But Mr Green and Mr Body were not enemies&quot;, then conditions R1 and R2a would be satisfied, since the rejoinder directly contradicts what was stated by BIAS in the argument. If the user's rejoinder was &quot;But the neighbour saw Mr Green shoot Mr Body&quot;, then conditions R1 and R2b would be satisfied, since an inference from this rejoinder contradicts BIAS' belief in Mr Green's lack of opportunity to kill Mr Body (and consequently in Mr Green's guilt). The argument schema for the refute the rejoinder strategy and a sample rebuttal produced with this schema are shown below. The sub-argument for the rejoinder proposition is generated by activating our Bayesian argument generator (Zukerman et al., 1998) with the proposition Mr Green and Mr Body were enemies as the goal. In this case, the belief in the rejoinder node resulting from the sub-argument differs from that stated in the initial argument, owing to the additional information included in the sub-argument. Hence, the implications from the rejoinder node are followed. 
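Conditions R1 and R2 can be expressed as a small predicate. The thresholds and the helper functions below are hypothetical; the paper states the conditions only informally.

```python
# Illustrative check of conditions R1 and R2 for the "refute the
# rejoinder" strategy. Belief functions return P(R=True) in each
# model; effect_on_goal returns the signed change in belief in G
# caused by adopting the user's stated belief in R.
def refute_applicable(r, user_belief, norm_belief,
                      argument_graph, effect_on_goal,
                      diff_threshold=0.4, effect_threshold=0.2):
    # R1: beliefs in R differ significantly between the two models.
    r1 = abs(user_belief(r) - norm_belief(r)) > diff_threshold
    if not r1:
        return False
    # R2a: R was stated or implied in BIAS' argument.
    if r in argument_graph:
        return True
    # R2b: the user's stated belief in R moves the goal G
    # significantly, and in the same direction, in both models.
    norm_eff = effect_on_goal("normative", r)
    user_eff = effect_on_goal("user", r)
    same_direction = norm_eff * user_eff > 0
    return abs(norm_eff) > effect_threshold and same_direction

# "But Mr Green and Mr Body were not enemies": R appears in the
# Argument Graph, so R1 plus R2a fire (the numbers are made up).
applicable = refute_applicable(
    "enemies", lambda r: 0.1, lambda r: 0.7,
    argument_graph={"enemies", "motive", "goal"},
    effect_on_goal=lambda model, r: 0.0)
print(applicable)  # True
```

The same predicate returns False when the two models roughly agree on R, which is the cue to consider the dismissal or strengthening strategies instead.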
The procedure for following these implications is described in Section 4.2.</Paragraph> </Section> <Section position="3" start_page="41" end_page="42" type="sub_section"> <SectionTitle> 4.2 Dismiss the user's line of reasoning </SectionTitle> <Paragraph position="0"> This strategy consists of showing the user how his/her argument fails to achieve its intended effect. We distinguish between concessive and contradictory dismissals. The former is used when the system agrees with the rejoinder proposition R, and the latter when the system disagrees with R. This strategy is applicable under the following condition: (D) R does not significantly affect the belief in G in the normative model.</Paragraph> <Paragraph position="1"> This condition is illustrated by the rejoinder to the argument in Figure 1, &quot;Consider that the found gun was available only to Mr Green&quot;, which purports to increase the belief in Mr Green's means to kill Mr Body, and hence Mr Green's guilt. However, since this increment is quite small, BIAS adopts the dismissal</Paragraph> <Paragraph position="2"> strategy, which follows the effect of the user's rejoinder through the user's line of reasoning, pointing out how the effect of the rejoinder differs from its intended effect. It is worth noting that the main difference between a dismissal and a strengthening of the goal is that BIAS decides to generate a dismissal when its current beliefs are sufficient to invalidate the user's line of reasoning, whereas it decides to [Footnote: The rejoinders shown in this paper are posed by the user immediately after the argument in Figure 1.]</Paragraph> <Paragraph position="3"> Refute R: 1. Deny the belief in R stated by the user.</Paragraph> <Paragraph position="4"> 2. Present a sub-argument for the normative belief in R.</Paragraph> <Paragraph position="5"> 3. 
If R is not in the Argument Graph or the belief in R as a result of the sub-argument differs from that originally stated by BIAS, then follow the effect of R along userPath up to the first node in the Argument Graph whose belief is the same as that stated in the initial argument.</Paragraph> <Paragraph position="6"> Rejoinder: But Mr Green and Mr Body were not enemies.</Paragraph> <Paragraph position="7"> Rebuttal: Actually, it is quite likely that Mr Green and Mr Body were enemies. This is for the following reason.</Paragraph> <Paragraph position="8"> The forensic analysis of the blue paint being reliable and forensics having found some blue paint which they estimate is one week old implies a blue car was here last week. This together with Mr Green having a blue car implies Mr Green's car was almost certainly here last week, which implies Mr Green almost certainly visited Mr Body last week.</Paragraph> <Paragraph position="9"> The neighbour being sober implies she is very probably reliable. This together with the neighbour reporting Mr Green arguing with Mr Body last week implies the neighbour very probably heard Mr Green arguing with Mr Body last week, which together with Mr Green almost certainly visiting Mr Body last week implies he almost certainly argued with Mr Body.</Paragraph> <Paragraph position="10"> The Bayesian Times reporting Mr Body took Mr Green's girlfriend implies Mr Body probably seduced Mr Green's girlfriend. This together with Mr Green almost certainly arguing with Mr Body implies Mr Green and Mr Body probably were enemies.</Paragraph> <Paragraph position="11"> Let's now go back to the main argument.</Paragraph> <Paragraph position="12"> Mr Green and Mr Body probably being enemies implies it is more likely that Mr Green had a motive to murder Mr Body, making it rather likely. 
This implies it is only slightly more likely that he murdered Mr Body.</Paragraph> strengthen the goal when additional information is required to defeat the impact of the user's rejoinder. Our algorithm for dismissing the user's line of reasoning follows userPath until it reaches a point where the user's line of reasoning fails, i.e., it has no effect on a proposition on userPath in the Argument Graph. It is necessary for the rebuttal to reach the Argument Graph even if the failure of the rejoinder occurs earlier in userPath, because the user's rejoinder refers to the argument, hence at least one proposition in the argument must be mentioned when addressing the impact of this rejoinder.</Paragraph> <Paragraph position="14"> The user's line of reasoning may fail due to the following factors: (1) s/he did not consider propositions that have a significant effect on the propositions in userPath; or (2) his/her belief in one or more of the propositions s/he did consider differs significantly from that in the normative model, and this proposition has a substantial effect on a proposition in userPath. Propositions of the first type are included in a set called SIGneighbours, and propositions of the second type are included in DIFFneighbours. Our dismissal algorithm calls our Bayesian argument generator to generate sub-arguments for the propositions in DIFFneighbours, but simply presents the propositions in SIGneighbours without arguing for them.</Paragraph> <Paragraph position="15"> 1. 
For i = 1 to n do: (a) Set SIGneighbours(Pi) to the nodes that are linked to Pi in the normative model but not in the user model and have a significant effect on the belief in Pi.</Paragraph> <Paragraph position="16"> (b) If the belief in Pi in the user model differs significantly from the belief in Pi in the normative model, then set DIFFneighbours(Pi) to the nodes that are linked to Pi in both the user model and the normative model and which have a different belief in the user model from that in the normative model.</Paragraph> <Paragraph position="17"> (c) For each node Pj ∈ DIFFneighbours(Pi) generate a sub-argument for the normative belief in Pj.</Paragraph> <Paragraph position="18"> 2. Present the resulting rebuttal using the appropriate schema, DismissContradict or DismissConcede (Figures 4 and 5, respectively).</Paragraph> <Paragraph position="19"> Our concessive schema differs from our contradictory schema in two respects. Firstly, the former acknowledges the user's rejoinder, while the latter denies it. In addition, the concessive schema follows the user's line of reasoning starting from the normative belief in the rejoinder proposition (which is close to the belief indicated by the user), while the contradictory schema follows a hypothetical line of reasoning starting from the user's belief in the rejoinder proposition (which differs substantially from the normative belief). In both cases the user's line of reasoning fizzles out, due to its small effect on the DismissContradict userPath: 1. Deny the belief in R stated by the user, and dismiss its hypothetical effect on the goal proposition.</Paragraph> <Paragraph position="20"> 2. Present the sub-arguments for the nodes in DIFFneighbours.</Paragraph> <Paragraph position="21"> 3. 
FollowPath userPath from the rejoinder proposition to the goal.</Paragraph> <Paragraph position="22"> FollowPath userPath: For i = 0 to n-1 (where n is the number of nodes in userPath) do: 1. If Pi+1 is not in the Argument Graph or DIFFneighbours(Pi+1) is not empty, then present an implication from Pi to Pi+1 which includes the nodes linked to Pi+1 in the user model plus the nodes in SIGneighbours(Pi+1).</Paragraph> <Paragraph position="23"> Else present an implication which reflects only the relative impact of Pi on Pi+1.</Paragraph> <Paragraph position="24"> 2. If the resulting belief in Pi+1 is the same as that stated in the initial argument, then stop.</Paragraph> </Section> <Section position="4" start_page="42" end_page="43" type="sub_section"> <SectionTitle> Path Procedure </SectionTitle> <Paragraph position="0"> goal according to the normative model irrespective of its truth value.</Paragraph> <Paragraph position="1"> Both schemas follow userPath from the rejoinder proposition to the goal using procedure FollowPath (Figure 4). This procedure distinguishes between propositions in userPath for which the main influencing factors (DIFFneighbours and SIGneighbours) should be presented, and those which require only information regarding the relative impact of the preceding proposition in userPath. The latter propositions are characterized as follows: (1) they appear in the Argument Graph; and (2) the user's beliefs in the nodes outside userPath that have a significant effect on these propositions are consistent with the normative beliefs in these nodes. 
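A rough sketch of the FollowPath procedure follows, under the assumption that presentation steps are recorded symbolically rather than rendered as English text; the stub functions and node names are hypothetical.

```python
# Sketch of FollowPath: walk userPath from the rejoinder to the
# goal, deciding for each step whether to present a full implication
# (with influencing neighbours) or only the relative impact of the
# preceding proposition, and stopping once the belief matches what
# the initial argument stated.
def follow_path(user_path, argument_graph, diff_nb, sig_nb,
                belief_after, initial_belief):
    steps = []
    for i in range(len(user_path) - 1):
        p, q = user_path[i], user_path[i + 1]
        if q not in argument_graph or diff_nb.get(q):
            steps.append(("full", p, q, sig_nb.get(q, [])))
        else:
            steps.append(("relative", p, q))
        # Stop once the belief in q is the one the argument stated.
        if q in initial_belief and belief_after(q) == initial_belief[q]:
            break
    return steps

path = ["R", "E", "A", "G"]
steps = follow_path(
    path, argument_graph={"E", "A", "G"},
    diff_nb={}, sig_nb={},
    belief_after=lambda q: "likely",
    initial_belief={"A": "likely"})   # belief restored at node A
print(steps)  # relative R-E and E-A implications, then the walk stops
```

In the running example this is the mechanism by which the rebuttal presents only the relative influence of Mr Green fired the gun on Mr Green fired the murder weapon.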
For instance, the rebuttal in Figure 1, which is generated by means of the DismissContradict schema, presents the relative influence of Mr Green fired the gun on Mr Green fired the murder weapon, since the user and BIAS hold consistent beliefs regarding the gun being the murder weapon.</Paragraph> <Paragraph position="2"> To illustrate the operation of the dismissal algorithm, let us consider the rejoinder &quot;But the time of death was 11&quot;, which yields the following line of reasoning: The time of death was 11 (→ Mr Green was in the garden at 11) → Mr Green was in the garden at the time of death → Mr Green had the opportunity to kill Mr Body → Mr Green killed Mr Body. DIFFneighbours includes only one proposition, Mr DismissConcede userPath: 1. Acknowledge the belief in R stated by the user, and dismiss its effect on the goal proposition.</Paragraph> <Paragraph position="3"> 2. Present the sub-arguments for the nodes in DIFFneighbours.</Paragraph> <Paragraph position="4"> 3. FollowPath userPath from the rejoinder proposition to the goal.</Paragraph> <Paragraph position="5"> Rejoinder: But the time of death was 11.</Paragraph> <Paragraph position="6"> Rebuttal:</Paragraph> <Paragraph position="7"> Indeed, it is quite likely but not entirely certain that the time of death was 11. However, this has only a small effect on the likelihood that Mr Green murdered Mr Body.</Paragraph> <Paragraph position="8"> I will show that Mr Green almost certainly wasn't in the garden at 11.</Paragraph> <Paragraph position="9"> Mr Green's witness not being related to Mr Green implies she is very probably reliable.</Paragraph> <Paragraph position="10"> This together with Mr Green's witness reporting Mr Green being at the football at 10:30 implies Mr Green was almost certainly at the football at 10:30.</Paragraph> <Paragraph position="11"> The neighbour being sober implies she is almost certainly reliable. 
This together with the neighbour reporting Mr Green not being in the garden at 11 implies the neighbour never saw Mr Green in the garden at 11, which together with Mr Green almost certainly being at the football at 10:30 implies he almost certainly wasn't in the garden at 11.</Paragraph> <Paragraph position="12"> Let's now go back to the main argument.</Paragraph> <Paragraph position="13"> Even though the time of death was probably 11, Mr Green almost certainly not being in the garden at 11 implies it is only slightly less likely that he was in the garden at the time of death. This implies it is only slightly less likely that he had the opportunity to murder Mr Body, which implies it is only slightly less likely that he murdered Mr Body.</Paragraph> <Paragraph position="14"> Green was in the garden at 11, since the belief in it in the normative model differs from that in the user model, thereby prompting the generation of a sub-argument for this proposition. This sub-argument is stronger than that incorporated in the initial argument, yielding a belief in Mr Green being in the garden at 11 that is lower than the belief indicated in the original argument, which in turn reduces the belief in Mr Green being in the garden at the time of death, Mr Green having the opportunity to kill Mr Body, and Mr Green actually murdering Mr Body.</Paragraph> <Paragraph position="15"> The resulting rebuttal, which is presented by means of the DismissConcede schema, appears in Figure 5.</Paragraph> </Section> <Section position="5" start_page="43" end_page="44" type="sub_section"> <SectionTitle> 4.3 Strengthen the goal </SectionTitle> <Paragraph position="0"> This strategy consists of generating a stronger argument for the original goal proposition G, bringing to bear information that did not appear in the initial argument (either because BIAS was unaware of it or because BIAS chose to exclude it from the argument). 
This strategy is applicable under the following conditions: (G1) The beliefs in R in the normative and user models are consistent; and (G2) R has a substantial detrimental effect on the belief in G in the normative model. This change in belief should be in the same direction as the change occurring in the user model.</Paragraph> <Paragraph position="1"> These conditions represent a situation where the system did not take into account a particular fact, but when this fact comes to its attention the system realizes the effect of this fact on the goal. For instance, if the user discovers new evidence that places Mr Green in the garden at 11, a rejoinder which presents this proposition will increase the belief in Mr Green's opportunity to kill Mr Body along the following line of reasoning: Mr Green was in the garden at 11 → Mr Green was in the garden at the time of death → Mr Green had the opportunity to kill Mr Body → Mr Green killed Mr Body. In this case, BIAS tries to strengthen the argument for Mr Green's innocence by arguing separately against propositions along this line of reasoning (other than the rejoinder node, which is true in this example).</Paragraph> <Paragraph position="2"> If no sub-argument can be generated for these nodes or the generated sub-arguments do not significantly affect the goal, then BIAS agrees with the user.</Paragraph> <Paragraph position="3"> Our algorithm for strengthening the goal searches along userPath for propositions that have been affected by the rejoinder, but that will reinforce BIAS' goal proposition if their belief is changed. It then tries to generate sub-arguments that change the beliefs in these propositions. In order to localize the effect of the user's rejoinder, the search and sub-argument generation processes start at R and proceed towards the goal. 
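The search described above (start at R, proceed towards the goal, attempt a sub-argument at each affected proposition) can be sketched as follows. The stub functions stand in for the Bayesian argument generator and belief queries, and the node names are assumptions.

```python
# Sketch of the strengthen-the-goal search: walk userPath from R
# towards G, and at each proposition try to generate a sub-argument
# that moves the goal belief back in BIAS' direction. make_subargument
# is a stub standing in for the Bayesian argument generator of
# Zukerman et al. (1998).
def strengthen_goal(user_path, goal_ok, desired_belief,
                    current_belief, make_subargument, significant):
    sub_args = {}
    for p in user_path[1:]:          # skip the rejoinder node R
        if goal_ok():
            break                    # goal belief already as intended
        want = desired_belief(p)
        if want != current_belief(p):
            arg = make_subargument(p, want)
            if arg is not None and significant(arg):
                sub_args[p] = arg
    return sub_args

# Toy run mirroring the example: no argument can be generated for
# the time-of-death node, but one exists for "opportunity".
gen = lambda p, want: "arg-" + p if p == "opportunity" else None
subs = strengthen_goal(
    ["in_garden_11", "in_garden_tod", "opportunity", "guilt"],
    goal_ok=lambda: False,
    desired_belief=lambda p: "low",
    current_belief=lambda p: "high",
    make_subargument=gen,
    significant=lambda a: True)
print(subs)  # {'opportunity': 'arg-opportunity'}
```

Presenting the collected sub-arguments in path order is then the job of a FollowPath-like procedure, as the text notes.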
The presentation of the rebuttal is also done in this order, using a procedure which is similar to the FollowPath procedure described in Section 4.2.</Paragraph> Algorithm StrengthenGoal(userPath) Let userPath be composed of propositions R = P0 → P1 → P2 → ... → Pn = G.</Paragraph> 1. For i = 1 to n, while the belief in G is not as intended by BIAS, do: (a) Determine which belief in Pi will move the belief in G in the normative model in the direction intended by BIAS.</Paragraph> (b) If this belief in Pi differs from the current belief in Pi, then i. Generate a sub-argument for the desired belief in Pi.</Paragraph> ii. If the sub-argument yields a significant change in the belief in Pi or in the belief in G then store the sub-argument in SubAG(Pi).</Paragraph> 2. Present the resulting rebuttal (composed of the user's line of reasoning and intervening subarguments) using the StrengthenGoal schema in Figure 6.</Paragraph> To illustrate the operation of this algorithm, let us reconsider the rejoinder &quot;Consider Mr Green was in the garden at 11&quot;, and let us assume that the rejoinder proposition is true. Inspection of the propositions affected by this rejoinder reveals that if Mr Green was not in the garden at the time of death, then the belief in the goal would be closer to that intended by BIAS. However, an argument for this proposition cannot be generated. Hence, BIAS proceeds to the next proposition, Mr Green had the opportunity to murder Mr Body, and calls our Bayesian argument generator to generate an argument that contradicts this proposition. The Bayesian generator produces an argument which reduces the belief in this proposition. However, this belief cannot be reduced to the extent that it exculpates Mr Green. 
Thus, BIAS attempts to generate an argument for the goal node (by trying to reduce the belief in Mr Green's means and motive to kill Mr Body). However, this attempt also fails, leaving BIAS with a moderate belief in Mr Green's guilt.4 It is important to note that although BIAS' immediate objective is to strengthen its belief in the goal proposition, its primary purpose is to &quot;tell the truth&quot; to the best of its knowledge (which may contradict its initial beliefs), rather than to win the argument at all costs. Our algorithm supports this attitude by retaining any sub-argument which has a significant impact on the goal or on a proposition on userPath. We use this disjunctive condition on impacts in order to address a situation where a proposition Pj on userPath has been affected by a sub-argument, but does not affect the goal because of an inaccurate belief in a proposition Pk which appears later on userPath (recall that the propositions are inspected from R towards the goal). However, StrengthenGoal userPath: 1. Acknowledge the belief in R stated by the user, and set lastProposition to R.</Paragraph> 2. Until the goal proposition is reached do: (a) If after lastProposition there is a proposition Pi ∈ userPath for which a sub-argument was generated (SubAG(Pi) is not empty), then i. Follow userPath from lastProposition to Pi.</Paragraph> ii. Present the sub-argument for Pi.</Paragraph> iii. 
Set lastProposition to Pi.</Paragraph> (b) Else follow the remainder of userPath.</Paragraph> </Section> </Section> <Section position="7" start_page="44" end_page="45" type="metho"> <SectionTitle> 5 Related Research </SectionTitle> <Paragraph position="0"> Our research builds on work described in (Zukerman et al., 1998), which generated arguments from BNs, and (Zukerman et al., 1999), which enabled a user to explore the impact of different propositions on the generated arguments. The former system only generated arguments, while the latter received instructions from a user (through a menu) about modifications to be performed to a previously generated argument, e.g., including or excluding a proposition, and then generated a new argument in response to these instructions. Neither of these systems generates rebuttals which take into account a user's intentions, as done by BIAS.</Paragraph> <Paragraph position="1"> Several researchers have dealt with different aspects of argumentation; e.g., (Flowers et al., 1982; Quilici, 1992; Chu-Carroll and Carberry, 1995; Carberry and Lambert, 1999). Like BIAS, the system described in Carberry and Lambert (1999) combined linguistic and contextual knowledge to recognize a user's intentions from rejoinders. However, their system did not generate rebuttals. Chu-Carroll and Carberry (1995) provided a comprehensive approach for proposal evaluation which focused on dialogue strategies rather than argumentation strategies. In addition, they considered exchanges where each participant utters one or two propositions in each conversational turn. In contrast, we focus on strategies for the generation of extended probabilistic rebuttals to individual rejoinders. In the future, our strategies will be combined with dialogue strategies in a complete argumentation system. [Continued from Section 4.3: once a sub-argument for Pk is presented, then Pj affects the goal. If BIAS accepted only sub-arguments for propositions which have a significant impact on the goal, then in this case it would miss the opportunity to strengthen the goal.]</Paragraph> <Paragraph position="1"> [Footnote: The resulting argument has not been included owing to space limitations.]</Paragraph> <Paragraph position="2"> Flowers et al. (1982) presented a partial theory of argumentation which advocated the combination of distinct knowledge sources; their implementation focused on recognizing and providing episodic justifications to historical events. Our focus on the generation of rebuttals in the context of BNs allows us to provide an operational definition for the broad argumentation strategies discussed in the literature, e.g., attack the main point directly or attack the supporting evidence (Flowers et al., 1982). The argumentation system described in (Quilici, 1992) used a plan-based model of the user's beliefs to recognize the justification for a user's proposal and provide its own justifications. However, the rebuttals generated by this system were based on a single strategy: applying backwards chaining using a set of justification rules. This strategy is a special case of the more general rebuttal schemas presented here.</Paragraph> </Section> </Paper>