<?xml version="1.0" standalone="yes"?>
<Paper uid="W00-1408">
  <Title>Using Argumentation Strategies in Automated Argument Generation</Title>
  <Section position="5" start_page="56" end_page="59" type="metho">
    <SectionTitle>
4 Using Argumentation Strategies
</SectionTitle>
    <Paragraph position="0"> During the argument generation process, the Strategist performs the following actions: (1) determine the potential applicabihty of the different .argumentation strategies based on the beliefs in the nodes in the Argument Graph, (2) propose specific candidates for each apphcable.strategy, and. (3) select a concise argument among these candidates.</Paragraph>
    <Section position="1" start_page="56" end_page="57" type="sub_section">
      <SectionTitle>
4.1 Anticipating the effect of a node
</SectionTitle>
      <Paragraph position="0"> The Strategist selects an argumentation strategy based on the Analyzer's assessment of the effect of the nodes in the Argmnent Graph on the goal proposition (and vice versa). This effect is determined by means of a constrained Bayesian propagation scheme  2&amp;quot; in both the user model BN and the normative model BN. Specifically, for each node at the '!edge&amp;quot; of the Argument Graph, each new node (i.e., one added in the last generation step), and each previous node to which new links were added in the last step, the Analyzer calculates its positive and negative effect on the goal, and the positive and negative effect of the goal on this node. 4 The positive~negative effect of a node X on a node Y is the hypothetical belief in node Y after propagating a high/low belief in node X (which represents a true/false belief in the corresponding proposition). TheTositive/negative effect of a node on the goal is required to generate arguments by cases, and the positive/negative effect of the goal on a node is required to generate hypothetical arguments, viz reductio ad absurdum and inference to the best explanation. When computing positive/negative effects for a particular node, the Bayesian propagation process uses the prior beliefs of the other nodes in the Argument Graph.</Paragraph>
    </Section>
    <Section position="2" start_page="57" end_page="57" type="sub_section">
      <SectionTitle>
4.2 Determining applicable argumentation strategies
</SectionTitle>
      <Paragraph position="0"> strategies After receiving the Analyzer's report, the Strategist checks the following conditions to determine the potential applicability of each argumentation strategy. 5 Reductio ad absurdum - The negation of the goal G undermines a proposition Q which is firmly believed independently of the goal (i.e., P(Q) = High, where Q is a premise or inferred from premises). Hence, P(QI~G) = Low (where Q is temporarily treated as if it were not a premise, so that its value may change when the goal is negated).</Paragraph>
      <Paragraph position="1"> Inference to the best explanation - The assertion of the goal G supports a proposition Q which is firmly believed (i.e., P(Q) = High, where Q is a premise or inferred from premises), but which would be unexplained (improbable) without supposing the truth of the goal. Hence, whereas P(Q\[G) = High, in the absence of information about G, the belief in Q is low (where Q is temporarily treated as if it were not a premise).</Paragraph>
      <Paragraph position="2"> Reasoning by cases (exclusive) - A proposition Q satisfies one of the following conditions: (1) it has-an indeterminate level of belief in both the normative and user models (i.e., its probability is within an interval \[0:5=t:O\]); or (2)it has highly  sequent discussion assume a positive bias, i.e., the proposition under consideration is believed: for a negative bias some expressions will be altered accordingly.</Paragraph>
      <Paragraph position="3"> divergent levels of belief in the user model and the normative model.,: For either condition, the belief in the goal must be high both when a high level of belief is ascribed to Q and when a low level of belief is ascribed.</Paragraph>
      <Paragraph position="4"> Reasoning by cases (non-exclusive) - There exists a set of propositions {Q1,..-,Q,~}, each of which leads to a strong belief in the goal (i.e., P(GIQi ) = High for i = 1,...,n), and the disjunction of these propositions is strongly believed (i.e., P(Vi Qi) :--High) &amp;quot;6 ......... Premise to goal- This is the default strategy and requires only that given the current beliefs in the premises, the belief in the goal will be in the target range in both the normative and user BNs.</Paragraph>
      <Paragraph position="5"> Since the conditions for the reasoning by cases strategies consider nodes in the Argument Graph separately, they do not guarantee that all opportunities to argue by cases will be found. For instance, two particular nodes may not satisfy the conditions for the exclusive strategy when considered separately (because when a node is ascribed a high or low level of belief, the prior beliefs of the other nodes are used for Bayesian propagation). However, when considered jointly, the four permutations of extreme beliefs in these nodes, viz high-high, high-low, low-high and low-low, may satisfy the applicability conditions of the exclusive strategy. At present, these opportunities are missed by NAG. However, this may be an appropriate outcome, since such complex arguments by cases are quite rare.</Paragraph>
    </Section>
    <Section position="3" start_page="57" end_page="58" type="sub_section">
      <SectionTitle>
4.3 Proposing specific arguments for each strategy
</SectionTitle>
      <Paragraph position="0"> strategy In this step, the Strategist considers the propositions or sets of propositions that satisfy the conditions for each applicable argumentation strategy, and generates a specific argument based on each of these propositions (or sets of propositions). This is done as follows for each argumentation strategy.</Paragraph>
      <Paragraph position="1"> Reductio ad absurdum and Inference to the best explanation. For each proposition Q which satisfies the conditions for reductio ad absurdum, the Strategist extracts from the Argument Graph the * subgraph whicbcorresponds to the line of reasoning going from the goal node (which was ascribed a low level of belief) to Q (which has been contradicted 6This situation may be generalized so that any Qi consists of a subset of propositions which lead to the goal. However, in the current implementation, each Qi consists of one proposition only. Further, owing to practicality considerations, at present NAG implements a limited version of the applicability conditions for the non-exclusive strategy whereby only pairs of nodes that are relatively close to tile goal and to observable nodes are inspected. This last requirement is necessary in order to determine which combinations of beliefs are possible for the inspected pairs of nodes.</Paragraph>
      <Paragraph position="2">  as a result of this line of reasoning). Each line of When choosing its final argument, the Stratereasoning is obtained by treating themegation of;the: .: ~. gis.t considers;only~=ice arguments, i.e., those that goal as a premise and ~Q as a goal.</Paragraph>
      <Paragraph position="3"> A similar process is applied for the inference to the best explanation strategy, but the goal is ascribed a high level of belief, and Q is expected to achieve a high level of belief as a result of the argument. In general, when using the reductio ad absurdum strategy, people identify only one target proposition to be contradicted when the goal is negated. In contrast, for inference to the best explanation, the goal is often used to explain several propositions. In the current implementation, only one target proposition is being considered for both strategies.</Paragraph>
      <Paragraph position="4"> Reasoning by cases (exclusive). If proposition Q satisfies the conditions for the exclusive strategy, then a copy of the Argument Graph is made for the case where a high belief is ascribed to Q and another copy is made for the case where a low belief is ascribed to Q. Both copies have the same structure, but the propagated values are different. The argument by cases consists of a pair of Argument Graphs, one graph for each case. These graphs do not require further analysis, since the results of propagating these beliefs through the Argument Graph were previously returned by the Analyzer (Section 4.1), and according to the applicability conditions for the exclusive strategy, the argument for each case is sufficiently nice.</Paragraph>
      <Paragraph position="5"> Reasoning by cases (non-exclusive). If a set of nodes {Q1,---, Qn } satisfies the applicability conditions of the non-exclusive strategy, an Argument Graph is generated for each of the n cases by ascribing a high level of belief to each Qi in turn (the rest of the nodes retain their existing degrees of belief). If the Analyzer reports that the argument corresponding to each graph is sufficiently nice, an argument by cases is constructed by listing each graph in turn.</Paragraph>
      <Paragraph position="6"> Premise to goal. Finally, the Strategist considers a premise to goal argument by inspecting the belief in the goal in both the normative and user models after propagation from the premises (this belief was computed by the Analyzer). If the argument is nice enough, then it is retained as a possible candidate.</Paragraph>
      <Paragraph position="7"> If upon completion of this process, none of these argumentation strategies has yielded a nice enough argument, the reasoning context is updated with nodes that were connected to the goal or became salient during the current cycle. The Strategist then re-invokes the spreading activation process, and re-activates the Generator to expand the Argument Graph (Section 3). After expansion, the analysis and strategy proposal processes are repeated. If one or more candidate argunmnts were generated, the Strategist selects a concise argument as described in the next section.</Paragraph>
      <Paragraph position="8"> achieve a degree of belief in the goal which lies inside the target range in both the user model and the normative model. However, we do not have a direct means for determining the belief in the goal in the user model as a result of a hypothetical argument or an argument by cases. This is because the rhetorical force of these strategies affects the user's beliefs in a manner that deviates from the effect modeled by means of Bayesian propagation, as illustrated by the sample arguments in Section 5. The problem of incorporating a model of the rhetorical force of an argument into a Bayesian propagation scheme is yet to be addressed. Nonetheless, in order to test the operation of our mechanism, we currently approximate the effect of an argument (regardless of its strategy) on the user's beliefs by performing Bayesian propagation in the user model BN. In the future, as a first step in modeling rhetorical factors, we intend to investigate how the beliefs in our user models deviate from users' actual (reported) beliefs.</Paragraph>
    </Section>
    <Section position="4" start_page="58" end_page="59" type="sub_section">
      <SectionTitle>
4.4 Selecting a concise argument
</SectionTitle>
      <Paragraph position="0"> Here the Strategist removes long arguments, so that a final selection is made among (shorter) arguments of similar length. 7 NAG does not simply select the most concise argument, because as shown in Section 6, the choice of strategy has a greater influence on the addressee's beliefs than any (small) remaining differences in argument length.</Paragraph>
      <Paragraph position="1"> The Strategist initially performs coarse pruning on the Argument Graphs that were generated by the premise to goal or reasoning by cases strategies. This coarse-grained pruning examines separately the impact of each individual line of reasoning contributing to the belief in the goal, removing entire lines that are not strictly necessary to achieve a belief in the goal that falls inside the target range (the arguments generated using the reductio ad absurdum and inference to the best explanation strategies are not coarsely pruned, since those arguments already comprise a single line of reasoning). Sometimes, the impact of certain lines of reasoning cannot, be assessed in isolation, since two or more lines may contribute jointly towards the belief in a proposition in a mutually dependent manner. Often however, some of the contributing lines of reasoning are independent or nearly so, and coarse pruning can proceed. Next, the Strategist drops from consideration the arguments that are significa~ntly longer than the shortest argument (where length is measured in number of nodes),8 and selects one of the remaining 7Other factors, such as the structural complexity of the arguments, will be considered in the future.</Paragraph>
      <Paragraph position="2"> SAlthough an Argument Graph is further pruned before presenting its corresponding argument to the user (Section 3), it is reasonable to consider lhe length of each candidate graph  arguments according to the following order of preference: reasoning by cases, premise to goal, inference to the best explanation and reductio ad absurdum.</Paragraph>
      <Paragraph position="3"> This ordering is consistent with the results of our evaluation (Section 6).</Paragraph>
    </Section>
  </Section>
  <Section position="6" start_page="59" end_page="59" type="metho">
    <SectionTitle>
5 Example - The Problem of Evil
</SectionTitle>
    <Paragraph position="0"> We now illustrate our argumentation mechanism with &amp;quot;The Problem of Evil&amp;quot;. Given a preamble that establishes that there is evil in the world, and the goal to prove that there is no God, NAG obtains the Argument Graph in Figure 3 after one focusing-generation cycle, and produces the Argument Graphs corresponding to the arguments in Figure 4 (the adverbs that indicate level of belief and the conjunctive expressions are italicized in the arguments for ease of comparison).9 These arguments are based on a definition of God that requires God to be both omnipotent and benevolent.</Paragraph>
    <Paragraph position="1"> Premise to goal. Bayesian propagation of the belief in node 6 results in the denial of the combination of nodes 4 and 5, but yields a moderate probability for each of these nodes and for their respective parents, node 2 and node 3. Still, the probability of node 1 is quite low (i.e., there is a high belief in its negation).</Paragraph>
    <Paragraph position="2"> Reductio ad absurdum. The conditions for reductio ad absurdum are also met by this Argument Graph. That is, the negation of the goal undermines the belief in the premise (the existence of evil).</Paragraph>
    <Paragraph position="3"> Reasoning by cases (exclusive). The conditions for exclusive reasoning by cases are met by both node 4 and node 5, since they obtain middling degrees of belief duringpropagation. We 'illus- ' trate here only the argument which hinges on node 4 (the argument which hinges on node 5 is symmetrical). The two cases in the generated argument are: at this stage, because it is indicative of the length of the argument obtained after finer pruning.</Paragraph>
    <Paragraph position="4">  erated from NAG's output.</Paragraph>
    <Paragraph position="5"> node 4 is true or node 4 is false. The case which ......... assumes.:the.negation.ofr~ode 4 leads:to a.straight~ forward argument that achieves the goal. The case which asserts node 4 achieves the goal through an explain away relationship which involves nodes 4, 5 and 6 (Pearl, 1988). This relationship requires that * P(61-~4) &gt; P(6) and P(61-~5) &gt; P(6), which means that the negation of nodes 4 and 5 are potential explanations for node 6, and that P(416 &amp; 5) &lt; P(416) and P(516 &amp; 4) &lt; P(516), which means that given node 6, node 5 explains away node 4 and vice versa (Zukerman et al.; &amp;quot;'1999)':&amp;quot; -'Tl~at is~ ~ ~'sei~ir/g':'the &amp;quot; proposition in node 5 in light of node 6 greatly weakens the belief in node 4.</Paragraph>
    <Paragraph position="6"> Reasoning by cases (non-exclusive). The Strategist identifies nodes 4 and 5 as possible sources for a non-exclusive argument by cases, since the negation of each of these nodes leads to a strong belief in the goal, and P(-~4 V-~5) is high (because of their relation to node 6). The cases in the generated argument are: node 4 is false or node 5 is false.</Paragraph>
    <Paragraph position="7"> Since all these arguments are nice, the Strategist retains all of them for further processing. As stated in Section 4.4, the arguments that are substantially longer than the shortest argument (in number of nodes) are dropped from consideration. In our example, the premise to goal argument is the shortest, as it threads a path through the 6 nodes in the Argument Graph; the exclusive reasoning by cases argument is the longest, requiring 9 nodes (3 for the case where node 4 is false, 5 for the case where node 4 is true, and 1 for stating the conclusion); the non-exclusive reasoning by cases argument requires 8 nodes (3 for each case, 1 for node 6, which introduces the cases, and 1 for the conclusion); and the reductio ad absurdum argument requires 7 nodes (the 6 nodes in the Argument Graph plus 1 node for stating the conclusion). The exclusive reasoning by cases argument is dropped from consideration since it is 1.5 times longer than the shortest argument, and the non-exclusive argument is select.ed among the remaining arguments by applying our preference ordering.</Paragraph>
  </Section>
  <Section position="7" start_page="59" end_page="60" type="metho">
    <SectionTitle>
6 Preliminary Evaluation
</SectionTitle>
    <Paragraph position="0"> In order to determine the users' preferences Yor different argumentation strategies, we performed a preliminary evaluation where we presented two sets of arguments to subjects. One set contained the premise to goal and reasoning by cases arguments for the problem of evil shown in Figure 4. The second set contained a preamble which presented some background information, and a premise to goal, a reductio ad absurdum and an inference to the best explanation argument for a large asteroid striking Earth 65 million years BC (Figure 5). The argu- null Premise to goal: I &amp;quot;Evil in the world (6) implies.that God may not want to prevent evil (-~4) and that GQd:maynot ........ I be able to preveiat- egil (-45). God possibly not wantingto prevent evil (-~)t) im~liesthdl~ God C/ndb, not be benevolent (-~2). God possibly not being able to prevent evil (--,5) implies that God may not be omnipotent (-~3). The fact that God may not be benevolent (-~2) and ,that God may not be omnipotent (-~3) implies that it is very likely that God does not exist (-,1).' Reductio ad absurdum: &amp;quot;Assume that God exists (1). This implies that God is benevolent (2) and that God is omnipotent (3). God being benevolent (2) implies that God wants to prevent evil (4). God being omnipotent (3) implies that God can prevent evil (5). The fact that Godwants to prevent evil (4) and that God can prevent evil (5) implies that there is no evil in the world (-~6). But there is evil in the world (6): ,Therefore~.Go:d:doesmot exist.&amp;quot; Reasoning by cases (exclusive): &amp;quot;Consider the following cases: God wants to prevent evil (4), and God does not want to prevent evil (--14).</Paragraph>
    <Paragraph position="1"> God wants to prevent evil (4). This, together with the existence of evil (6) implies that God is not able to prevent evil (~5), which in turn implies that God is not omnipotent (-~3). This implies that God does not exist (~1).</Paragraph>
    <Paragraph position="2"> God does not want to prevent evil (~4). This implies that God is not benevolent (-~2), which in turn implies that God does not exist (~1).</Paragraph>
    <Paragraph position="3"> Either way, God does not exist (-~1).&amp;quot; Reasoning by cases (non-exclusive}: &amp;quot;Since there is evil in the world (6), God does not want to prevent evil (-~4) or God cannot prevent evil (-~5).</Paragraph>
    <Paragraph position="4"> God does not want to prevent evil (-,4). This implies that God is not benevolent (~2), which in turn implies that God does not exist (~1).</Paragraph>
    <Paragraph position="5"> God cannot prevent evil (-~5). This implies that God is not omnipotent (-~3), which in turn implies that God does not exist (-11).</Paragraph>
    <Paragraph position="6"> Either way, God does not exist (--1).&amp;quot;  ments in each set were presented in two different orders. 40 subjects read the 'problem of evil' arguments, and 35 the 'asteroid' arguments. In the former set, the distribution of preferences was uniform among the three strategies. In the latter set, premise to goal was preferred, followed by inference to the best explanation and then reductio ad absurdum (these results, which were not affected by the order of presentation, were supported by X 2 tests which were significant at the 0.01 level).</Paragraph>
    <Paragraph position="7"> At first glance it appears that premise to goal is the preferred argmnentation strategy. However, the participants' comments indicate that further experiments are required to determine the conditions under which different argumentation strategies are appropriate. For exarnple,several participants in: dicated that reductio ad absurdum arguments are appropriate when the ensuing contradiction is compelling, which tile3&amp;quot; did not find to be the case in the asteroid example. Further. they stated that they liked the premise to goal argument because it contained inorc information than the other argunmnts (which have one line of reasoning only). However.</Paragraph>
    <Paragraph position="8"> for the Problem of Evil this additional information may be less appealing for arguments that are longer than one paragraph.</Paragraph>
  </Section>
  <Section position="8" start_page="60" end_page="60" type="metho">
    <SectionTitle>
7 Conclusion
</SectionTitle>
    <Paragraph position="0"> We have offered an operational definition of the conditions for pursuing three types of argumentation strategies: hypothetical, reasoning by cases and premise to goal. We have also presented a mechanism that proposes applicable argumentation strategies based on these conditions, and generates specific arguments based on these strategies. This mechanism has been implemented in a Bayesian argument-generation system. Our evaluation also brings to notice tile need to investigate additional aspects of argumentation strategies.</Paragraph>
  </Section>
class="xml-element"></Paper>