<?xml version="1.0" standalone="yes"?>
<Paper uid="P95-1019">
  <Title>Response Generation in Collaborative Negotiation*</Title>
  <Section position="4" start_page="136" end_page="136" type="metho">
    <SectionTitle>
3 Features of Collaborative Negotiation
</SectionTitle>
    <Paragraph position="0"> Collaborative negoti~ion occurs when conflicts arise among agents developing a shared plan 1 during collaborative planning. A collaborative agent is driven by the goal of developing a plan that best satisfies the interests of all the agents as a group, instead of one that maximizes his own interest. This results in several distinctive features of collaborative negotiation: 1) A collaborative agent does not insist on winning an argument, and may change his beliefs ff another agent presents convincing justification for an opposing belief. This differentiates collaborative negotiation from argumentation (Birnbaum et al., 1980; Reichman, 1981; Cohen, 1987; Quilici, 1992). 2) Agents involved in collaborative negotiation are open and honest with one another; they will not deliberately present false information to other agents, present information in such a way as to mislead the other agents, or strategically hold back information from other agents for later use. This distinguishes collaborative negotiation from non-collaborative negotiation such as labor negotiation (Sycara, 1989). 3) Collaborative agents are interested in 1The notion of shared plan has been used in (Grosz and Sidner, 1990; Allen, 1991).</Paragraph>
    <Paragraph position="1"> others' beliefs in order to decide whether to revise their own beliefs so as to come to agreement (Chu-Carroll and Carberry, 1995). Although agents involvedin argumentation and non-collaborative negotiation take other agents' beliefs into consideration, they do so mainly to find weak points in their opponents' beliefs and attack them to win the argument.</Paragraph>
    <Paragraph position="2"> In our earlier work, we built on Sidner's proposal/acceptance and proposal/rejection sequences (Sitnet, 1994) and developed a model thaC/ captures collaborative planning processes in a Propose-Evaluate-Modify cycle of actions (Chu-Carroll and Carberry, 1994). This model views coll~tive planning as agent A proposing a set of actions and beliefs to be i~ted into the plan being developed, agent B evaluating the proposal to determine whether or not he accepts the proposal and, ff not, agent B proposing a set of modifications to A's original proposal. The proposed modifications will again be evaluated by A, and if conflicts arise, she may propose modifications to B's previously proposed modifications, resulting in a recursive process. However, our research did not specify, in cases where multiple conflicts arise, how an agent should identify which pm of an unaccept~ proposal to address or how to select evidence to support the proposed modification. This paper extends that work by i~ting into the modification process a slrategy to determine the aspect of the proposal that the agent will address in her pursuit of conflict resolution, as well as a means of selecting appropriate evidence to justify the need for such modification.</Paragraph>
  </Section>
  <Section position="5" start_page="136" end_page="141" type="metho">
    <SectionTitle>
4 Response Generation in Collaborative
Negotiation
</SectionTitle>
    <Paragraph position="0"> In order to capture the agents' intentions conveyed by their utterances, our model of collaborative negotiation utilizes an enhanced version of the dialogue model described in (Lambert and Carberry, 1991) to represent the current status of the interaction. The enhanced dialogue model has four levels: the domain level which consists of the domain plan being constructed for the user's later execution, the problem-solving level which contains the actions being performed to construct the don~n plan, the belief level which consists of the mutual beliefs pursued during the planning process in order to further the problem-solving intentions, and the discourse level which contains the communicative actions initiated to achieve the mutual beliefs (Chu-Carroll and Carberry, 1994). This paper focuses on the evaluation and modification of proposed beliefs, and details a strategy for engaging in collaborative negotiations.</Paragraph>
    <Section position="1" start_page="137" end_page="138" type="sub_section">
      <SectionTitle>
4.1 Evaluating Proposed Beliefs
</SectionTitle>
      <Paragraph position="0"> Our system maintains a set of beliefs about the domain and about the user's beliefs. Associated with each belief is a strength that represents the agent's confidence in holding that belief. We model the strength of a belief using endorsements, which are explicit records of factors that affect one's certainty in a hypothesis (Cohen, 1985), following (Galliers, 1992; Logan et al., 1994). Our endorsements are based on the semantics of the utterance used to convey a befief, the level of expertise of the agent conveying the belief, stereotypical knowledge, etc.</Paragraph>
      <Paragraph position="1"> The belief level of the dialogue model consists of mutual beliefs proposed by the agents' discourse actions.</Paragraph>
      <Paragraph position="2"> When an agent proposes a new belief and gives (optional) supporting evidence for it, this set of proposed beliefs is represented as a belief tree, where the belief represented by a child node is intended to support that represented by its parent. The root nodes of these belief trees (rap-level beliefs) contribute to problem-solving actions and thus affect the domain plan being developed. Given a set of newly proposed beliefs, the system must decide whether to accept the proposal or m initiate a negotiation dialogue to resolve conflicts. The evaluation of proposed beliefs starts at the leaf nodes of the proposed belief trees since acceptance of a piece of proposed evidence may affect acceptance of the parent belief it is intended to support. The process continues until the top-level proposed beliefs are evaluated. Conflict resolution strategies are invoked only if the top-level proposed beliefs are not accepted because if collaborative agents agree on a belief relevant to the domain plan being constructed, it is irrelevant whether they agree on the evidence for that belief (Young et al., 1994).</Paragraph>
      <Paragraph position="3"> In determining whether to accept a proposed befief or evidential relationship, the evaluator first constructs an evidence set containing the system's evidence thin supports or attacks _bcl and the evidence accepted by the system that was proposed by the user as support for -bel. Each piece of evidence contains a belief _beli, and an evidential relationship supports(.beli,-bel). Following Walker's weakest link assumption (Walker, 1992) the strength of the evidence is the weaker of the strength of the belief and the strength of the evidential relationship.</Paragraph>
      <Paragraph position="4"> The evaluator then employs a simplified version of Galliers' belief revision mechanism 2 (Galliers, 1992; Logan et al., 1994) to compare the strengths of the evidence that supports and attacks _bel. If the strength of one set of evidence strongly outweighs that of the other, the decision to accept or reject.bel is easily made. However, if the difference in their strengths does not exceed a pre-determined 2For details on how our model determines the acceptance of a belief using the ranking of endorsements proposed by GaUiers, see (Chu-Carroll, 1995).</Paragraph>
      <Paragraph position="6"> ,. ......................................................................... J Dr. Smith is not teaching AL Dr. Smith is going on sablmutical next year.</Paragraph>
      <Paragraph position="7">  threshold, the evaluator has insufficient information to determine whether to adopt _bel and therefore will initiate an information-sharing subdialogue (Cho-Carmll and Carberry, 1995) to share information with the user so that each of them can knowiedgably re-evaluate the user's original proposal. If, during infommtion-sharing, the user provides convincing support for a belief whose negation is held by the system, the system may adopt the belief after the re-evaluation process, thus resolving the conflict without negotiation.</Paragraph>
      <Paragraph position="8">  To illustrate the evaluation of proposed beliefs, consider the following uttermmes:  (1) S: 1 think Dr. Smith is teaching AI next semester.</Paragraph>
      <Paragraph position="9"> (2) U: Dr. Smith is not teaching AL (3) He is going on sabbatical next year.</Paragraph>
      <Paragraph position="10">  Figure 1 shows the belief and discourse levels of the dialogue model that captures utterances (2) and (3). The belief evaluation process will start with the belief at the leaf node of the proposed belief txee, On.Sabbatical(Smith, next year)). The system will first gather its evidence pe~aining to the belief, which includes I) a warranted belief ~ that Dr. Smith has postponed his sabbatical until 1997 (Postponed-Sabbatical(Smith, J997)), 2) a warranted belief that Dr. Smith postponing his sabbatical until 1997 supports the belief that he is not going on sabbatical next year (supports(Postponed-Sabbatical(Smith,1997), -~On-SabbaticaI(Smith, next year)), 3) a strong belief that Dr. Smith will not be a visitor at IBM next year (-~visitor(Smith, IBM, next year)), and 4) a warranted belief that Dr. Smith not being a visitor at IBM next aThe strength of a belief is classified as: warranted, strong, or weak, based on the endorsement of the belief.</Paragraph>
      <Paragraph position="11">  year supports the belief that he is not going on sabbatical next year (supports(-~visitor(Smith, IBM, next year), -,On-Sabbatical(Smith, next year)), perhaps because Dr. Smith has expressed his desire to spend his sabbatical only at IBM). The belief revision mechanism will then be invoked to determine the system's belief about On-Sabbatical(Smith, next year) based on the system's own evidence and the user's statement. Since beliefs (1) and (2) above constitute a warranted piece of evidence against the proposed belief and beliefs (3) and (4) constitute a strong piece of evidence against it, the system will not accept On-Sabbatical(Smith, next year).</Paragraph>
      <Paragraph position="12"> The system believes that being on sabbatical implies a faculty member is not teaching any courses; thus the proposed evidential relationship will be accepted. However, the system will not accept the top-level proposed belief, -,Teaches(Smith, A/), since the system has a prior belief to the contrary (as expressed in utterance ( 1 )) and the only evidence provided by the user was an implication whose antecedent was not accepted.</Paragraph>
    </Section>
    <Section position="2" start_page="138" end_page="141" type="sub_section">
      <SectionTitle>
4.2 Modifying Unaccepted Proposals
</SectionTitle>
      <Paragraph position="0"> The collaborative planning principle in (Whittak~ and Stenton, 1988; Walker, 1992) suggests that &amp;quot;conversants must provide evidence of a detected discrepancy in belief as soon as possible.&amp;quot; Thus, once an agent detects a relevant conflict, she must notify the other agent of the conflict and initiate a negotiation subdialogne to resolve it-to do otherwise is to fail in her responsibility as a collaborative agent. We capture the attempt to resolve a conflict with the problem-solving action Modify-Proposal, whose goal is to modify the proposal to a form that will potentially be accepted by both agents. When applied to belief modification, Modify-Proposal has two specializations: Correct-Node, for when a proposed belief is not accepted, and Correct-Relation, for when a proposed evidential relationship is not accepted. Figure 2 shows the problem-solving recipes 4 for Correct-Node and its subaction, Modify-Node, that is responsible for the actual modification of the proposal. The applicability conditions 5 of Correct-Node specify that the action can only be invoked when _sl believes that _node is not acceptable while _s2 believes that it is (when _sl and _s2 disagree about the proposed belief represented by ..node). However, since this is a collaborative interaction, the actual modification can only be performed when both ..sl and _s2 believe that _node is not acceptable w that is, the conflict between _sl and .s2 must have been resolved. This is captured by 4A recipe (Pollack, 1986) is a template for performing actions. It contains the applicabifity conditions for performing an action, the subactions comprising the body of an action, etc.</Paragraph>
      <Paragraph position="1"> SApplicabflity conditions are conditions that must already be satisfied in order for an action to be reasonable to pursue, whereas an agent can try to achieve unsatisfied preconditions.</Paragraph>
      <Paragraph position="2">  the applicability condition and precondition of Mod/fy-Node. ~ attempt to satisfy the precondition causes the system to post as a mutual belief to be achieved the belief that ..node is not acceptable, leading the system to adopt discourse actions to change _s2's beliefs, thus initiating a collaborative negotiation subdialogne, e 4.2,1 Selecting the Focus of Modification When multiple conflicts arise between the system and the user regarding the user's proposal, the system must identify the aspect of the proposal on which it should focus in its pursuit of conflict resolution. For example, in the case where Correct-Node is selected as the specialization of Modify-Proposal, the system must determine how the parameter node in Correct-Node should be instantiated. The goal of the modification process is to resolve the agents' conflicts regarding the unaccepted top-level proposed beliefs. For each such belief, the system could provide evidence against the befief itself, address the unaccepted evidence proposed by the user to eliminate the user's justification for the belief, or both. Since collaborative agents are expected to engage in effective and efficient dialogues, the system should address the unaccepted belief that it predicts will most quickly resolve the top-level conflict. Therefore, for each unaccepted top-level belief, our process for selecting the focus of modificatkm involves two steps: identifying a candidate foci tree from the proposed belief tree, and selecting a eThis subdialogue is considered an interrupt by Whittaker, Stenton, and Walker (Whittaker and Stenton, 1988; Walker and Whittaker, 1990), initiated to negotiate the truth of a piece of information. However, the utterances they classify as interrupts include not only our negotiation subdialogues, generated for the purpose of modifying a proposal, but also clarification subdialogues, and information-sharing subdialogues (Chu-Carroll and Carberry, 1995), which we contend should be part of the evaluation process.</Paragraph>
      <Paragraph position="3">  focus from the candidate foci tree using the heuristic &amp;quot;attack the belief(s) that will most likely resolve the conflict about the top-level belief.&amp;quot; A candidate loci tree contains the pieces of evidence in a proposed belief tree which, if disbelieved by the user, might change the user's view of the unaccepted top-level proposed belief (the root node of that belief tree). It is identified by performing a depth-first search on the proposed belief tree. When a node is visited, both the belief and the evidential relationship between it and its parent are examined. If both the belief and relationship were accepted by the evaluator, the search on the current branch will terminate, since once the system accepts a belief, it is irrelevant whether it accepts the user's support for that belief (Young et al., 1994).</Paragraph>
      <Paragraph position="4"> Otherwise, this piece of evidence will be included in the candidate loci tree and the system will continue to search through the evidence in the belief tree proposed as support for the unaccepted belief and/or evidential relationship.</Paragraph>
      <Paragraph position="5"> Once a candidate foci tree is identified, the system should select the focus of modification based on the likelihood of each choice changing the user's belief about the top-level belief. Figure 3 shows our algorithm for this selection process. Given an unaccept~ belief (.bel) and the beliefs proposed to support it, Select-Focus.</Paragraph>
      <Paragraph position="6"> Modification will annotate_bel with 1) its focus of modification (.bel.focus), which contains a set of beliefs (.bel and/or its descendents) which, if disbelieved by the user, are predicted to cause him to disbelieve _bel, and 2) the system's evidence against_bel itself (_hel.s-attack).</Paragraph>
      <Paragraph position="7"> Select-Focus-Modification determines whether to attack _bel's supporting evidence separately, thereby eliminating the user's reasons for holding ..bC/l, to atta~ ..bel itself, or both. However, in evainating the effectiveness of attacking the proposed evidence for.bel, the system must determine whether or not it is possible to successfully refute a piece of evidence (i.e., whether or not the system believes that sufficient evidence is available to convince the user that a piece of proposed evidence is invalid), and if so, whether it is mote effective to attack the evidence itself or its support. Thus the algorithm recursively applies itself to the evidence proposed as support for _bel which was not accepted by the system (step 3). In this recursive process, the algorithm annotates each unaccepted belief or evidential relationship proposed to support _bel with its focus of modification (-beli.focus) and the system's evidence against it (_beli.s-attack). _bell.focus contains the beliefs selected to be addressed in order to change the user's belief about ..beli, and its value will be nil if the system predicts that insufficient evidence is available to change the user's belief about -bell.</Paragraph>
      <Paragraph position="8"> Based on the information obtained in step 3, Select.</Paragraph>
      <Paragraph position="9"> Focus-Modification decides whether to attack the evidence proposed to support _bel, or _bel itself (step 4).</Paragraph>
      <Paragraph position="10"> Its preference is to address the unaccepted evidence, be- null Select .Focus-Modlflcatlon(_bel): 1. _bel.u-evid +-- system's beliefs about the user's evidence pertaining to _bel _bel.s-attack 4- system's own evidence against _bel 2. If _bel is a leaf node in the candidate foci tree, 2.1 If Predict(_bel, _bel.u-evid + _bel.s-attack) = -~_bel then _bel.focus ,-- .bel; return 2.2 Else .bel.focus t- nil; return 3. Select focus for each of .bel's children in the candidate foci tree, .belx ..... ..bel,~: 3.1 If supports(_beli,_bel) is accepted but .beli is not, Select-Focus-Modlficatioa(.bel~ ).</Paragraph>
      <Paragraph position="11"> 3.2 Else if .beli is accepted but supports(_beli,.bel) is not, Sdect-Focus-Modlficatlon(.beli,.bel).</Paragraph>
      <Paragraph position="12"> 3.3 Else Select-Focu-Modificatioa(.bel~) and Select-Focus-Modification( supports(_beli ,.bel)) 4. Choose between attacking the Woposed evidence for .bel and attacking ..bel itself:  cause McKeown's focusing rules suggest that continuing a newly introduced topic (about which there is more to be said) is preferable to returning to a previous topic OVIcKcown, 1985). Thus the algorithm first considers whether or not attacking the user's support for ..bel is sufficient to convince him of--,-bel (step 4.2). It does so by gathering (in cand-set) evidence proposed by the user as direct support for _bel but which was not accepted by the system and which the system predicts it can successfully refute (i.e., =beli.focus is not nil). The algorithm then hypothesizes that the user has changed his mind about each belief in cand-set and predicts how this will affect the user's belief about .bel (step 4.2). If the user is predicted to accept --,..bel under this hypothesis, the algorithm invokes Select-Min-Set to select a minimum subset of cand-set as the unaccepted beliefs that it would actually pursue, and the focus of modification (..bel.focus) will be the union of  the focus for each of the beliefs in this minimum subset.</Paragraph>
      <Paragraph position="13"> If attacking the evidence for _bel does not appear to be sufficient to convince the user of -~_bel, the algorithm checks whether directly attacking _bel will accomplish this goal. If providing evidence directly against _bel is predicted to be successful, then the focus of modification is _bcl itself (step 4.3). If directly attacking _bel is also predicted to fail, the algorithm considers the effect of attacking both ..bel and its unaccepted proposed evidence by combining the previous two prediction processes (step 4.4). If the combined evidence is still predicted to fail, the system does not have sufficient evidence to change the user's view of_bel; thus, the focus of modification for .bel is nil (step 4.5). 7 Notice that steps 2 and 4 of the algorithm invoke a function, Predict, that makes use of the belief revision mechanism (Galliers, 1992) discussed in Section 4.1 to predict the user's acceptance or unacceptance of..bel based on the system's knowledge of the user's beliefs and the evidence that could be presented to him (Logan et al., 1994). The result of Select-Focus-Modification is a set of user beliefs (in _bel.focus) that need to be modified in order to change the user's belief about the unaccepted top-level belief. Thus, the negations of these beliefs will be posted by the system as mutual beliefs to be achieved in order to perform the Mod/fy actions.</Paragraph>
      <Paragraph position="14">  Studies in communication and social psychology have shown that evidence improves the persuasiveness of a message (Luchok and McCroskey, 1978; Reynolds and Burgoon, 1983; Petty and Cacioppo, 1984; Hampie, 1985). Research on the quantity of evidence indicates that there is no optimal amount of evidence, but that the use of high-quality evidence is consistent with persuasive effects (Reinard, 1988). On the other hand, Cn'ice's maxim of quantity (Grice, 1975) specifies that one should not contribute more information than is required, s Thus, it is important that a collaborative agent selects suffmient and effective, but not excessive, evidence to justify an intended mutual belief.</Paragraph>
      <Paragraph position="15"> To convince the user ofa belief,_bel, our system selects appropriate justification by identifying beliefs that could 7In collaborative dialogues, an agent should reject a proposal only ff she has strong evidence against it. When an agent does not have sufficient information to determine the acceptance of a proposal, she should initiate an information-sharing subdialogue to share information with the other agent and re-evaluate the proposal (Chu-Carroll and Carberry, 1995). Thus, further research is needed to determine whether or not the focus of modification for a rejected belief will ever be nil in collaborative dialogues.</Paragraph>
      <Paragraph position="16"> sWalker (1994) has shown the importance of IRU's Odormationally Redundant Utterances) in efficient discourse. We leave including appropriate IRU's for future work.</Paragraph>
      <Paragraph position="17"> be used to support_bel and applying filtering heuristics to them. The system must first determine wbether justification for_bel is needed by predicting whether or not merely informing the user of _bel will be sufficient to convince him of _bel. If so, no justification will be presented. If justification is predicted to be necessary, the system will first construct the justification chains that could be used to support _bel. For each piece of evidence t~t could be used to directly support ..bel, the system first predicts whether the user will accept the evidence without justification. If the user is predicted not to accept a piece of evidence (evidi), the system will augment the evidence to be presented to the user by posting evidi as a mutual belief to be achieved, and selecting propositions that could serve as justification for it. This results in a recursive process that returns a chain of belief justifications that could be used to support.bel.</Paragraph>
      <Paragraph position="18"> Once a set of beliefs forming justification chains is identified, the system must then select from this set those belief chains which, when presented to the user, are predicted to convince the user of .bel. Our system will first construct a singleton set for each such justification chain and select the sets containing justification which, when presented, is predicted to convince the user of _bel. If no single justification chain is predicted to be sufficient to change the nser's beliefs, new sets will be constructed by combining the single justification chains, and the selection ~ is repeated. This will produce a set of possible candidate justification chains, and three heuristics will then be applied to select from among them. The first heuristic prefers evidence in which the system is most confident since high-quality evidence produces more attitude change than any other evidence form (Luchok and McCroskey, 1978). Furthermore, the system can better justify a belief in which it has high confidence should the user not accept it. The second heuristic prefers evidence that is novel to the user, since studies have shown that evidence is most persuasive ff it is previously unknown to the hearer (Wyer, 1970; Morley, 1987). The third heuristic is based on C.nice's maxim of quantity and prefers justification chains that contain the fewest beliefs.</Paragraph>
      <Paragraph position="19">  After the evaluation of the di~ogue model in Figure 1, Modify-Proposal is invoked because the top-level proposed belief is not accepted. In selecting the focus of modification, the system will first identify the candidate foci tree and then invoke the Select-Focus-Modification algorithm on the belief at the root node of the candidate foci tree. The candidate foci tree will be identical to the proposed belief tree in Figure 1 since both the top-level proposed belief and its proposed evidence were rejected during the evaluation process. This indicates that the focus of modification could be either -~Teaches(Smith,AI)  or On-Sabbatical(Smith, next year) (since the evidential relationship between them was accepted). When Select-Focus-Modification is applied to --,Teaches(Smith,Al), the algorithm will first be recursively invoked on On-Sabbatical(Smith, next year) to determine the focus for modifying the child belief (step 3.1 in Figure 3). Since the system has two pieces of evidence against On-Sabbatical(Smith, next year), 1) a warranted piece of evidence containing Postponed-Sabbatical(Smittg1997) and supports( Postponed-Sabbatical(Smith,1997),-,On-Sabbatical(Smith, next year)), and 2) a strong piece of evidence containing --,visitor(Smith, IBM, next year) and supports(-,visitor(Smith, IBM, next year),-,On-Sabbatical(Smith, next year)), the evidence is predicted to be sufficient to change the user's belief in On-Sabbatical(Smith, next year), and hence -,Teaches(Smith, A1); thus, the focus of modification will be On-Sabbatical(Smith, next year). The Correct-Node specialization of Modify-Proposal will be invoked since the focus of modification is a belief, and in order to satisfy the precondition of Modify.Node (Figure 2), MB( S, U, -~ On-Sabbatical(Smith, next year)) will be posted as a mutual belief to be achieved.</Paragraph>
      <Paragraph position="20"> Since the user has a warranted belief in On-Sabbatical(Smith, next year) ('indicated by the semantic form of utterance (3)), the system will predict th~ merely informing the user of the intended mutual belief is not sufficient to change his belief; therefore R will select justificatkm from the two available pieces of evidence supporting -,On.Sabbatical(Smith, next year) presented earlier. The system will predict that either piece of evidence combined with the proposed mutual belief is sufficient to change the user's belief; thus, the filtering heuristics are applied. The first heuristic will cause the system to select Postponed.Sabbatical(Smith, 1997) and supports(Postponed-Sabbatical(Smith, 1997),-,On-Sabbatical(Smith, next year)) as support, since it is the evidence in which the system is more confident.</Paragraph>
      <Paragraph position="21"> The system will try to establish the mutual beliefs 9 as an attempt to satisfy the precondition of Modify-Node.</Paragraph>
      <Paragraph position="22"> This will cause the system to invoke Inform cKscourse actions to generate the following utterances:  (4) S: Dr. Smith is not going on sabbatical next year.</Paragraph>
      <Paragraph position="23"> (5) He postponed his sabbatical until 199Z  If the user accepts the system's utterances, thus satisfying the precondition that the conflict be resolved, Modify-Node can be performed and changes made to the original proposed beliefs. Otherwise, the user may propose mod9Only MB( S, U, Postponed-Sabbatical( Smith, 1997)) will be proposed as justification because the system believes that the evidential relationship needed to complete the inference is held by a stereotypical user.</Paragraph>
      <Paragraph position="24"> ifications to the system's proposed modifications, resulting in an embedded negotiation sub4iaJogue.</Paragraph>
    </Section>
  </Section>
class="xml-element"></Paper>