<?xml version="1.0" standalone="yes"?> <Paper uid="J98-3002"> <Title>Collaborative Response Generation in Planning Dialogues</Title>
<Section position="8" start_page="391" end_page="397" type="concl"> <SectionTitle> 8. Discussion </SectionTitle> <Paragraph position="0"/>
<Section position="1" start_page="391" end_page="393" type="sub_section"> <SectionTitle> 8.1 Generality of the Model </SectionTitle>
<Paragraph position="0"> The response generation strategies presented in this paper are independent of the application domain and can be applied to other collaborative planning applications. We will illustrate the generality of our model by showing how, with appropriate domain knowledge, it can generate the turns of dialogues that have been analyzed by other researchers.</Paragraph>
<Paragraph position="1"> First, consider the following dialogue segment, where H (a financial advisor) and J (an advice-seeker) are discussing whether J is eligible for an IRA for 1981 (Walker [1996a], in turn taken from Harry Gross Transcripts [1982]): (27) H: There's no reason why you shouldn't have an IRA for last year (1981).</Paragraph>
<Paragraph position="2"> (28) J: Well I thought they just started this year.</Paragraph>
<Paragraph position="3"> (29) H: Oh no.</Paragraph>
<Paragraph position="4"> (30) IRA's were available as long as you are not a participant in an existing pension.</Paragraph>
Figure 16: Assumed knowledge of dialogue participants in utterances (27) to (33)
  Speaker       Belief                                               Strength
  H (expert)    H1: J is eligible for an IRA in 1981                 strong
                H2: IRA is available as long as no pension           warranted
  J (advisee)   J1: IRA started in 1982                              weak
                J2: J worked for a company with a pension in 1981    warranted
<Paragraph position="5"> (31) J: Oh I see.</Paragraph>
<Paragraph position="6"> (32) Well I did work I do work for a company that has a pension. (33) H: Ahh. Then you're not eligible for 81.</Paragraph>
<Paragraph position="7"> Let us suppose that H's and J's private beliefs are as shown in Figure 16, which we believe to be reasonable assumptions given the roles of the participants and the content and form of the utterances in the dialogue. In utterance (27), H proposes the belief that J should be eligible for an IRA in 1981. J's weak belief that IRA's started in 1982 leaves her uncertain about whether to accept H's proposal in (27); thus J initiates information-sharing using the Invite-Attack strategy and presents belief J1 in utterance (28). H rejects J's proposal from (28) because of his warranted belief H2; this rejection is conveyed in (29), and H provides counterevidence in (30). J accepts H's modification of her proposal in (31) and re-evaluates H's original proposal from utterance (27), taking into account the new information from (30). This leads J to reject H's original proposal by stating her evidence for rejection in (32) (see footnote 25). In utterance (33), H accepts J's proposal from utterance (32), and both agents come to agreement that J is not eligible for an IRA in 1981.</Paragraph>
<Paragraph position="8"> As we noted in Section 3.1, Walker classified utterance (28) as a rejection. We believe that our treatment of utterance (28) as conveying uncertainty and initiating information-sharing better accounts for the overall dialogue. In our model, utterances (28)-(31) constitute an information-sharing subdialogue, with utterances (29)-(31) forming an embedded negotiation subdialogue.</Paragraph>
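To make the analysis above concrete, the sketch below shows one way the belief strengths of Figure 16 could drive the three-way accept/reject/uncertain decision that the dialogue turns on. This is a minimal Python illustration, not CORE's actual implementation: the names `Belief`, `Strength`, and `evaluate_proposal` and the decision thresholds are our own assumptions.

```python
from dataclasses import dataclass
from enum import IntEnum

class Strength(IntEnum):
    WEAK = 1
    STRONG = 2
    VERY_STRONG = 3
    WARRANTED = 4  # treated as "certain" in the appendix examples

@dataclass
class Belief:
    proposition: str
    strength: Strength
    negated: bool = False  # True if the agent believes the negation

def evaluate_proposal(proposal: str, private_beliefs: list) -> str:
    """Return 'accept', 'reject', or 'uncertain' for a proposed belief.

    Assumed rule: strong-or-better counterevidence triggers rejection
    (and hence negotiation); merely weak counterevidence leaves the
    agent uncertain, triggering information-sharing instead.
    """
    against = [b for b in private_beliefs
               if b.proposition == proposal and b.negated]
    if against and max(b.strength for b in against) >= Strength.STRONG:
        return "reject"     # initiate collaborative negotiation
    if against:
        return "uncertain"  # initiate information-sharing (e.g., Invite-Attack)
    return "accept"

# J evaluates H's proposal in (27). J1 ("IRA started in 1982") is
# flattened here into direct weak counterevidence against the proposal.
j1 = Belief("J is eligible for an IRA in 1981", Strength.WEAK, negated=True)
print(evaluate_proposal("J is eligible for an IRA in 1981", [j1]))  # uncertain
```

Under this assumed rule, J's weak counterevidence yields "uncertain" rather than "reject", which is exactly what licenses the Invite-Attack move in (28).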
<Paragraph position="9"> Next, consider the following dialogue segment between a user and a librarian, from Logan et al. (1994): (34) U: I am looking for books on the architecture of Michelangelo. (35) L: I thought Michelangelo was an artist.</Paragraph>
<Paragraph position="10"> (36) U: He was also an architect.</Paragraph>
<Paragraph position="11"> (37) He designed St. Peter's in Rome.</Paragraph>
<Paragraph position="12"> (38) L: Ok, ...</Paragraph>
<Paragraph position="13"> Footnote 25: Using CORE's current response generation mechanism, it would have explicitly stated its rejection of the main belief as follows: I am not eligible for an IRA for last year, since I work for a company that has a pension. However, it would be only a minor alteration to CORE's algorithms to allow for exclusive generation of implicit rejections of proposals. On the other hand, allowing for both implicit and explicit rejection of proposals and selecting between them during the generation process requires further reasoning, and we leave this for future work.</Paragraph>
<Paragraph position="14"> Here we assume that L has a weak belief that Michelangelo was an artist (L1) and a very strong belief that if a person is an artist, he is not an architect (L2), while U has a very strong belief that Michelangelo is both an artist and an architect (U1). These beliefs are consistent with those expressed in utterances (34)-(38). L initiates information-sharing after U's proposal in (34) because of a weak piece of evidence against it, which consists of beliefs L1 and L2; thus in utterance (35) L invites U to address her counterevidence. U accepts L's proposal that Michelangelo was an artist, but rejects the implicit proposal that Michelangelo being an artist implies that he is not an architect. Thus U initiates collaborative negotiation by presenting a modified belief in (36) and justifying it in (37), which leads to L accepting these proposed beliefs in (38).</Paragraph> </Section>
<Section position="2" start_page="393" end_page="393" type="sub_section"> <SectionTitle> 8.2 Contributions </SectionTitle>
<Paragraph position="0"> As illustrated by the dialogues in the previous section, our work provides a domain-independent overall framework for modeling collaborative planning dialogues. Instead of treating each proposal as either accepted (and incorporated into the agents' shared plan/beliefs) or rejected (and deleted from the stack of open beliefs), our framework allows a proposal to be under negotiation. Furthermore, the model is recursive in that the Modify action itself contains a full Propose-Evaluate-Modify cycle, allowing the model to capture situations in which embedded negotiation subdialogues arise in a natural and elegant fashion.</Paragraph>
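Since this recursion is the load-bearing part of the framework, a schematic rendering may help. The sketch below is our own hypothetical encoding of the Propose-Evaluate-Modify control structure, with the evaluator and modifier passed in as stand-ins; it is not the paper's algorithm, only its shape.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Proposal:
    belief: str

def propose_evaluate_modify(
    proposal: Proposal,
    evaluate: Callable[[Proposal], str],
    modify: Callable[[Proposal], Proposal],
) -> Proposal:
    """One Propose-Evaluate-Modify cycle (hypothetical rendering).

    The structural point from Section 8.2: the Modify action itself
    contains a full embedded cycle, so each counterproposal is again
    open to negotiation. This is what yields embedded negotiation
    subdialogues such as (29)-(31) inside (28)-(31).
    """
    if evaluate(proposal) == "accept":
        return proposal  # incorporated into the agents' shared plan/beliefs
    counter = modify(proposal)                                   # propose a modification ...
    return propose_evaluate_modify(counter, evaluate, modify)    # ... and negotiate it

# Toy run: the evaluator accepts once a caveat has been attached.
accept_if_qualified = lambda p: "accept" if "no pension" in p.belief else "reject"
add_caveat = lambda p: Proposal(p.belief + ", provided there is no pension")
final = propose_evaluate_modify(
    Proposal("J is eligible for an IRA for 1981"), accept_if_qualified, add_caveat
)
print(final.belief)
```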
<Paragraph position="1"> Our work also addresses two issues: 1) how the system should determine whether to accept or reject a proposal made by the user, and what it should do when it remains uncertain about whether to accept; and 2) when a relevant conflict is detected in a proposal from the user, how the system should go about resolving the conflict. To address the first issue, our information-sharing mechanism allows the system to focus on those beliefs that it believes will most effectively resolve its uncertainty about the proposal and to select an appropriate information-sharing strategy. To our knowledge, our model is the only response generation system to date that allows the system to postpone its decision about the acceptance of a proposal and to initiate information-sharing in an attempt to arrive at a decision.</Paragraph>
<Paragraph position="2"> To address the second issue, we developed a conflict resolution mechanism that allows the system to initiate collaborative negotiation with the user to resolve their disagreement about the proposal. Our conflict resolution mechanism allows the system to focus on those beliefs that it believes will most effectively and efficiently resolve the agents' conflict about the proposal, and to select what it believes to be sufficient, but not excessive, evidence to justify its claims. Logan et al. (Logan et al. 1994; Cawsey et al. 1993) developed a dialogue system that is capable of determining whether or not evidence should be included to justify the rejection of a single proposed belief. Our system improves upon theirs by providing a means of dealing with situations in which multiple conflicts arise and those in which multiple pieces of evidence are available to justify a claim.</Paragraph> </Section>
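The "sufficient, but not excessive" criterion can be pictured as a small greedy selection over candidate evidence. The ranking rule and the `Evidence` fields below are assumptions of ours (the actual Select-Justification algorithm is given earlier in the paper); the toy data loosely mirror Question 1 of the appendix.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    proposition: str
    strength: int        # ordinal: higher = stronger
    novel_to_user: bool  # the paper notes a preference for evidence novel to the user

def select_justification(candidates: list, needed: int) -> list:
    """Pick a sufficient-but-not-excessive justification set (hypothetical rule).

    Rank candidates so that novel, strong evidence comes first, then take
    pieces until their combined strength reaches the amount `needed` to
    overturn the disputed belief; anything beyond that would be excessive.
    """
    ranked = sorted(candidates,
                    key=lambda e: (e.novel_to_user, e.strength), reverse=True)
    chosen, total = [], 0
    for e in ranked:
        if total >= needed:
            break
        chosen.append(e)
        total += e.strength
    return chosen

# Toy run: three pieces of counterevidence are available against
# "Dr. Seltzer is going on sabbatical in 1998", but one strong novel
# piece already suffices, so only it is selected.
pool = [
    Evidence("Dr. Seltzer has not been given tenure", 3, True),
    Evidence("Dr. Seltzer will be on sabbatical in 1999", 3, False),
    Evidence("Dr. Seltzer will be graduate program chair in 1998", 2, True),
]
for e in select_justification(pool, needed=3):
    print(e.proposition)
```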
<Section position="3" start_page="393" end_page="395" type="sub_section"> <SectionTitle> 8.3 Future Work </SectionTitle>
<Paragraph position="0"> There are several directions in which our response generation framework must be extended. First, we have focused on identifying information-sharing and conflict resolution strategies for content selection in the response generation process. For text structuring, we used the simple strategy of presenting claims before their justification. However, Cohen analyzed argumentative texts and found variation in the order in which claims and their evidence are presented (Cohen 1987). Furthermore, we do not consider situations in which a piece of evidence may simultaneously provide support for two claims. Since text structure can influence coherence and focus, we must investigate appropriate mechanisms for determining the structure of a response containing multiple propositions. In addition, we must identify appropriate syntactic forms for expressing each utterance (such as a surface negative question versus a declarative statement), identify when cue words should be employed, and use a sentence realizer to produce actual English utterances.</Paragraph>
<Paragraph position="3"> Our Select-Justification algorithm assumes that all information known to the user can be accessed by the user without difficulty during his interaction with the system; thus it prefers selecting evidence that is novel to the user over selecting evidence already known to the user. However, Walker has argued that, when resource limitations and processing costs are taken into account, effective use of IRUs (informationally redundant utterances) can reduce effort during collaborative planning and negotiation (Walker 1996b). It is thus important to investigate how resource limitations and processing costs may affect our process for conflict resolution, in terms of both the selection of the belief(s) to address and the selection of the evidence needed to refute the belief(s). In addition, we must investigate when to convey propositions implicitly rather than explicitly, as was the case in utterance (32) of the IRA dialogue in Section 8.1.</Paragraph>
<Paragraph position="4"> Two assumptions made in this paper regarding the relationships between proposed beliefs are 1) that proposed beliefs can always be represented in a tree structure, i.e., each time a belief is proposed, it is intended as support for only one other belief, and 2) that an agent cannot provide both evidence to support a belief and evidence to attack it in the same turn during the dialogue. Relaxing the first assumption complicates the selection of focus during both the modification and information-sharing processes. For instance, consider the proposed belief structure in Figure 17 (Figure 17: Example of a belief playing multiple roles), in which belief D is intended as support for both A and B. Suppose that the system evaluates the proposal and rejects all of the proposed beliefs A, B, C, D, and E. In selecting the focus of modification, should the system now prefer addressing D, because its resolution will potentially resolve the conflict about both A and B? What if D is the belief against which the system has the least evidence? We are interested in investigating how the current algorithms for conflict resolution and information-sharing will need to be modified to accommodate such belief structures.</Paragraph>
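The Figure 17 question can be made concrete with a toy belief graph. The scoring heuristic below is purely illustrative: it is one possible answer to the question just posed, not a claim about how CORE's Select-Focus-Modification would behave.

```python
# A minimal sketch of the Figure 17 situation: the proposed beliefs form
# a DAG rather than a tree, because D is offered as support for both A
# and B. The heuristic below (ours, not CORE's) scores each rejected
# belief by how many rejected ancestors its refutation could help resolve.
supports = {          # child -> parents it is intended to support
    "C": ["A"],
    "D": ["A", "B"],
    "E": ["B"],
}
rejected = {"A", "B", "C", "D", "E"}

def ancestors(node: str) -> set:
    """All beliefs that `node` (transitively) supports."""
    out = set()
    for parent in supports.get(node, []):
        out.add(parent)
        out |= ancestors(parent)
    return out

def focus_of_modification() -> str:
    # Prefer the rejected belief whose refutation touches the most
    # rejected ancestors; D wins because it supports both A and B.
    return max(rejected, key=lambda n: len(ancestors(n) & rejected))

print(focus_of_modification())  # D
```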
<Paragraph position="5"> Relaxing the second assumption, on the other hand, affects the evaluation and information-sharing processes. For instance, in the following dialogue segment, the speaker utilizes a generalized version of the Invite-Attack strategy to present evidence both for and against the main belief: A: I think Dr. Smith is going on sabbatical next year. I heard he was offered a visiting position at Bell Labs, but then again I heard he's going to be teaching AI next semester. Further research is needed to determine how the current evaluation process should be altered to handle dialogues such as the above. In particular, we are interested in investigating how uncertainty about a piece of proposed evidence should affect the evaluation of the belief that it is intended to support, as well as how the selection of the focus of information-sharing should be affected when a single turn can simultaneously provide evidence both for and against a belief.</Paragraph>
<Paragraph position="6"> Finally, in our current work we have focused on task-oriented collaborative planning dialogues in which the agents explore only one plan at a time, and we have shown how our Propose-Evaluate-Modify framework is capable of modeling such dialogues. Although this constraint did not seem to pose any problems in the collaborative planning dialogues we analyzed, in certain other domains, such as appointment scheduling, the agents may be more likely to explore several options at once instead of focusing on only one option at a time (Rosé et al. 1995). We are interested in investigating how our Propose-Evaluate-Modify framework can be extended to account for such discourse with multiple threads. In particular, we are interested in finding out whether the Propose-Evaluate-Modify framework should be revised so that a single instance of the cycle (allowing for recursion) may model such discourse, or whether each thread should be modeled by its own instance of the Propose-Evaluate-Modify cycle, with an overarching structure developed to model the interaction among the multiple cycles.</Paragraph> </Section>
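The second option raised above, one Propose-Evaluate-Modify instance per thread plus an overarching structure, might be sketched as follows. All names here are hypothetical; this is an illustration of the bookkeeping such a structure would need, not a committed design.

```python
# Hypothetical sketch: each option under discussion gets its own
# Propose-Evaluate-Modify cycle (a Thread), and an overarching manager
# tracks which threads are open, accepted, or abandoned.
class Thread:
    def __init__(self, option: str):
        self.option = option
        self.status = "open"  # open | accepted | abandoned

class MultiThreadNegotiation:
    def __init__(self):
        self.threads = {}

    def propose(self, option: str) -> None:
        self.threads[option] = Thread(option)

    def resolve(self, option: str, accepted: bool) -> None:
        self.threads[option].status = "accepted" if accepted else "abandoned"

    def open_options(self) -> list:
        return [t.option for t in self.threads.values() if t.status == "open"]

# E.g., appointment scheduling with several candidate slots alive at once:
dialogue = MultiThreadNegotiation()
dialogue.propose("meet Tuesday 2pm")
dialogue.propose("meet Wednesday 10am")
dialogue.resolve("meet Tuesday 2pm", accepted=False)
print(dialogue.open_options())  # ['meet Wednesday 10am']
```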
<Section position="4" start_page="395" end_page="397" type="sub_section"> <SectionTitle> 8.4 Concluding Remarks </SectionTitle>
<Paragraph position="0"> This paper has presented a model for response generation in collaborative planning dialogues. Our model improves upon previous response generation systems by specifying strategies for content selection aimed at resolving (potential) conflicts. It includes both algorithms for information-sharing, for when the system is uncertain about whether to accept a proposal by the user, and algorithms for conflict resolution, for when the system rejects a proposal. The overall model is captured in a recursive Propose-Evaluate-Modify framework that can handle embedded subdialogues.</Paragraph>
<Paragraph position="1"> A. Appendix: Sample Dialogue from Evaluation Questionnaire</Paragraph>
<Paragraph position="2"> In this section, we include a sample dialogue from the questionnaire given to our judges for the evaluation of CORE, discussed in Section 7.2. The dialogue is annotated to indicate the primary purpose of its inclusion in the questionnaire, CORE's response in each dialogue segment, and how CORE's response generation strategies were modified to generate each alternative response. These annotations are included as comments (surrounded by /* and */) and were not available to the judges during the evaluation process.</Paragraph>
<Paragraph position="3"> Question 1 /* This dialogue corresponds to CN1 in Section 7.2. The primary purpose of this dialogue segment is to evaluate the strategies adopted by the Select-Focus-Modification algorithm */</Paragraph>
<Paragraph position="4"> Suppose that in a previous dialogue, CORE has proposed that the professor of CS481 (an AI course) is Dr. Seltzer, and that the user responds by giving the following 4 utterances in a single turn:
(utt 1.1) U: The professor of CS481 is not Dr. Seltzer.
(utt 1.2) Dr. Seltzer is going on sabbatical in 1998.
(utt 1.3) Dr. Seltzer has been at the university for 6 years.
(utt 1.4) Also, I think Dr. Seltzer's expertise is computer networks.</Paragraph>
<Paragraph position="5"> The user's utterances are interpreted as follows: * Main belief: a strong belief in ~professor(CS481,Seltzer) (utt 1.1). ... (utt 1.4).</Paragraph>
<Paragraph position="6"> CORE's private knowledge that affects its acceptance/rejection of the beliefs proposed in each user utterance (noted in boldface below) is as follows:
(Utt 1.1): a certain belief that the professor of CS481 is Dr. Seltzer. /* certain = warranted */
(Utt 1.2): (1) a very strong belief that Dr. Seltzer has not been given tenure (and therefore is not going on sabbatical in 1998), (2) a very strong belief that Dr. Seltzer will be going on sabbatical in 1999 (and hence is not going on sabbatical in 1998), and (3) a strong belief that Dr. Seltzer will be the CS graduate program chair in 1998 (and thus will not go on sabbatical in 1998).
(Utt 1.3): a certain belief that Dr. Seltzer has only been at the university for 5 years.
(Utt 1.4): a very strong belief that Dr. Seltzer's expertise is AI.</Paragraph>
<Paragraph position="7"> Based on CORE's beliefs, please judge the following alternative responses to the user's utterances. Note again that the criteria for your judgment should be the conciseness, coherence, and effectiveness of the response in resolving CORE's and the user's conflict about the main belief (Dr. Seltzer not teaching CS481, in this example).</Paragraph>
<Paragraph position="8"> Response 1-1: /* Select-Focus-Modification returns all rejected beliefs; this is response CN1.1 in Table 4 */
CORE: Dr. Seltzer is the professor of CS481.
Dr. Seltzer is not going on sabbatical in 1998, since he has not been given tenure.
Furthermore, Dr. Seltzer has only been at the university for 5 years.
Finally, Dr. Seltzer's expertise is in AI, not computer networks.</Paragraph>
<Paragraph position="9"> Response 1-2: /* CORE's actual response */
CORE: Dr. Seltzer is not going on sabbatical in 1998, since he hasn't been given tenure.
Also, Dr. Seltzer's expertise is not computer networks.</Paragraph>
<Paragraph position="10"> Response 1-3: /* ... i.e., algorithm does not infer whether or not refuting a child belief will contribute to refutation of its parent belief */
CORE: Dr. Seltzer hasn't been at the university for 6 years.</Paragraph>
<Paragraph position="11"> Response 1-4: /* Select-Focus-Modification remains unchanged, but Select-Justification returns all available evidence against selected foci; this is response CN1.2 in Table 4 */
CORE: Dr. Seltzer is not going on sabbatical in 1998, because he hasn't been given tenure, because he is planning on going on sabbatical in 1999, and because he is the CS graduate program chair in 1998.
Also, Dr. Seltzer's expertise is AI, not computer networks.</Paragraph>
<Paragraph position="12"> Response 1-5: /* Preference ordering in Select-Focus-Modification is changed to prefer addressing a parent belief to refuting a child belief */
CORE: Dr. Seltzer is the professor of CS481.</Paragraph>
</Section> </Section> </Paper>