<?xml version="1.0" standalone="yes"?>
<Paper uid="P94-1001">
  <Title>Discourse Obligations in Dialogue Processing</Title>
  <Section position="7" start_page="19" end_page="19" type="concl">
    <SectionTitle>
5 Discussion
</SectionTitle>
    <Paragraph position="0"> We have argued that obligations play an important role in accounting for the interactions in dialog. Obligations do not replace the plan-based model, but augment it. The resulting model more readily accounts for discourse behavior in adversarial situations and other situations where it is implausible that the agents adopt each others goals. The obligations encode learned social norms, and guide each agent's behavior without the need for intention recognition or the use of shared plans at the discourse level. While such complex intention recognition may be required in some complex interactions, it is not needed to handle the typical interactions of everyday discourse. Furthermore, there is no requirement for mutually-agreed upon rules that create obligations. Clearly, the more two agents agree on the rules, the smoother the interaction becomes, and some rules are clearly virtually universal. But each agent has its own set of individual rules, and we do not need to appeal to shared knowledge to account for local discourse behavior.</Paragraph>
    <Paragraph position="1"> We have also argued that an architecture that uses obligations provides a much simpler implementation than the strong plan-based approaches. In particular, much of local discourse behavior can arise in a &amp;quot;reactive manner&amp;quot; without the need for complex planning. The other side of the coin, however, is a new set of problems that arise in planning actions that satisfy the multiple constraints that arise from the agent's personal goals and perceived obligations.</Paragraph>
    <Paragraph position="2"> The model presented here allows naturally for a mixed-initiative conversation and varying levels of cooperativity. Following the initiative of the other can be seen as an obligation driven process, while leading the conversation will be goal driven. Representing both obligations and goals explicitly allows the system to naturally shift from one mode to the other. In a strongly cooperative domain, such as TRAINS, the system can subordinate working on its own goals to locally working on concerns of the user, without necessarily having to have any shared discourse plan. In less cooperative situations, the same architecture will allow a system to still adhere to the conversational conventions, but respond in different ways, perhaps rejecting proposals and refusing to answer questions.</Paragraph>
  </Section>
class="xml-element"></Paper>