<?xml version="1.0" standalone="yes"?> <Paper uid="P06-1026"> <Title>Learning the Structure of Task-driven Human-Human Dialogs</Title> <Section position="10" start_page="206" end_page="206" type="concl"> <SectionTitle> 8 Conclusions </SectionTitle> <Paragraph position="0"> In order to build a dialog manager using a data-driven approach, the following are necessary: a model for labeling/interpreting the user's current action; a model for identifying the current subtask/topic; and a model for predicting what the system's next action should be. Prior research in plan identification and in dialog act labeling has identified possible features for use in such models, but has not looked at the performance of different feature sets (reflecting different amounts of context and different views of dialog) across different domains (label sets). In this paper, we compared the performance of a dialog act labeler/predictor across three different tag sets: one using very detailed, domain-specific dialog acts usable for interpretation and generation; and two using general-purpose dialog acts and corpora available to the larger research community. We then compared two models for subtask labeling: a flat, chunk-based model and a hierarchical, parsing-based model. Findings include that simpler chunk-based models perform as well as hierarchical models for subtask labeling and that a dialog act feature is not helpful for subtask labeling.</Paragraph> <Paragraph position="1"> In ongoing work, we are using our best performing models for both DM and LG components (to predict the next dialog move(s), and to select the next system utterance). In future work, we will address the use of data-driven dialog management to improve SLU.</Paragraph> </Section> </Paper>