<?xml version="1.0" standalone="yes"?>
<Paper uid="W00-0304">
  <Title>NJFun: A Reinforcement Learning Spoken Dialogue System</Title>
  <Section position="2" start_page="0" end_page="0" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> Using the formalism of Markov decision processes (MDPs) and the algorithms of reinforcement learning (RL) has become a standard approach to many AI problems that involve an agent learning to optimize reward through interaction with its environment (Sutton and Barto, 1998). We have adapted the methods of RL to the problem of automatically learning a good dialogue strategy in a fielded spoken dialogue system. Here is a summary of our proposed methodology for developing and evaluating spoken dialogue systems using RL: * Choose an appropriate reward measure for dialogues, and an appropriate representation for dialogue states.</Paragraph>
    <Paragraph position="1"> * Build an initial state-based training system that creates an exploratory data set. Despite being exploratory, this system should provide the desired basic functionality.</Paragraph>
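The methodology above can be sketched with standard tabular Q-learning. The sketch below is purely illustrative: the states, the two initiative actions, the transition probabilities, and the completion reward are all hypothetical stand-ins, not NJFun's actual state space, action set, or reward measure.

```python
import random

random.seed(0)  # reproducibility of this illustrative run

# Toy dialogue MDP (hypothetical; not NJFun's actual state or action space).
# States are coarse dialogue situations; actions are initiative choices.
STATES = ["greet", "ask_info", "confirm", "done"]
ACTIONS = ["system_initiative", "user_initiative"]

def step(state, action):
    """Simulated environment: return (next_state, reward).
    Transitions and rewards are illustrative only; the reward measure
    here is simply task completion (reaching 'done' via 'confirm')."""
    if state == "greet":
        return "ask_info", 0.0
    if state == "ask_info":
        # In this toy model, system initiative elicits usable input more often.
        ok = random.random() < (0.8 if action == "system_initiative" else 0.5)
        return ("confirm", 0.0) if ok else ("ask_info", 0.0)
    # state == "confirm": confirmation ends the dialogue with reward 1.
    return "done", 1.0

def q_learn(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning with epsilon-greedy exploration over the toy MDP."""
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = "greet"
        while s != "done":
            if random.random() < eps:          # explore
                a = random.choice(ACTIONS)
            else:                              # exploit current estimates
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r = step(s, a)
            best_next = 0.0 if s2 == "done" else max(q[(s2, act)] for act in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = q_learn()
# The learned dialogue strategy: the higher-valued action in each state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in STATES if s != "done"}
```

The exploratory data-gathering step in the bullet list corresponds to running the initial system with nondeterministic action choices (here, the epsilon-greedy draws), so that both actions are tried in each state and their values can be estimated.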
    <Paragraph position="2">  In this demonstration session paper, we briefly describe our system, present some sample dialogues, and summarize our main contributions and limitations. Full details of our work (e.g., our reinforcement learning methodology, analysis establishing the veracity of the MDP we learn, a description of an experimental evaluation of NJFun, and analysis of our learned dialogue strategy) can be found in two forthcoming technical papers (Singh et al., 2000; Litman et al., 2000).</Paragraph>
  </Section>
</Paper>