<?xml version="1.0" standalone="yes"?>
<Paper uid="N06-1035">
<Title>Comparing the Utility of State Features in Spoken Dialogue Using Reinforcement Learning</Title>
<Section position="2" start_page="0" end_page="0" type="abstr">
<SectionTitle> Abstract </SectionTitle>
<Paragraph position="0"> Recent work in designing spoken dialogue systems has focused on using Reinforcement Learning to automatically learn the best action for a system to take at any point in the dialogue in order to maximize dialogue success. While policy development is very important, choosing the best features to model the user state is equally important, since it impacts the actions a system should take. In this paper, we compare the relative utility of adding three features to a model of user state in the domain of a spoken dialogue tutoring system. In addition, we look at the effects of these features on what type of question a tutoring system should ask at any state, and compare this with our previous work on using feedback as the system action.</Paragraph>
</Section>
</Paper>
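
The following is a minimal illustrative sketch of the general idea the abstract describes (learning a question-selection policy with Reinforcement Learning and comparing state representations with and without an extra feature). It is not the paper's actual setup: the feature names ("correctness", "certainty"), the two question-type actions, the reward scheme, and the student simulator are hypothetical placeholders, and tabular Q-learning is used only as a stand-in policy learner.

# Illustrative sketch only, not the paper's experimental setup. Compares the
# greedy policies learned from two state representations for a simulated
# tutoring dialogue; all features, actions, rewards, and the student
# simulator below are hypothetical.
import random
from collections import defaultdict

ACTIONS = ["ask_simple", "ask_complex"]  # hypothetical tutor question types

def simulate_turn(correctness, certainty, action):
    """Hypothetical student simulator: certainty changes which question type helps."""
    if certainty == "certain":
        p_correct = 0.8 if action == "ask_complex" else 0.6
    else:
        p_correct = 0.3 if action == "ask_complex" else 0.7
    next_correct = "correct" if random.random() < p_correct else "incorrect"
    next_certain = "certain" if next_correct == "correct" else random.choice(["certain", "uncertain"])
    reward = 1.0 if next_correct == "correct" else 0.0
    return next_correct, next_certain, reward

def run_q_learning(use_certainty, episodes=5000, turns=5,
                   alpha=0.1, gamma=0.9, epsilon=0.1):
    """Learn a question-selection policy with or without the certainty feature."""
    Q = defaultdict(float)

    def encode(correctness, certainty):
        # The baseline state ignores certainty; the richer state includes it.
        return (correctness, certainty) if use_certainty else (correctness,)

    for _ in range(episodes):
        correctness, certainty = "incorrect", random.choice(["certain", "uncertain"])
        for _ in range(turns):
            state = encode(correctness, certainty)
            if random.random() < epsilon:
                action = random.choice(ACTIONS)          # explore
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit
            correctness, certainty, reward = simulate_turn(correctness, certainty, action)
            next_state = encode(correctness, certainty)
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    return Q

# Compare the policies induced by the two state representations.
for use_certainty in (False, True):
    Q = run_q_learning(use_certainty)
    policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for (s, _) in Q}
    label = "state + certainty feature" if use_certainty else "baseline state"
    print(label, "->", policy)

With the extra feature, the learned policy can choose a different question type for certain versus uncertain students, whereas the baseline state forces a single choice per correctness value; that gap in achievable reward is one simple way to think about the "relative utility" of a state feature.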