<?xml version="1.0" standalone="yes"?>
<Paper uid="N04-3002">
  <Title>ITSPOKE: An Intelligent Tutoring Spoken Dialogue System</Title>
  <Section position="2" start_page="0" end_page="0" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> The development of computational tutorial dialogue systems has become increasingly prevalent (Aleven and Rose, 2003), as one method of attempting to close the performance gap between human and computer tutors.</Paragraph>
    <Paragraph position="1"> While many such systems have yielded successful evaluations with students, most are currently text-based (Evens et al., 2001; Aleven et al., 2001; Zinn et al., 2002; VanLehn et al., 2002). There is reason to believe that speech-based tutorial dialogue systems could be even more effective. Spontaneous self-explanation by students improves learning gains during human-human tutoring (Chi et al., 1994), and spontaneous self-explanation occurs more frequently in spoken tutoring than in text-based tutoring (Hausmann and Chi, 2002). In human-computer tutoring, the use of an interactive pedagogical agent that communicates using speech rather than text output improves student learning, while the visual presence or absence of the agent does not impact performance (Moreno et al., 2001). In addition, it has been hypothesized that the success of computer tutors could be increased by recognizing and responding to student emotion. Aist et al. (2002) have shown that adding emotional processing to a dialogue-based reading tutor increases student persistence. Information in the speech signal such as prosody has been shown to be a rich source of information for predicting emotional states in other types of dialogue interactions (Ang et al., 2002; Lee et al., 2002; Batliner et al., 2003; Devillers et al., 2003; Shafran et al., 2003).</Paragraph>
    <Paragraph position="2"> With advances in speech technology, several projects have begun to incorporate basic spoken language capabilities into their systems (Mostow and Aist, 2001; Fry et al., 2001; Graesser et al., 2001; Rickel and Johnson, 2000).</Paragraph>
    <Paragraph position="3"> However, to date there has been little examination of the ramifications of using a spoken modality for dialogue tutoring. To assess the impact and evaluate the utility of adding spoken language capabilities to dialogue tutoring systems, we have built ITSPOKE (Intelligent Tutoring SPOKEn dialogue system), a spoken dialogue system that uses the Why2-Atlas conceptual physics tutoring system (VanLehn et al., 2002) as its back-end. We are using ITSPOKE as a platform for examining whether acoustic-prosodic information can be used to improve the recognition of pedagogically useful information such as student emotion (Forbes-Riley and Litman, 2004; Litman and Forbes-Riley, 2004), and whether speech can improve the performance evaluations of dialogue tutoring systems (e.g., as measured by learning gains, efficiency, usability, etc.) (Rosé et al., 2003).</Paragraph>
  </Section>
</Paper>