<?xml version="1.0" standalone="yes"?> <Paper uid="W04-2326"> <Title>Annotating Student Emotional States in Spoken Tutoring Dialogues</Title> <Section position="2" start_page="0" end_page="0" type="intro"> <SectionTitle> 1 Introduction </SectionTitle> <Paragraph position="0"> This paper describes a coding scheme for annotating student emotional states in spoken dialogue tutoring corpora, and analyzes the scheme not only for its reliability, but also for its utility in developing a spoken dialogue tutoring system that can model and respond to student emotions. Motivation for this work comes from the performance discrepancy between human tutors and current machine tutors: typically, students tutored by human tutors achieve higher learning gains than students tutored by computer tutors. The development of computational tutorial dialogue systems (Rosé and Aleven, 2002) represents one method of closing this performance gap; e.g., it is hypothesized that dialogue-based tutors allow greater adaptivity to students' beliefs and misconceptions. Another method for closing this gap involves incorporating emotion prediction and adaptation into computer tutors (Kort et al., 2001; Evens, 2002).</Paragraph> <Paragraph position="1"> For example, Aist et al. (2002) have shown that adding human-provided emotional scaffolding to an automated reading tutor increases student persistence. This suggests that the success of computer dialogue tutors could be increased by responding both to what a student says and to how s/he says it, e.g. with confidence or uncertainty.</Paragraph> <Paragraph position="2"> To assess the impact of adding emotion modeling to dialogue tutoring systems, we are building ITSPOKE (Intelligent Tutoring SPOKEn dialogue system), a spoken dialogue system that uses the Why2-Atlas conceptual physics tutoring system (VanLehn et al., 2002) as its back-end.
Our first step towards incorporating emotion processing into ITSPOKE is to develop a reliable annotation scheme for student emotions. Our next step will be to use data annotated according to this scheme to enhance ITSPOKE to dynamically predict and adapt to student emotions. This imposes additional constraints on our annotation scheme beyond good reliability: our annotations must be predictable by ITSPOKE with a high degree of accuracy (automatically and in real time), and they must be expressive enough to support the range of desired system adaptations.</Paragraph> <Paragraph position="3"> In Section 2 we review previous work in emotion annotation for spoken dialogue systems. In Section 3 we discuss our tutoring research project and corpora. In Section 4 we present an emotion annotation scheme for this domain. In Section 5 we analyze our scheme with respect to interannotator agreement and predictive accuracy, using a corpus of human tutoring dialogues. Our interannotator agreement indicates that our scheme is reliable, while machine learning experiments on the annotated data indicate that our emotion labels can be predicted with a high degree of accuracy.</Paragraph> <Paragraph position="4"> In Section 6 we analyze more expressive versions of our scheme, and discuss differences between annotating human and computer spoken tutoring dialogues.</Paragraph> </Section> </Paper>