<?xml version="1.0" standalone="yes"?>
<Paper uid="N04-4039">
  <Title>Converting Text into Agent Animations: Assigning Gestures to Text</Title>
  <Section position="6" start_page="2" end_page="2" type="concl">
    <SectionTitle>5 Discussion and Conclusion</SectionTitle>
    <Paragraph position="0">We have addressed the problems of assigning gestures to text and of converting that text into agent animations synchronized with speech. First, our empirical study identified lexical and syntactic information that is useful for assigning gestures to plain text. Specifically, when a bunsetsu unit is a constituent of a coordination, gestures occur almost half the time. Gestures also frequently co-occur with nominal phrases modified by a clause. These findings suggest that syntactic structure is a stronger determinant of gesture occurrence than theme/rheme or given/new information status signaled by local grammatical cues.</Paragraph>
    <Paragraph position="1">We plan to enhance our model by incorporating more general discourse-level information; the current system exploits cue words, which provide only a very partial kind of discourse information. For instance, gestures frequently occur at episode boundaries, and the pushing and popping of discourse segments (Grosz & Sidner, 1986) may also affect gesture occurrence. By integrating a discourse analyzer into the LTM, such structural discourse information could be exploited by the model.</Paragraph>
    <Paragraph position="2">Another important direction is to evaluate the effectiveness of agent gestures in actual human-agent interaction. We expect that if our model can generate gestures with appropriate timing to emphasize important words and phrases, users will perceive agent presentations as more lifelike and comprehensible. We plan to conduct a user study to examine this hypothesis.</Paragraph>
  </Section>
</Paper>