<?xml version="1.0" standalone="yes"?>
<Paper uid="N06-3001">
  <Title>Incorporating Gesture and Gaze into Multimodal Models of Human-to-Human Communication</Title>
  <Section position="1" start_page="0" end_page="0" type="abstr">
    <SectionTitle>
Abstract
</SectionTitle>
    <Paragraph position="0"> Structural information in language is important for obtaining a better understanding of human communication (e.g., sentence segmentation, speaker turns, and topic segmentation). Human communication involves a variety of multimodal behaviors that signal both propositional content and structure, e.g., gesture, gaze, and body posture. These non-verbal signals have tight temporal and semantic links to spoken content. In my thesis, I am working on incorporating non-verbal cues into a multimodal model to better predict structural events and thereby further improve the understanding of human communication.</Paragraph>
    <Paragraph position="1"> Some research results are summarized in this document, and my future research plan is described.</Paragraph>
  </Section>
</Paper>