<?xml version="1.0" standalone="yes"?>
<Paper uid="P97-1036">
  <Title>Unification-based Multimodal Integration</Title>
  <Section position="1" start_page="0" end_page="0" type="abstr">
    <SectionTitle>
Abstract
</SectionTitle>
    <Paragraph position="0"> Recent empirical research has shown conclusive advantages of multimodal interaction over speech-only interaction for map-based tasks. This paper describes a multimodal language processing architecture which supports interfaces allowing simultaneous input from speech and gesture recognition. Integration of spoken and gestural input is driven by unification of typed feature structures representing the semantic contributions of the different modes. This integration method allows the component modalities to mutually compensate for each others' errors. It is implemented in Quick-Set, a multimodal (pen/voice) system that enables users to set up and control distributed interactive simulations.</Paragraph>
  </Section>
class="xml-element"></Paper>