<?xml version="1.0" standalone="yes"?>
<Paper uid="N04-4012">
  <Title>UI on the Fly: Generating a Multimodal User Interface</Title>
  <Section position="2" start_page="0" end_page="0" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> Multimodal user interfaces are everywhere. The use of a keyboard and mouse on a desktop PC is ubiquitous, if not natural. However, the click-then-type paradigm of common interfaces misses the cross-modal synchronization of timing and meaning that is evident in human-human communication. With coordinated output, novice users could get explanations (redundant content) and experienced users could receive additional (complementary) information, increasing the bandwidth of the interface. Coordinated input (&amp;quot;put that there!&amp;quot;) speeds up input and relieves speech recognition of notoriously hardto-recognize referring expressions such as names. If a user interface is generated on the fly, it can adapt to the situation and special needs of the user as well as to the device.</Paragraph>
    <Paragraph position="1"> While users are not necessarily prone to make multi-modal inputs (Oviatt, 1999), they can still integrate complementary output or use redundant output in noisy situations. Consequently, this paper deals with generating output. We propose a grammar formalism that generalizes decisions about how to deliver content in an adaptable multimodal user interface. We demonstrate it in the context of a user interface for a mobile personal information manager.</Paragraph>
  </Section>
class="xml-element"></Paper>