<?xml version="1.0" standalone="yes"?>
<Paper uid="H92-1077">
  <Title>SESSION 12: CONTINUOUS SPEECH RECOGNITION AND EVALUATION II*</Title>
  <Section position="3" start_page="379" end_page="380" type="metho">
    <SectionTitle>
VERBALIZED PUNCTUATION VS.
NON-VERBALIZED PUNCTUATION
</SectionTitle>
    <Paragraph position="0"> First, a sampling of the comments in favor of continuing to collect data with a split between VP and NVP.</Paragraph>
    <Paragraph position="1"> Janet Baker: People using real dictation systems use VP, so any recognition system for dictation must handle VP.</Paragraph>
    <Paragraph position="2"> Doug Paul: Both NVP and VP are needed to support both general recognition and dictation; reading with VP may be awkward at first, but it is not hard to get used to.
Michael Picheny: Might as well use VP, since it is easier for the recognizer and people who dictate do not seem to mind. He emphasized his strong support for VP.</Paragraph>
    <Paragraph position="3"> Second, a sampling of comments generally against collecting much more VP data.</Paragraph>
    <Paragraph position="4"> Rich Schwartz: Given recording problems with VP, would be happier with NVP for general recognition.</Paragraph>
    <Paragraph position="5"> effort, so it is very important scientifically to have a correct language model, as shown in the paper by Paul, Baker, and Baker at the 1990 Speech and Natural Language Workshop. Prompting texts are a pragmatic way to do this.
Jordan Cohen: Prompting should be an empirical issue -- do a real dictation experiment and see what people do.
Bob Moore: Preprocessing is a small effect in the 20K language model, so it should be possible to generate language models from text without constraining the prompts.
Victor Zue: Cited the MIT study, which showed the variability of responses from unpreprocessed prompts; also raised the issue (not discussed further) of the selection of a limited vocabulary.</Paragraph>
  </Section>
</Paper>