<?xml version="1.0" standalone="yes"?>
<Paper uid="C96-1010">
  <Title>Parsing spoken language without syntax</Title>
  <Section position="5" start_page="50" end_page="50" type="evalu">
    <SectionTitle>
6. Results
</SectionTitle>
    <Paragraph position="0"> This section presents several experiments carried out on our microsemantic analyzer as well as on an LFG parser \[Zweigenbaum 91\].</Paragraph>
    <Paragraph position="1"> These experiments were conducted on the literal written transcriptions of three corpora of spontaneous speech (Table 2), which all correspond to a collaborative drawing task between two human subjects (Wizard of Oz experiment).</Paragraph>
    <Paragraph position="2">  The dialogues were totally unconstrained, so that the corpora correspond to natural spontaneous speech. We compared the two parsers on their robustness and their perplexity.</Paragraph>
    <Section position="1" start_page="50" end_page="50" type="sub_section">
      <SectionTitle>
6.1. Robustness
</SectionTitle>
      <Paragraph position="0"> Table 3 provides the accuracy rates of the two parsers. These results show the benefits of our approach. Around four utterances out of five (83.5%) are indeed processed correctly by the microsemantic parser, whereas the LFG parser's accuracy is limited to 40% on the first two corpora. Its robustness is noticeably higher on the third corpus, which presents a moderate ratio of ungrammatical utterances. The overall performance of the LFG parser nevertheless suggests that a syntactic approach is not suitable for spontaneous speech, in contrast to the microsemantic one.</Paragraph>
      <Paragraph position="1"> Table 3: Average robustness of the LFG and the microsemantic parser. Accuracy rate = number of correct analyses / number of tested utterances.</Paragraph>
      <Paragraph position="2">  Besides, the independence of microsemantics from the grammatical shape of the utterances ensures that its robustness remains relatively unaltered across corpora (standard deviation σ = 0.036).</Paragraph>
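As a hedged illustration of Table 3's definition, the accuracy rate and its spread across corpora can be computed as follows; the per-corpus counts are invented placeholders, not the paper's actual figures:

```python
from statistics import pstdev

# Accuracy rate = correct analyses / tested utterances (Table 3's definition).
# The counts below are illustrative placeholders, not the paper's data.
corpora = {
    "corpus 1": (167, 200),  # (correct analyses, tested utterances)
    "corpus 2": (250, 300),
    "corpus 3": (209, 250),
}

rates = {name: correct / tested for name, (correct, tested) in corpora.items()}
for name, rate in rates.items():
    print(f"{name}: accuracy = {rate:.1%}")

# A small standard deviation means robustness is stable across corpora.
print(f"standard deviation = {pstdev(rates.values()):.3f}")
```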
    </Section>
    <Section position="2" start_page="50" end_page="50" type="sub_section">
      <SectionTitle>
6.2. Perplexity
</SectionTitle>
      <Paragraph position="0"> As mentioned above, the microsemantic parser largely ignores the constraints of linear precedence. This tolerant approach is motivated by the frequent ordering violations that spontaneous speech involves. However, it leads to a noticeable increase in perplexity. This deterioration is particularly apparent for sentences that include at least eight lexemes (Table 4).</Paragraph>
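The combinatorial source of this perplexity can be sketched with a naive upper bound: with linear precedence ignored, an utterance of n lexemes admits up to n! candidate orderings. The actual hypothesis counts in Table 4 depend on the grammar and are far smaller, but they follow the same steep growth:

```python
from math import factorial

# Naive upper bound only: with no linear-precedence constraints, n lexemes
# could in principle be ordered in n! ways. This is illustrative, not the
# paper's measured hypothesis counts (Table 4).
for n in (4, 6, 8, 10):
    print(f"{n} lexemes -> up to {factorial(n):,} orderings")
```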
      <Paragraph position="1"> Table 4: Number of parallel hypothetical structures according to utterance length.  At first, we proposed to reduce this perplexity through a cooperation between the microsemantic analyzer and an LFG parser \[Antoine 94\]. Although this cooperation achieves a noticeable reduction of perplexity, it is however ineffective when the LFG parser collapses. This is why we intend at present to insert directly some ordering constraints that spontaneous speech never violates. \[Rainbow 94\] established that any ordering rule should be expressed lexically. We consequently suggest partially ordering the arguments of every lexical subcategorization. Thus, each frame will be assigned a few equations which characterize some ordering priorities among its arguments.</Paragraph>
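A minimal sketch of such lexically expressed partial ordering: each subcategorization frame carries a few precedence equations (pairs "role a must precede role b"), and an analysis is kept only if the observed argument order violates none of them. The frame and role names here are invented for illustration, not taken from the paper:

```python
def satisfies_ordering(observed, precedences):
    """Return True if every (before, after) pair in `precedences` holds in
    the `observed` argument sequence; pairs whose roles are absent are
    ignored, keeping the ordering partial rather than total."""
    position = {role: i for i, role in enumerate(observed)}
    return all(
        position[a] < position[b]
        for a, b in precedences
        if a in position and b in position
    )

# Hypothetical frame for a verb like "draw": the agent tends to precede
# the theme, while other roles stay unordered.
frame_precedences = [("AGENT", "THEME")]

print(satisfies_ordering(["AGENT", "THEME", "LOCATION"], frame_precedences))  # True
print(satisfies_ordering(["THEME", "AGENT"], frame_precedences))              # False
```

Because absent roles are skipped, an utterance that realizes only one argument never triggers a violation, which matches the idea of ordering *priorities* rather than rigid rules.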
    </Section>
  </Section>
</Paper>