<?xml version="1.0" standalone="yes"?>
<Paper uid="C00-2098">
  <Title>A Context-Sensitive Model for Probabilistic LR Parsing of Spoken Language with Transformation-Based Postprocessing</Title>
  <Section position="7" start_page="680" end_page="681" type="evalu">
    <SectionTitle>
5 Evaluation results
</SectionTitle>
    <Paragraph position="0"> At the time this paper is written we have done several experiments on different aspects of our work, some of which are published here.</Paragraph>
    <Paragraph position="1"> 5.1. Experiments on context sensitivity The question of this experiment was: &amp;quot;We have developed a probabilistic parsing model using more context information. Does it generate any benefit?&amp;quot; To answer this question we trained the parser on 19,750 German trees and tested on 1,000 (unseen) utterances with contexts of different sizes (the contexts K3, K4 and K5 are explained in section 3.3). As shown in figure 4 (the x-axis is a weight that controls the influence of the context in the backing-off process), the labeled precision of the K5 parser is consistently better than that of the parsers using less context. The labeled recall of the K5 parser is superior as long as the large context is not overweighted. Higher weights increase a kind of &amp;quot;memory effect&amp;quot;, so that the trained model does not generalize well on (unseen) test data. The optimal K5 weight lies between 0.1 and 0.2, as can be seen in figure 4.</Paragraph>
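The role of the context weight can be illustrated with a small sketch. This is a hedged illustration, not the authors' implementation: the linear-interpolation form of the backing-off step and the function name `backed_off_prob` are assumptions; the paper only states that a weight controls the influence of the larger context.

```python
def backed_off_prob(estimates, weight):
    """Combine per-context-size probability estimates by interpolation.

    `estimates` is ordered from smallest to largest context
    (e.g. [P_K3, P_K4, P_K5]); `weight` in [0, 1] is the influence of
    the larger context at each backing-off step, corresponding to the
    x-axis of figure 4.  A weight that is too high lets the largest
    context dominate, producing the "memory effect" described above.
    """
    prob = estimates[0]
    for larger in estimates[1:]:
        # Blend the current (smaller-context) estimate with the next
        # larger context's estimate.
        prob = (1.0 - weight) * prob + weight * larger
    return prob
```

With weight 0 the sketch reduces to the smallest-context estimate; with weight 1 it uses only the largest context, which matches the over-fitting behavior reported for high weights.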
    <Paragraph position="2"> 5.2. Evaluation of the probabilistic parser We evaluated the parser on German, English and Japanese Verbmobil data. The results of this evaluation are given in the following table:  It is quite interesting that despite the low exact match rate our parser achieves high precision/recall values on parsed utterances. The reason is that we have - for the semantics construction process - a large number of nonterminal symbols in our context-free grammars, and the parser often chooses only one or two slightly incorrect symbols per parse. The mean parsing time per utterance was about 400 ms for German and English and about 30 ms for Japanese on a 166-MHz Sun Ultra-1 workstation.</Paragraph>
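The distinction between exact match and labeled precision/recall drawn above can be made concrete with a small scoring sketch. This is an assumption-laden illustration of standard labeled bracketing scores, not the evaluation code used in the paper; constituents are represented as hypothetical (label, start, end) triples.

```python
from collections import Counter

def labeled_scores(gold_spans, parsed_spans):
    """Labeled precision/recall over (label, start, end) constituents.

    precision = correct constituents / constituents in parser output
    recall    = correct constituents / constituents in the gold tree
    Exact match would require the two constituent multisets to be
    identical, so a single wrong nonterminal label breaks exact match
    while leaving precision/recall high.
    """
    gold, parsed = Counter(gold_spans), Counter(parsed_spans)
    correct = sum((gold & parsed).values())  # multiset intersection
    precision = correct / len(parsed_spans) if parsed_spans else 0.0
    recall = correct / len(gold_spans) if gold_spans else 0.0
    return precision, recall
```

This makes the observation in the text visible: a parse that differs from the gold tree in one of many nonterminal labels fails exact match but still scores well on precision and recall.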
    <Paragraph position="3"> 5.3. Influence of transformation-based error correction It is important to have a very high exact match rate for the semantics construction process. As shown in the table of section 5.2, the exact match rates are quite low; we have therefore learned transformations from the training data to improve the output of the German and English parsers (there was not enough training data to do so for Japanese) and evaluated the results, shown in the following table (TT is an abbreviation for Tree Transformations).</Paragraph>
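The learning of tree transformations from training data can be sketched as a greedy, Brill-style loop: repeatedly pick the candidate rule that most reduces the number of parses differing from the gold trees. This is a minimal sketch under assumed representations (trees as comparable values, rules as tree-to-tree functions); the paper does not spell out its learning procedure.

```python
def learn_transformations(trees, gold, candidate_rules, max_rules=10):
    """Greedily learn an ordered list of tree-correction rules.

    Each rule maps a parser output tree to a (possibly corrected) tree.
    At every step the rule that most reduces the number of trees still
    differing from gold is selected; learning stops when no candidate
    improves the exact match count.
    """
    learned = []
    current = list(trees)
    for _ in range(max_rules):
        errors = sum(t != g for t, g in zip(current, gold))
        best_rule, best_errors = None, errors
        for rule in candidate_rules:
            transformed = [rule(t) for t in current]
            e = sum(t != g for t, g in zip(transformed, gold))
            if e < best_errors:
                best_rule, best_errors = rule, e
        if best_rule is None:
            break  # no rule improves exact match any further
        learned.append(best_rule)
        current = [best_rule(t) for t in current]
    return learned
```

At parse time the learned rules would be applied in order to each parser output, which is how such postprocessing can raise the exact match rate without retraining the parser itself.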
    <Paragraph position="4"> As shown in this table, the tree transformations improve the exact match rate by a relative 16% for</Paragraph>
  </Section>
</Paper>