<?xml version="1.0" standalone="yes"?> <Paper uid="C02-1075"> <Title>A Novel Disambiguation Method For Unification-Based Grammars Using Probabilistic Context-Free Approximations</Title> <Section position="7" start_page="89" end_page="89" type="relat"> <SectionTitle> 6 Related Work and Discussion </SectionTitle> <Paragraph position="0"> The most direct points of comparison of our method are the approaches of Johnson et al. (1999) and Riezler et al. (2000), esp. since they use the same evaluation criteria than we use.</Paragraph> <Paragraph position="1"> In the first approach, log-linear models for LFG grammars were trained on treebanks of about 400 sentences. Precision was evaluated for an ambiguity rate of 10 (using cross-validation), and achieved 59%. If compared to this, our best models achieve a gain of about 28%. However, a comparison is difficult, since the disambiguation task is more easy for our models, due to the low ambiguity rate of our testing corpus. However, in contrast to our approach, supervised training was used by Johnson et al. (1999).</Paragraph> <Paragraph position="2"> In the second approach, log-linear models of LFG grammars were trained on a text corpus of about 36,000 sentences. Precision was evaluated on 550 sentences with an ambiguity rate of 5.4, and achieved 86%. Again, a comparison is difficult.</Paragraph> <Paragraph position="3"> The best models of Riezler et al. (2000) achieved a precision, which is only slightly lower than ours.</Paragraph> <Paragraph position="4"> However, their results were yielded using a corpus, which is about 80 times as big as ours.</Paragraph> <Paragraph position="5"> Similarly, a comparison is difficult for most other state-of-the-art PCFG-based statistical parsers, since different training and test data, and most importantly, different evaluation criteria were used.</Paragraph> </Section> class="xml-element"></Paper>