<?xml version="1.0" standalone="yes"?>
<Paper uid="P97-1062">
  <Title>Learning Parse and Translation Decisions</Title>
  <Section position="9" start_page="488" end_page="488" type="concl">
    <SectionTitle>
8 Conclusion
</SectionTitle>
    <Paragraph position="0"> We try to bridge the gap between the typically hardto-scale hand-crafted approach and the typically large-scale but context-poor statistical approach for unrestricted text parsing.</Paragraph>
    <Paragraph position="1"> Using * a rich and unified context with 205 features, * a complex parse action language that allows integrated part of speech tagging and syntactic and semantic processing, * a sophisticated decision structure that generalizes traditional decision trees and lists, * a balanced use of machine learning and micromodular background knowledge, i.e. very small pieces of highly' independent information * a modest number of interactively acquired examples from the Wall Street Journal, our system CONTEX * computes parse trees and translations fast, because it uses a deterministic single-pass parser, * shows good robustness when encountering novel constructions, * produces good parsing results comparable to those of the leading statistical methods, and * delivers competitive results for machine translations. null While many limited-context statistical approaches have already reached a performance ceiling, we still expect to significantly improve our results when increasing our training base beyond the currently 256 sentences, because the learning curve hasn't flattened out yet and adding substantially more examples is still very feasible. Even then the training size will compare favorably with the huge number of training sentences necessary for many statistical systems.</Paragraph>
  </Section>
class="xml-element"></Paper>