<?xml version="1.0" standalone="yes"?>
<Paper uid="C00-1017">
  <Title>Probabilistic Parsing and Psychological Plausibility</Title>
  <Section position="7" start_page="115" end_page="116" type="concl">
    <SectionTitle>
7 Conclusions
</SectionTitle>
    <Paragraph position="0"> A central challenge in computational psycholinguistics is to explain how it is that people are so accurate and robust in processing language.</Paragraph>
    <Paragraph position="1"> Given the substantial psycholinguistic evidence for statistical cognitive mechanisms, our objective in this paper was to assess the plausibility of using wide-coverage probabilistic parsers to model human linguistic performance. In particular, we set out to investigate the effects of imposing incremental processing and significant memory limitations on such parsers.</Paragraph>
    <Paragraph position="2"> The central finding of our experiments is that incremental parsing with massive (97-99%) pruning of the search space does not impair the accuracy of stochastic context-free parsers.</Paragraph>
    <Paragraph position="3"> This basic finding was robust across different settings of the beams, and for the original Penn Treebank encoding as well as the parent encoding. We did, however, observe significantly reduced memory and time requirements when using combined active/inactive edge filtering. To our knowledge, this is the first investigation on treebank grammars that systematically varies the beam for pruning.</Paragraph>
    <Paragraph position="4"> Our aim in this paper is not to challenge state-of-the-art parsing accuracy results. For our experiments we used a purely context-free stochastic parser combined with a very simple pruning scheme based on simple &quot;unigram&quot; probabilities, and no use of right context. We do, however, suggest that our result should apply to richer, more sophisticated probabilistic models, e.g. when adding word statistics to the model (Charniak, 1997).</Paragraph>
    <Paragraph position="5"> Comparison of results is not straightforward, since (Roark and Johnson, 1999) report accuracies only for those sentences for which a parse tree was generated (between 93% and 98% of the sentences), while our parser (except for very small beams) generates parses for virtually all sentences; hence we report accuracies for all sentences.</Paragraph>
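The per-cell beam pruning discussed above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each chart edge carries a simple &quot;unigram&quot;-style probability estimate, and discards any edge whose probability falls below a fixed fraction (the beam) of the best edge in the same cell; `prune_cell` and its `beam` parameter are hypothetical names.

```python
# Hypothetical sketch of per-cell beam pruning in a chart parser.
# Each edge is a (label, probability) pair; `beam` is the fraction of
# the best probability an edge must reach to survive. A small beam
# such as 0.001 corresponds to very aggressive pruning.

def prune_cell(edges, beam=0.001):
    """Keep only edges with probability >= beam * best-in-cell."""
    if not edges:
        return []
    best = max(p for _, p in edges)
    threshold = best * beam
    return [(label, p) for label, p in edges if p >= threshold]

# Example: the very low-probability VP edge falls below the threshold.
cell = [("NP", 0.02), ("VP", 0.00001), ("S", 0.015)]
print(prune_cell(cell))  # [('NP', 0.02), ('S', 0.015)]
```

Because the threshold is relative to the best edge in the cell, the scheme prunes a roughly constant fraction of the search space regardless of the absolute probability scale.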
    <Paragraph position="6"> We therefore conclude that wide-coverage, probabilistic parsers do not suffer impaired accuracy when subject to strict cognitive memory limitations and incremental processing. Furthermore, parse times are substantially reduced. This suggests that it may be fruitful to pursue the use of these models within computational psycholinguistics, where it is necessary to explain not only the relatively rare 'pathologies' of the human parser, but also its more frequently observed accuracy and robustness.</Paragraph>
  </Section>
</Paper>