<?xml version="1.0" standalone="yes"?>
<Paper uid="P95-1007">
  <Title>Corpus Statistics Meet the Noun Compound: Some Empirical Results</Title>
  <Section position="5" start_page="51" end_page="51" type="concl">
    <SectionTitle>
4 Conclusion
</SectionTitle>
    <Paragraph position="0"> The experiments above demonstrate a number of important points. The most general of these is that even quite crude corpus statistics can provide information about the syntax of compound nouns. At the very least, this information can be applied in broad coverage parsing to assist in the control of search. I have also shown that with a corpus of moderate size it is possible to get reasonable results without using a tagger or parser by employing a customised training pattern. While using windowed co-occurrence did not help here, it is possible that under more data sparse conditions better performance could be achieved by this method.</Paragraph>
    <Paragraph position="1"> The significance of the use of conceptual association deserves some mention. I have argued that without it a broad coverage system would be impossible.</Paragraph>
    <Paragraph position="2"> This is in contrast to previous work on conceptual association where it resulted in little improvement on a task which could already be performed. In this study, not only has the technique proved its worth by supporting generality, but through generalisation of training information it outperforms the equivalent lexical association approach given the same information. null Amongst all the comparisons performed in these experiments one stands out as exhibiting the greatest contrast. In all experiments the dependency model provides a substantial advantage over the adjacency model, even though the latter is more prevalent in proposals within the literature. This result is in accordance with the informal reasoning given in section 1.3. The model also has the further commendation that it predicts correctly the observed proportion of left-branching compounds found in two independently extracted test sets.</Paragraph>
    <Paragraph position="3"> In all, the most accurate technique achieved an accuracy of 81% as compared to the 67% achieved by guessing left-branching. Given the high frequency of occurrence of noun compounds in many texts, this suggests tha; the use of these techniques in probabilistic parsers will result in higher performance in broad coverage natural language processing.</Paragraph>
  </Section>
class="xml-element"></Paper>