<?xml version="1.0" standalone="yes"?>
<Paper uid="P01-1006">
  <Title>Evaluation tool for rule-based anaphora resolution methods</Title>
  <Section position="2" start_page="0" end_page="0" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> The evaluation of any NLP algorithm or system should not only indicate its efficiency or performance, but should also help us discover what a new approach brings to the current state of the art in the field. To this end, a comparative evaluation against other well-known or similar approaches is highly desirable.</Paragraph>
    <Paragraph position="1"> We have already voiced concern (Mitkov, 1998a; Mitkov, 2000b) that the evaluation of anaphora resolution algorithms and systems lacks any common ground for comparison, owing not only to differences in the evaluation data but also to the diversity of pre-processing tools employed by each anaphora resolution system. The evaluation picture would not be accurate even if anaphora resolution systems were compared on the same data, since the pre-processing errors carried over into the systems' outputs might vary. As a way forward we have proposed the idea of the evaluation workbench (Mitkov, 2000b) - an open-ended architecture which allows the incorporation of different algorithms and their comparison on the basis of the same pre-processing tools and the same data. This paper discusses a particular configuration of this new evaluation environment incorporating three approaches that share a common &quot;knowledge-poor philosophy&quot;: Kennedy and Boguraev's (1996) parser-free algorithm, Baldwin's (1997) CogNiac and Mitkov's (1998b) knowledge-poor approach.</Paragraph>
  </Section>
</Paper>