<?xml version="1.0" standalone="yes"?>
<Paper uid="W04-1012">
  <Title>Automatic Evaluation of Summaries Using Document Graphs</Title>
  <Section position="2" start_page="0" end_page="0" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> Document summarization has been the focus of many researchers over the last decade, due to the increase in on-line information and the need to find the most important information in a document or set of documents. One of the biggest challenges in text summarization research is how to evaluate the quality of a summary or the performance of a summarization tool. There are different approaches to evaluating the overall quality of a summarization system. In general, they fall into two categories: intrinsic and extrinsic (Sparck-Jones and Galliers, 1996). Extrinsic approaches measure the quality of a summary by how it affects performance on certain tasks. In intrinsic approaches, the quality of the summarization is evaluated through analysis of the content of the summary itself. Both categories rely on human involvement to judge the summarization outputs. The problem with involving humans in evaluating summaries is that we cannot hire human judges every time we want to evaluate summaries (Mani and Maybury, 1999). In this paper, we present a new automated way to evaluate machine-generated summaries without human judges, which decreases the cost of determining which summarization system is best. In our experiments, we used data from the Document Understanding Conference 2002 (DUC2002).</Paragraph>
  </Section>
</Paper>