File Information
File: 05-lr/acl_arc_1_sum/cleansed_text/xml_by_section/abstr/03/p03-1048_abstr.xml
Size: 887 bytes
Last Modified: 2025-10-06 13:42:55
<?xml version="1.0" standalone="yes"?> <Paper uid="P03-1048"> <Title>Evaluation challenges in large-scale document summarization</Title> <Section position="1" start_page="0" end_page="0" type="abstr"> <SectionTitle> Abstract </SectionTitle> <Paragraph position="0"> We present a large-scale meta evaluation of eight evaluation measures for both single-document and multi-document summarizers. To this end we built a corpus consisting of (a) 100 Million automatic summaries using six summarizers and baselines at ten summary lengths in both English and Chinese, (b) more than 10,000 manual abstracts and extracts, and (c) 200 Million automatic document and summary retrievals using 20 queries. We present both qualitative and quantitative results showing the strengths and drawbacks of all evaluation methods and how they rank the different summarizers.</Paragraph> </Section> class="xml-element"></Paper>