<?xml version="1.0" standalone="yes"?>
<Paper uid="W04-1010">
  <Title>Template-Filtered Headline Summarization</Title>
  <Section position="3" start_page="0" end_page="0" type="relat">
    <SectionTitle>
2 Related Work
</SectionTitle>
    <Paragraph position="0"> Several previous systems were developed to address the need for headline-style summaries.</Paragraph>
    <Paragraph position="1"> A lossy summarizer that 'translates' news stories into target summaries using the 'IBM-style' statistical machine translation (MT) model was shown in (Banko, et al., 2000). Conditional probabilities for a limited vocabulary and bigram transition probabilities as headline syntax approximation were incorporated into the translation model. It was shown to have worked surprisingly well with a stand-alone evaluation of quantitative analysis on content coverage. The use of a noisy-channel model and a Viterbi search was shown in another MT-inspired headline summarization system (Zajic, et al., 2002). The method was automatically evaluated by BiLingual Evaluation Understudy (Bleu) (Papineni, et al., 2001) and scored 0.1886 with its limited length model.</Paragraph>
    <Paragraph position="2"> A nonstatistical system, coupled with linguistically motivated heuristics, using a parse-and-trim approach based on parse trees was reported in (Dorr, et al., 2003). It achieved 0.1341 on Bleu with an average of 8.5 words.</Paragraph>
    <Paragraph position="3"> Even though human evaluations were conducted in the past, we still do not have sufficient material to perform a comprehensive comparative evaluation on a large enough scale to claim that one method is superior to others.</Paragraph>
  </Section>
class="xml-element"></Paper>