<?xml version="1.0" standalone="yes"?> <Paper uid="H05-1032"> <Title>Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP), pages 249-256, Vancouver, October 2005. ©2005 Association for Computational Linguistics Bayesian Learning in Text Summarization</Title> <Section position="6" start_page="255" end_page="255" type="concl"> <SectionTitle> 5 Concluding Remarks </SectionTitle> <Paragraph position="0"> The paper showed how information on human judgments can be incorporated into text summarization in a principled manner through Bayesian modeling, and also demonstrated how the approach improves the performance of a summarizer, using data collected from human subjects.</Paragraph> <Paragraph position="1"> The present study is motivated by the view that summarization is a particular form of collaborative filtering (CF), wherein we view a summary as a particular set of sentences favored by a particular user or group of users, just like any of the other things people normally have preferences about, such as CDs, books, paintings, emails, news articles, etc. Importantly, under CF we would not be asking, what is the 'correct' or gold-standard summary for document X? - the question that consumed much of the past research on summarization. Rather, what we are asking is, what summary is popularly favored for X? Indeed, the fact that there could be as many summaries as there are angles from which to look at the text may favor the CF view of summary in general: the idea of what constitutes a good summary may vary from person to person, and may well be influenced by the particular interests and concerns of the people we elicit data from.</Paragraph> <Paragraph position="2"> [Footnote] How to best set l requires some experimenting with data, and the optimal value may vary from domain to domain. An interesting approach would be to empirically optimize l using the methods suggested in MacKay and Peto (1994).</Paragraph> <Paragraph position="3"> [Footnote 10] Incidentally, summarizers, Bayesian or not, perform considerably better on G3K3 than on G1K3 or G2K3. This happens presumably because a large portion of the votes there concentrate in a rather small region of the text, a property any classifier should pick up easily.</Paragraph> <Paragraph position="4"> Among recent work with similar concerns, one notable example is the Pyramid scheme (Nenkova and Passonneau, 2004), where one does not declare a particular human summary an absolute reference against which to compare summaries, but rather makes every one of the multiple human summaries at hand bear on the evaluation; Rouge (Lin and Hovy, 2003) represents another such effort. The Bayesian summarist represents yet another, whereby one seeks a summary most typical of those created by humans.</Paragraph> </Section> </Paper>