<?xml version="1.0" standalone="yes"?>
<Paper uid="N03-1004">
  <Title>In Question Answering, Two Heads Are Better Than One</Title>
  <Section position="2" start_page="0" end_page="0" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> Traditional question answering (QA) systems typically employ a pipeline approach, consisting roughly of question analysis, document/passage retrieval, and answer selection (see e.g., (Prager et al., 2000; Moldovan et al., 2000; Hovy et al., 2001; Clarke et al., 2001)). Although a typical QA system classifies questions based on expected answer types, it adopts the same strategy for locating potential answers from the same corpus regardless of the question classification. In our own earlier work, we developed a specialized mechanism called Virtual Annotation for handling definition questions (e.g., &amp;quot;Who was Galileo?&amp;quot; and &amp;quot;What are antibiotics?&amp;quot;) that consults, in addition to the standard reference corpus, a structured knowledge source (WordNet) for answering such questions (Prager et al., 2001). We have shown that better performance is achieved by applying Virtual Annotation and our general purpose QA strategy in parallel. In this paper, we investigate the impact of adopting such a multi-strategy and multi-source approach to QA in a more general fashion.</Paragraph>
    <Paragraph position="1"> Our approach to question answering is additionally motivated by the success of ensemble methods in machine learning, where multiple classifiers are employed and their results are combined to produce the final output of the ensemble (for an overview, see (Dietterich, 1997)).</Paragraph>
    <Paragraph position="2"> Such ensemble methods have recently been adopted in question answering (Chu-Carroll et al., 2003b; Burger et al., 2003). In our question answering system, PI-QUANT, we utilize in parallel multiple answering agents that adopt different processing strategies and consult different knowledge sources in identifying answers to given questions, and we employ resolution mechanisms to combine the results produced by the individual answering agents.</Paragraph>
    <Paragraph position="3"> We call our approach multi-strategy since we combine the results from a number of independent agents implementing different answer finding strategies. We also call it multi-source since the different agents can search for answers in multiple knowledge sources. In this paper, we focus on two answering agents that adopt fundamentally different strategies: one agent uses predominantly knowledge-based mechanisms, whereas the other agent is based on statistical methods. Our multi-level resolution algorithm enables combination of results from each answering agent at the question, passage, and/or answer levels. Our experiments show that in most cases our multi-level resolution algorithm outperforms its components, supporting a tightly-coupled design for multi-agent QA systems. Experimental results show significant performance improvement over our single-strategy, single-source baselines, with the best performing multi-level resolution algorithm achieving a 35.0% relative improvement in the number of correct answers and a 32.8% improvement in average precision, on a previously unseen test set.</Paragraph>
  </Section>
class="xml-element"></Paper>