<?xml version="1.0" standalone="yes"?>
<Paper uid="P06-1113">
  <Title>Question Answering with Lexical Chains Propagating Verb Arguments</Title>
  <Section position="3" start_page="0" end_page="897" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> In Question Answering, the correct answer may be formulated with words that are different from, but related to, those in the question. Simply connecting the words in the question with the words in a candidate answer is not enough to recognize the correct answer. For example, the following question from TREC 2004 (Voorhees, 2004): Q: (boxer Floyd Patterson) Who did he beat to win the title? has the following wrong answer: WA: He saw Ingemar Johanson knock down Floyd Patterson seven times there in winning the heavyweight title.</Paragraph>
    <Paragraph position="1"> Although the above sentence contains the words Floyd, Patterson, win, and title, and the verb beat can be connected to the verb knock down using lexical chains from WordNet, the sentence does not answer the question because the verb arguments are in the wrong positions. The proposed answer describes Floyd Patterson as the object/patient of the beating event, while in the question he is the subject/agent of a similar event. Therefore, selecting the correct answer from a list of candidate answers requires checking additional constraints, including the match of verb arguments. Previous approaches to answer ranking used syntactic partial matching, syntactic and semantic relations, and logic forms to select the correct answer from a set of candidate answers. Tanev et al. (2004) used an algorithm for partial matching of syntactic structures; for lexical variations they used a dependency-based thesaurus of similar words (Lin, 1998). Cui et al. (2004) used an algorithm that computes the similarity between dependency relation paths from a parse tree to rank the candidate answers.</Paragraph>
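The kind of lexical chain meant here (a short path of WordNet relations connecting beat to knock down) can be illustrated with a small search over WordNet. This is only a sketch: the relation set, the depth limit, and the function name lexical_chain are illustrative assumptions, not the paper's exact chain definition.

```python
# Sketch: search for a short chain of WordNet relations linking two verbs,
# e.g. "beat" and "knock_down". Relation set and depth limit are assumptions.
from collections import deque
from nltk.corpus import wordnet as wn

RELATIONS = ("hypernyms", "hyponyms", "entailments",
             "causes", "verb_groups", "also_sees")

def lexical_chain(verb1, verb2, max_depth=3):
    """Breadth-first search over WordNet verb relations; returns a list of
    synset names connecting verb1 to verb2, or None if no short chain exists."""
    targets = set(wn.synsets(verb2, pos=wn.VERB))
    queue = deque((s, [s]) for s in wn.synsets(verb1, pos=wn.VERB))
    seen = set()
    while queue:
        synset, path = queue.popleft()
        if synset in targets:
            return [s.name() for s in path]
        if synset in seen or len(path) > max_depth:
            continue
        seen.add(synset)
        for rel in RELATIONS:
            for neighbor in getattr(synset, rel)():
                queue.append((neighbor, path + [neighbor]))
    return None

print(lexical_chain("beat", "knock_down"))
```

Even when such a chain exists, the example above shows that it says nothing about whether the arguments of the two verbs line up, which is the problem the paper addresses.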
    <Paragraph position="2"> In TREC 2005, Ahn et al. (2005) used Discourse Representation Structures (DRS), resembling logic forms and semantic relations, to represent questions and answers, and then computed a score "indicating how well DRSs match each other". Moldovan and Rus (2001) transformed the question and the candidate answers into logic forms and used a logic prover to determine whether the candidate answer logic form (ALF) entails the question logic form (QLF).</Paragraph>
    <Paragraph position="3"> Continuing this work, Moldovan et al. (2003) built a logic prover for Question Answering. If a proof fails, the prover applies a relaxation module iteratively, at the price of decreasing the proof score. The prover was later improved with temporal context detection (Moldovan et al., 2005).</Paragraph>
    <Paragraph position="4"> All these approaches address verb lexical variations only superficially. Similar meanings can be expressed with different verbs that take the same arguments in different positions. For example, the sentence John bought a cowboy hat for $50 can be reformulated as John paid $50 for a cowboy hat.</Paragraph>
    <Paragraph position="5"> The verb buy entails the verb pay; however, the arguments a cowboy hat and $50 occupy different positions around the two verbs.</Paragraph>
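WordNet encodes this particular lexical entailment directly between the first senses of buy and pay, while the role-to-position mismatch can be seen by comparing the two surface frames. A minimal sketch follows; the frame dictionaries are hypothetical illustrations, not data taken from VerbNet.

```python
# Minimal sketch of the buy/pay example: WordNet supplies the entailment,
# but the shared thematic roles map to different surface positions.
from nltk.corpus import wordnet as wn

buy = wn.synset("buy.v.01")
print(buy.entailments())  # expected to include Synset('pay.v.01')

# Hypothetical role-to-position maps (illustrative only):
# "John bought a cowboy hat for $50"  vs.  "John paid $50 for a cowboy hat"
buy_frame = {"Agent": "subject", "Theme": "direct object", "Asset": "for-PP"}
pay_frame = {"Agent": "subject", "Asset": "direct object", "Theme": "for-PP"}

swapped = {role for role in buy_frame if buy_frame[role] != pay_frame[role]}
print(swapped)  # {'Theme', 'Asset'}: same roles, different positions
```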
    <Paragraph position="6"> This paper describes an approach for propagating arguments from one verb to another along lexical chains derived from WordNet (Miller, 1995). The algorithm uses verb argument structures created from VerbNet syntactic patterns (Kipper et al., 2000b), as sketched below.</Paragraph>
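A rough sketch of the propagation idea, assuming heavily simplified VerbNet-style frames (an ordered list of thematic roles per verb). The function and frame names are hypothetical; the paper's actual argument structures and propagation rules are richer.

```python
# Sketch of argument propagation between two verbs that share thematic roles
# (simplified VerbNet-style frames; illustrative only, not the paper's algorithm).

def propagate_arguments(source_args, target_frame):
    """Map the role->filler pairs extracted for the source verb onto the
    target verb's frame, which lists its roles in surface order."""
    return [(role, source_args.get(role)) for role in target_frame]

# Arguments extracted from "John bought a cowboy hat for $50"
buy_args = {"Agent": "John", "Theme": "a cowboy hat", "Asset": "$50"}
# Hypothetical frame for "pay": Agent V Asset for Theme
pay_frame = ["Agent", "Asset", "Theme"]

print(propagate_arguments(buy_args, pay_frame))
# [('Agent', 'John'), ('Asset', '$50'), ('Theme', 'a cowboy hat')]
```

Once the arguments of the answer verb are propagated into the question verb's frame in this way, checking the argument-match constraint reduces to comparing fillers position by position.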
    <Paragraph position="7"> Section 2 presents the VerbNet syntactic patterns and the machine learning approach used to increase the coverage of verb senses. Section 3 describes the algorithms for propagating verb arguments. Section 4 presents the results, and Section 5 draws the conclusions.</Paragraph>
  </Section>
</Paper>