<?xml version="1.0" standalone="yes"?>
<Paper uid="W05-1205">
  <Title>Recognizing Paraphrases and Textual Entailment using Inversion Transduction Grammars</Title>
  <Section position="8" start_page="29" end_page="29" type="concl">
    <SectionTitle>
6 Conclusion
</SectionTitle>
    <Paragraph position="0"> The most serious omission in our experiments with Bracketing ITG models was the absence of any thesaurus model, allowing zero lexical variation between the two strings of a candidate paraphrase pair (or Text and Hypothesis, in the case of textual entailment recognition).</Paragraph>
    <Paragraph position="1"> This forced the models to rely entirely on the Bracketing ITG's inherent tendency to optimize structural match between hypothesized nested argument-head substructures.</Paragraph>
    <Paragraph position="2"> What we find highly interesting is the perhaps surprisingly large effect obtainable from this structure matching bias alone, which already produces good results on paraphrasing as well as a number of the RTE subsets.</Paragraph>
    <Paragraph position="3"> We plan to remedy the absence of a thesaurus as the obvious next step. This can be expected to raise performance significantly on all subsets.</Paragraph>
    <Paragraph position="4"> Wu and Fung (2005) also discuss how to obtain any desired tradeoff between precision and recall. This would be another interesting direction to pursue in the context of recognizing paraphrases or textual entailment.</Paragraph>
    <Paragraph position="5"> Finally, using the development sets to train the parameters of the Bracketing ITG model would improve performance. It would only be feasible to tune a few basic parameters, however, given the small size of the development sets.</Paragraph>
  </Section>
class="xml-element"></Paper>