<?xml version="1.0" standalone="yes"?>
<Paper uid="W00-0601">
  <Title>Reading Comprehension Programs in a Statistical-Language-Processing Class*</Title>
  <Section position="6" start_page="4" end_page="4" type="concl">
    <SectionTitle>
4 Conclusion
</SectionTitle>
    <Paragraph position="0"> We have briefly discussed several reading comprehension systems that are able to improve on the results of \[3\]. While these are positive results, many of the lessons learned in this exercise are more negative. In particular, while the NE data clearly helped a few percent, most of the extra syntactic and semantic annotations (i.e., parsing and coreference) were either of very small utility, or their utility came about in idiosyncratic ways. For example, probably the biggest impact of the parsing data was that it allowed people to experiment with the bag-of-verbs technique. Also, the parse trees served as the language for describing very question specific techniques, such as the ones for &amp;quot;where&amp;quot; questions presented in the previous section.</Paragraph>
    <Paragraph position="1"> Thus our tentative conclusion is that we are still not at a point that a task like children's reading comprehension tests is a good testing ground for NLP techniques. To the extent that these standard techniques are useful, it seems to be only in conjunction with other methods that are more directly aimed at the task.</Paragraph>
    <Paragraph position="2"> Of course, this is not to say that someone else will not come up with better syntactic/semantic annotations that more directly lead to improvements on such tests. We can only say that so far we have not been able to do so.</Paragraph>
  </Section>
class="xml-element"></Paper>