<?xml version="1.0" standalone="yes"?>
<Paper uid="C96-1061">
  <Title>Using Discourse Predictions for Ambiguity Resolution</Title>
  <Section position="6" start_page="361" end_page="361" type="evalu">
    <SectionTitle>
5 Evaluation
</SectionTitle>
    <Paragraph position="0"> Both combination methods, the genetic programming approach and the neural net approach, were trained on a set of 15 Spanish scheduling dialogues. They were both tested on a set of five previously unseen dialogues. Only sentences with multiple ILTs, at least one of which was correct, were used as training and testing data. Altogether 115 sentences were used for training and 76 for testing.</Paragraph>
    <Paragraph position="1"> We evaluated the performance of our two methods by comparing them to two non context-based ones: a baseline method of selecting a parse randomly, and a Statistical Parse Disambiguation method. The Statistical Parse Disambiguation method makes use of the three non context-based scores described in Section 3. The two context-based approaches combine the three non context-based scores as well as the three context-based scores, namely the focusing flag, the focusing score, and the graded constraint score.</Paragraph>
    <Paragraph position="2"> Table 2 reports the percentages of ambiguous sentences correctly disambiguated by each method. We present two types of performance statistics on the testing set: without cumulative error Testing without CE and with cumulative error Testing with CE. Cumulative error builds up when an incorrect hypothesis is chosen and incorporated into the discourse context, causing future predictions based on discourse context to be inaccurate. Notice that for the two non context-based approaches, the performance figures for</Paragraph>
  </Section>
  <Section position="7" start_page="361" end_page="362" type="evalu">
    <SectionTitle>
6 Conclusions
</SectionTitle>
    <Paragraph position="0"> In this article we have discussed how we apply predictions from our plan-based discourse processor to the problem of disambiguation. Our evaluation demonstrates the advantage of incorporating context-based predictions into a purely non context-based approach. While our results indicate that we have not solved the whole problem of combining non context- and context-based predictions for disambiguation, they show that the discourse processor is making usefld predictions and that we have combined this information successflllly with the non context-based predictors.</Paragraph>
    <Paragraph position="1"> Our current efforts are aimed at solving the cumulative error problem in using discourse context.</Paragraph>
    <Paragraph position="2"> We noticed that cumulative error is especially a problem in spontaneous speech systems where unexpected inpnt, disfluencies, out-of-domain sentences and missing information cause the deterio:ration of the quality of context. One possibility is to reassess and reestablish the context state when a conflict is detected between context and other predictions. A second proposal is to keep the n-best hypotheses and to choose one only after having processed a sequence of inputs. Preliminary experiments show that both t)roposals help reduce the adverse effect of the cumulative error problem.</Paragraph>
    <Paragraph position="3"> Our results also suggest another possible avenue of future development. Instead of trying to learn a general function for combining various information sources, we could decide which source of information to trust in a particular case and classify  the type of ambiguity at ti;md with the best ap1)ro~tch for thL&lt;s ~mil)iguity. This could be ace, omplished, for exa3nl)h; , with a decision tree le~trning</Paragraph>
  </Section>
class="xml-element"></Paper>