
<?xml version="1.0" standalone="yes"?>
<Paper uid="W02-0213">
  <Title>Dialogue Act Recognition with Bayesian Networks for Dutch Dialogues</Title>
  <Section position="6" start_page="0" end_page="0" type="evalu">
    <SectionTitle>
4.2 Results and evaluation
</SectionTitle>
    <Paragraph position="0"> In this experiment the data are used for learning both structure and conditional probabilities of a Bayesian network. We have used an implementation of the K2 algorithm (Cooper and Herskovits, 1992) to generate the network structure and then - like in the SCHISMA experiment - used MAP to assess the conditional probability distributions.</Paragraph>
    <Paragraph position="1"> Starting from the small corpus of navigation dialogues, a procedure has been planned to iteratively enlarge the corpus: given the annotated corpus, derive a network, use the network in a dialogue system, test the network and add these dialogues - with the corrected backward- and forward-looking functions - to the corpus. This results in a more extended set of annotated dialogues. And we start again. After each of the cycles we compare the results (in terms of accuracies) with the results of the previous cycle. This should give more insight in the usefulness of the features and values chosen for the Bayesian network. After deciding to adapt the set of features we automatically annotate the corpus; we derive a new network and we test again.</Paragraph>
    <Paragraph position="2"> The current corpus is too small to expect good results from a generated network, especially if the data are used for learning both the structure and the probability distributions. From the initial corpus of 81 utterances 75% was used for generating a Bayesian network. Testing on the remaining 25% resulted in accuracy of 57.1% for classifying the forward-looking function and 81.0% for classifying the backward-looking function. After this first cycle, new data have been generated interactively, following the procedure described above. The Bayesian network trained from this new data set resulted in the improved accuracies of 76.5% and 88.2% for classifying the forwardand backward-looking function respectively. Following this training and testing procedure, we hope to develop Bayesian networks with increasing performance.</Paragraph>
  </Section>
class="xml-element"></Paper>