<?xml version="1.0" standalone="yes"?>
<Paper uid="W05-0106">
  <Title>Making Hidden Markov Models More Transparent</Title>
  <Section position="5" start_page="34" end_page="35" type="concl">
    <SectionTitle>
4 Summary and future work
</SectionTitle>
    <Paragraph position="0"> Students (and researchers) need to understand HMMs. We have built a display that allow users to probe different aspects of an HMM and watch Viterbi in action. In addition, our system provides a display that allows users to contrast state sequence probabilities. To drive these displays, we have built a standard HMM system including parameter estimating and decoding and provide a part-of-speech model trained on UPenn Treebank data. The system can also read in models constructed by other systems. null This system was built during this year's offering of Introduction to Computational Linguistics at the University of Iowa. In the Spring of 2006 it will be deployed in the classroom for the first time. We plan on giving a demonstration of the system during a lecture on HMMs and part-of-speech tagging. A related problem set using the system will be assigned. The students will be given several mis-tagged sentences and asked to analyze the errors and report on precisely why they occurred. A survey will be administered at the end and improvements will be made to the system based on the feedback provided.</Paragraph>
    <Paragraph position="1"> In the future we plan to implement Good-Turing smoothing and a method for dealing with unknown words. We also plan to provide an additional display that shows the traditional Viterbi lattice figure, i.e., observations listed left-to-right, possible states listed from top-to-bottom, and lines from left-to-right connecting states at observation index i with the previous states, i-1, that are part of the most likely state sequence to i. Finally, we would like to incorporate an additional display that will provide a visualization of EM HMM training. We will use (Eisner, 2002) as a starting point.</Paragraph>
  </Section>
class="xml-element"></Paper>