<?xml version="1.0" standalone="yes"?>
<Paper uid="P05-2004">
  <Title>Jointly Labeling Multiple Sequences: A Factorial HMM Approach</Title>
  <Section position="7" start_page="22" end_page="23" type="concl">
    <SectionTitle>
6 Conclusion and Future Work
</SectionTitle>
    <Paragraph position="0"> We have demonstrated that joint labeling with an FHMM can outperform the traditional approach of cascading tagging and chunking in NLP. The new Switching FHMM generalizes the FHMM by allowing dynamically changing generative models and is a promising approach for modeling the types of interactions between hidden state sequences.</Paragraph>
    <Paragraph position="1"> Three directions for future research are planned. First, we will augment the FHMM so that its accuracies are competitive with state-of-the-art taggers and chunkers. This includes adding word features to improve accuracy on OOV words, extending the context from bigram to trigram, and applying advanced smoothing techniques. Second, we plan to examine the Switching FHMM further, especially in terms of automatic construction of the a and b functions. A promising approach is to learn the mappings using decision trees or random forests, which have recently achieved good results on a similar problem in language modeling (Xu and Jelinek, 2004). Finally, we plan to integrate the tagger/chunker into an end-to-end system, such as a Factored Language Model (Bilmes and Kirchhoff, 2003), to measure the overall merit of joint labeling.</Paragraph>
  </Section>
</Paper>