<?xml version="1.0" standalone="yes"?>
<Paper uid="H05-1094">
  <Title>Composition of Conditional Random Fields for Transfer Learning</Title>
  <Section position="9" start_page="752" end_page="753" type="concl">
    <SectionTitle>
8 Conclusion
</SectionTitle>
    <Paragraph position="0"> In this paper we have shown that joint decoding improves transfer between interdependent NLP tasks, even when the old task is named-entity recognition, for which highly accurate systems exist. The rich features afforded by a  conditionalmodelallowthenewtasktoinfluencethepre- null dictionsoftheoldtask,aneffectthatisonlypossiblewith joint decoding.</Paragraph>
    <Paragraph position="1"> It is now common for researchers to publicly release trained models for standard tasks such as part-of-speech tagging, named-entity recognition, and parsing. This paperhasimplicationsforhowsuchstandardtoolsarepack- null aged. Our results suggest that off-the-shelf NLP tools will need not only to provide a single-best prediction, but alsotobeengineeredsothattheycaneasilycommunicate distributions over predictions to models for higher-level tasks.</Paragraph>
  </Section>
class="xml-element"></Paper>