<?xml version="1.0" standalone="yes"?>
<Paper uid="W03-0422">
  <Title>Learning a Perceptron-Based Named Entity Chunker via Online Recognition Feedback</Title>
  <Section position="1" start_page="0" end_page="0" type="abstr">
    <SectionTitle>1 Introduction</SectionTitle>
    <Paragraph position="0"> We present a novel approach for the problem of Named Entity Recognition and Classification (NERC), in the context of the CoNLL-2003 Shared Task.</Paragraph>
    <Paragraph position="1"> Our work is framed into the learning and inference paradigm for recognizing structures in Natural Language (Punyakanok and Roth, 2001; Carreras et al., 2002). We make use of several learned functions which, applied at local contexts, discriminatively select optimal partial structures. On the top of this local recognition, an inference layer explores the partial structures and builds the optimal global structure for the problem.</Paragraph>
    <Paragraph position="2"> For the NERC problem, the structures to be recognized are the named entity phrases (NE) of a sentence. First, we apply learning at word level to identify NE candidates by means of a Begin-Inside classification. Then, we make use of functions learned at phrase level --one for each NE category-- to discriminate among competing NEs.</Paragraph>
    <Paragraph position="3"> We propose a simple online learning algorithm for training all the involved functions together. Each function is modeled as a voted perceptron (Freund and Schapire, 1999). The learning strategy works online at sentence level. When visiting a sentence, the functions being learned are first used to recognize the NE phrases, and then updated according to the correctness of their solution. We analyze the dependencies among the involved perceptrons and a global solution in order to design a global update rule based on the recognition of namedentities, which reflects to each individual perceptron its committed errors from a global perspective.</Paragraph>
    <Paragraph position="4"> The learning approach presented here is closely related to -and inspired by- some recent works in the area of NLP and Machine Learning. Collins (2002) adapted the perceptron learning algorithm to tagging tasks, via sentence-based global feedback. Crammer and Singer (2003) presented an online topic-ranking algorithm involving several perceptrons and ranking-based update rules for training them.</Paragraph>
  </Section>
class="xml-element"></Paper>