<?xml version="1.0" standalone="yes"?>
<Paper uid="P05-1049">
<Title>Word Sense Disambiguation Using Label Propagation Based Semi-Supervised Learning</Title>
<Section position="7" start_page="401" end_page="401" type="concl">
<SectionTitle>
5 Conclusion
</SectionTitle>
<Paragraph position="0"> In this paper we have investigated a label propagation based semi-supervised learning algorithm for WSD, which fully realizes a global consistency assumption: similar examples should have similar labels. In the learning process, the labels of unlabeled examples are determined not only by nearby labeled examples, but also by nearby unlabeled examples.</Paragraph>
<Paragraph position="1"> Compared with semi-supervised WSD methods in the first and second categories, our corpus-based method does not need external resources such as WordNet, a bilingual lexicon, or aligned parallel corpora. Our analysis and experimental results demonstrate the potential of this cluster-assumption-based algorithm. It achieves better performance than SVM when only very few labeled examples are available, and its performance is also better than monolingual bootstrapping and comparable to bilingual bootstrapping. Finally, we suggest an entropy-based method to automatically identify a distance measure that can boost the performance of the LP algorithm on a given dataset.</Paragraph>
<Paragraph position="2"> It has been shown that the one-sense-per-discourse property can improve the performance of bootstrapping algorithms (Li and Li, 2004; Yarowsky, 1995). This heuristic can be integrated into the LP algorithm by setting the weight Wi,j = 1 if the i-th and j-th instances are in the same discourse.</Paragraph>
<Paragraph position="3"> In the future we may extend the evaluation of the LP algorithm and related cluster-assumption-based algorithms using more benchmark data for WSD. Another direction is to use feature clustering techniques to deal with the data sparseness and noisy feature problems.
Acknowledgements We would like to thank the anonymous reviewers for their helpful comments.</Paragraph>
<Paragraph position="4"> Z.Y. Niu is supported by an A*STAR Graduate Scholarship.</Paragraph>
</Section>
</Paper>