File Information
File: 05-lr/acl_arc_1_sum/cleansed_text/xml_by_section/concl/06/w06-0506_concl.xml
Size: 1,885 bytes
Last Modified: 2025-10-06 13:55:30
<?xml version="1.0" standalone="yes"?>
<Paper uid="W06-0506">
<Title>Taxonomy Learning using Term Specificity and Similarity</Title>
<Section position="12" start_page="46" end_page="47" type="concl">
<SectionTitle>Conclusion</SectionTitle>
<Paragraph position="0"> [Figure: recall and F-measure of LC and other similarity measures] We generated four taxonomies (T_coldoc, T_CSM, T_FCA, and T_spec/sim) using four taxonomy learning methods: the term co-occurrence method, the CSM method, the FCA method, and our method. We applied Spec_in/adj for specificity measurement and Sim_in/varg for similarity calculation, because they showed the highest F-measure. In our method, the single most probable term was selected as the hypernym of each newly inserted term at every learning step. Figure 6 shows how the lexical recall, precision, and F-measure of the four methods vary as the threshold changes; the thresholds of the methods are not directly comparable, since each represents different information. T_spec/sim showed the highest lexical recall. Lexical recall is tightly related to the recall of the similarity measures, and Sim_in/varg showed the highest recall among them. T_FCA and T_CSM showed higher precision than the other taxonomies. We assume that the precision of a taxonomy depends on the precision of its specificity measure and the CC of its similarity measure; in fact, Sim_varg showed the most plausible CC curve, and Spec_adj showed the highest precision among the specificity measures. The verb-argument relation and the adjective-term relation are used in the FCA and CSM methods, respectively. T_spec/sim and T_coldoc showed higher F-measure curves than the other two taxonomies, owing to their high lexical recall. Although our method showed a plausible F-measure, it also showed the lowest precision, so other combinations of similarity and specificity measures are needed to improve the precision of the learned taxonomy.</Paragraph>
</Section>
</Paper>
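
The insertion step described in the conclusion (attaching each new term under its single most probable hypernym) can be made concrete. The following is a minimal sketch, not the paper's implementation: spec() and sim() are placeholders standing in for the Spec_in/adj and Sim_in/varg measures defined earlier in the paper, and the dict-based taxonomy representation, the specificity-based candidate filter, and the threshold test are illustrative assumptions.

    # Sketch of the greedy insertion step: each new term is attached under
    # the single most probable hypernym already present in the taxonomy.
    # spec() and sim() are hypothetical stand-ins for the paper's
    # Spec_in/adj and Sim_in/varg measures.

    def insert_term(taxonomy, term, spec, sim, threshold):
        """Attach `term` below its most probable hypernym, or skip it.

        taxonomy  -- dict mapping each term to its hypernym (None = root)
        spec(t)   -- specificity score of term t (higher = more specific)
        sim(a, b) -- similarity between terms a and b
        threshold -- minimum similarity required to insert the term at all
        """
        # Candidate hypernyms must already be in the taxonomy and be less
        # specific (i.e., more general) than the new term.
        candidates = [t for t in taxonomy if spec(t) < spec(term)]
        if not candidates:
            return False
        best = max(candidates, key=lambda t: sim(term, t))
        if sim(term, best) < threshold:
            return False          # below threshold: leave the term out
        taxonomy[term] = best     # single most probable hypernym
        return True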
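The metrics plotted in Figure 6 can be sketched the same way. The definitions below follow common usage in taxonomy-learning evaluation (term-set overlap for lexical recall, is-a-pair overlap for precision, and their harmonic mean for F-measure); they are an assumption here, since the paper's exact formulas are given in its evaluation section.

    # Sketch of the evaluation behind Figure 6, under the usual definitions:
    # lexical recall compares the *term sets*, precision compares the
    # *is-a pairs*, and F-measure is their harmonic mean.

    def evaluate(learned_pairs, gold_pairs):
        """learned_pairs, gold_pairs: non-empty sets of (hyponym, hypernym) tuples."""
        learned_terms = {t for pair in learned_pairs for t in pair}
        gold_terms = {t for pair in gold_pairs for t in pair}

        lexical_recall = len(learned_terms & gold_terms) / len(gold_terms)
        precision = len(learned_pairs & gold_pairs) / len(learned_pairs)
        if precision + lexical_recall == 0:
            return lexical_recall, precision, 0.0
        f_measure = 2 * precision * lexical_recall / (precision + lexical_recall)
        return lexical_recall, precision, f_measure

Sweeping the threshold in the insertion sketch and re-running evaluate() on each resulting taxonomy yields curves of the kind shown in Figure 6, which is why a high-lexical-recall method can post a strong F-measure even with the lowest precision.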