<?xml version="1.0" standalone="yes"?>
<Paper uid="P06-2110">
<Title>Word Vectors and Two Kinds of Similarity</Title>
<Section position="9" start_page="863" end_page="864" type="concl">
<SectionTitle>
7 Conclusion
</SectionTitle>
<Paragraph position="0"> Through two simulation experiments, we obtained the following findings: * The dictionary-based word vectors better reflect the knowledge of taxonomic similarity, while the LSA-based and cooccurrence-based word vectors better reflect the knowledge of associative similarity. In particular, the cooccurrence-based vectors are useful for representing associative similarity.</Paragraph>
<Paragraph position="1"> * The dictionary-based vectors yielded better performance on synonym judgment, whereas the LSA-based vectors performed better on antonym judgment.</Paragraph>
<Paragraph position="2"> * The three kinds of word vectors showed distinctive patterns in the relationship between the number of dimensions of the vectors and their performance.</Paragraph>
<Paragraph position="3"> We are now extending this work to examine in more detail the relationship between various kinds of word vectors and the quality of word similarity they capture. An interesting direction for further work would be to develop a method for extracting the knowledge of a specific similarity from a word space, e.g., extracting the knowledge of taxonomic similarity from the dictionary-based word space. Vector negation (Widdows, 2003) may be a useful technique for this purpose. We are also interested in a method for combining different word spaces into one space, e.g., merging the dictionary-based and LSA-based spaces into one coherent word space. Additionally, we are trying to simulate cognitive processes such as metaphor comprehension (Utsumi, 2006).</Paragraph>
</Section>
</Paper>
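The vector negation technique cited in the conclusion (Widdows, 2003) removes from a word vector `a` its component along another vector `b`, leaving a vector orthogonal to `b`. The following is a minimal illustrative sketch in pure Python, not the authors' implementation; the function names `dot` and `negate` are placeholders of my own choosing.

```python
# Sketch of vector negation (Widdows, 2003): "a NOT b" subtracts from a
# its projection onto b, so the result is orthogonal to b.
# Vectors are plain lists of floats for self-containedness.

def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(x * y for x, y in zip(u, v))

def negate(a, b):
    """Return a - (a.b / b.b) * b, the component of a orthogonal to b."""
    coeff = dot(a, b) / dot(b, b)
    return [x - coeff * y for x, y in zip(a, b)]

a = [1.0, 2.0, 3.0]
b = [0.0, 1.0, 0.0]
print(negate(a, b))  # -> [1.0, 0.0, 3.0], orthogonal to b
```

In a word space, `b` might be a sense-specific or taxonomy-specific direction, so negation could in principle filter one kind of similarity out of a vector, which is why the authors flag it as a candidate technique.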