File Information
File: 05-lr/acl_arc_1_sum/cleansed_text/xml_by_section/metho/97/w97-0803_metho.xml
Size: 17,594 bytes
Last Modified: 2025-10-06 14:14:44
<?xml version="1.0" standalone="yes"?> <Paper uid="W97-0803"> <Title>Extending a thesaurus by classifying words</Title> <Section position="5" start_page="16" end_page="16" type="metho"> <SectionTitle> 2 Core thesaurus </SectionTitle> <Paragraph position="0"> Bunruigoihy$ (BGH for short) \[Hayashi, 1966\] is a typical Japanese thesaurus, which has been used for much NLP research on Japanese. BGH includes 87,743 words, each of which is assigned an 8 digit class code. Some words are assigned more than one class code. The coding system of BGH has a hierarchical structure, that is, the first digit represents the part(s) of speech of the word (1: noun, 2:verb, 3: adjective, 4: others), and the second digit classifies words sharing the same first digit and so on. Thus BGH can be considered as four trees, each of which has 8 levels in depth (see figure 1), with each leaf as a set of words.</Paragraph> <Paragraph position="1"> This paper focuses on classifying only nouns in terms of a class code based on the first 5 digits, namely, up to the fifth level of the noun tree. Table 1 shows the number of words (#words) and the number of 5 digit class codes (#classes) with respect to each part of speech.</Paragraph> </Section> <Section position="6" start_page="16" end_page="16" type="metho"> <SectionTitle> 3 Co-occurrence data </SectionTitle> <Paragraph position="0"> Appropriate word classes for a new word are identified based on the probability that the word belongs to different word classes. This probability is calculated based on co-occurrences of nouns and verbs. The co-occurrences were extracted from the RWC text base RWC-DB-TEXT-95-1 \[Real World Computing Partnership, 1995\]. This text base consists of 4 years worth of Mainiti Shimbun \[Mainichi Shimbun, 1991-1994\] newspaper articles, which have been automatically annotated with morphological tags. The total number of morphemes is about 100 million. Instead of conducting full parsing on the texts, several heuristics were used in order to obtain dependencies between nouns and verbs in the form of tuples (frequency, noun, postposition, verb).</Paragraph> <Paragraph position="1"> Among these tuples, only those which include the post-position &quot;WO&quot; (typically marking accusative case) were used. Further, tuples containing nouns in BGH were selected. In the case of a compound noun, the noun was transformed into the maximal leftmost string contained in BGH 1. As a result, 419,132 tuples remained including 23,223 noun types and 9,151 verb types. These were used in the experiments described in section 5.</Paragraph> </Section> <Section position="7" start_page="16" end_page="17" type="metho"> <SectionTitle> 4 Identifying appropriate word classes </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="16" end_page="17" type="sub_section"> <SectionTitle> 4.1 Probabilistic model </SectionTitle> <Paragraph position="0"> The probabilistic model used in this paper is the SVMV model \[Iwayama and Tokunaga, 1994\]. This model was originally developed for document categorization, in which a new document is classified into certain predefined categories. For the purposes of this paper, a new word (noun) not appearing in the thesaurus is treated as a new document, and a word class in the thesaurus corresponds to a predefined document category. Each noun is represented by a set of verbs co-occurring with that noun. The probability P(c, Iw) is calculated for each word class c,, and the proper classes for a word w are determined based on it. 
<Paragraph position="1"> The SVMV model formalizes the probability P(c|w) as follows. Conditioning P(c|w) on each possible event V = v_i gives
P(c|w) = \sum_i P(c|w, V = v_i) P(V = v_i|w). (1)</Paragraph>
<Paragraph position="2"> Assuming conditional independence between c and V = v_i given w, that is, P(c|w, V = v_i) = P(c|V = v_i), we obtain
P(c|w) = \sum_i P(c|V = v_i) P(V = v_i|w). (2)</Paragraph>
<Paragraph position="3"> Using Bayes' theorem, this becomes
P(c|w) = \sum_i \frac{P(V = v_i|c) P(c)}{P(V = v_i)} P(V = v_i|w). (3)</Paragraph>
<Paragraph position="4"> All the probabilities in (3) can be estimated from training data based on the following equations. In the following, fr(w, v) denotes the frequency with which a noun w and a verb v co-occur.</Paragraph>
<Paragraph position="5"> P(V = v_i|c) is the probability that a randomly extracted verb co-occurring with a noun is v_i, given that the noun belongs to word class c. This is estimated from the relative frequency of v_i co-occurring with the nouns in word class c, namely,
P(V = v_i|c) = \frac{\sum_{w \in c} fr(w, v_i)}{\sum_{w \in c} \sum_j fr(w, v_j)}. (4)</Paragraph>
<Paragraph position="6"> P(V = v_i|w) is the probability that a randomly extracted verb co-occurring with noun w is v_i. This is estimated from the relative frequency of v_i co-occurring with noun w, namely,
P(V = v_i|w) = \frac{fr(w, v_i)}{\sum_j fr(w, v_j)}. (5)</Paragraph>
<Paragraph position="7"> P(V = v_i) is the probability that a randomly extracted verb co-occurring with a randomly selected noun is v_i. This is estimated from the relative frequency of v_i in the whole training data, namely,
P(V = v_i) = \frac{\sum_w fr(w, v_i)}{\sum_w \sum_j fr(w, v_j)}. (6)</Paragraph>
<Paragraph position="8"> P(c) is the prior probability that a randomly selected noun belongs to c. This is estimated from the relative frequency of verbs co-occurring with any noun in class c, namely,
P(c) = \frac{\sum_{w \in c} \sum_j fr(w, v_j)}{\sum_w \sum_j fr(w, v_j)}. (7)
(This estimation may seem counterintuitive; a more straightforward calculation would be one based on the relative frequency of words belonging to class c. However, the given estimation is necessary in order to normalize the sum of the probabilities P(c|w) to one.)</Paragraph> </Section>
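To make the estimation concrete, the following minimal sketch computes P(c|w) from raw counts fr(w, v) according to equations (3)-(7). It assumes word classes are given as a mapping from class codes to their member nouns; the identifiers are illustrative and this is not the authors' implementation.

from collections import defaultdict

def class_probabilities(target_profile, class_members, profiles):
    # target_profile : {verb: count} for the target noun w
    # class_members  : {class_code: set of training nouns belonging to that class}
    # profiles       : {noun: {verb: count}} for the training nouns
    verb_total = defaultdict(int)           # sum_w fr(w, v)
    grand_total = 0                         # sum_w sum_j fr(w, v_j)
    for prof in profiles.values():
        for v, n in prof.items():
            verb_total[v] += n
            grand_total += n
    w_total = sum(target_profile.values())  # sum_j fr(w, v_j)
    scores = {}
    for code, members in class_members.items():
        class_verb = defaultdict(int)       # sum_{w in c} fr(w, v)
        class_total = 0                     # sum_{w in c} sum_j fr(w, v_j)
        for m in members:
            for v, n in profiles.get(m, {}).items():
                class_verb[v] += n
                class_total += n
        if class_total == 0 or w_total == 0:
            scores[code] = 0.0
            continue
        p_c = class_total / grand_total     # equation (7)
        p = 0.0
        for v, n in target_profile.items():
            if verb_total[v] == 0:
                continue                    # verb unseen in the training data
            p_v_given_c = class_verb[v] / class_total   # equation (4)
            p_v_given_w = n / w_total                   # equation (5)
            p_v = verb_total[v] / grand_total           # equation (6)
            p += p_v_given_c * p_c / p_v * p_v_given_w  # summand of equation (3)
        scores[code] = p
    return scores

In practice the per-class counts would be precomputed once over the training data rather than recomputed inside the loop over classes; the loop form above simply mirrors the equations.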
<Section position="2" start_page="17" end_page="17" type="sub_section"> <SectionTitle> 4.2 Searching through the thesaurus </SectionTitle>
<Paragraph position="0"> As is suggested by the fact that we employ a probabilistic model developed for document categorization, classifying words in a thesaurus is basically the same task as document categorization. (As Uramoto mentioned, the task is also similar to word sense disambiguation, except for the size of the search space [Uramoto, 1996].) Document categorization strategies can be summarized according to the following three types [Iwayama and Tokunaga, 1995]:</Paragraph>
<Paragraph position="1"> * the k-nearest neighbor (k-nn) or memory-based reasoning (MBR) approach, * the category-based approach, and * the cluster-based approach. The k-nn approach searches for the k documents most similar to a target document in the training data, and assigns the category with the highest distribution among those k documents [Weiss and Kulikowski, 1991]. Although the k-nn approach has been promising for document categorization [Masand et al., 1992], it requires significant computational resources to calculate the similarity between a target document and every document in the training data.</Paragraph>
<Paragraph position="2"> In order to overcome this drawback of the k-nn approach, the category-based approach first makes a cluster for each category, consisting of the documents assigned that category, and then calculates the similarity between a target document and each of these document clusters. The number of similarity calculations is thereby reduced to the number of clusters (categories), saving computational resources.</Paragraph>
<Paragraph position="3"> Another alternative is the cluster-based approach, which first constructs clusters from the training data by using some clustering algorithm, and then calculates similarities between a target document and those clusters. The main difference between the category-based and cluster-based approaches resides in the cluster construction: the former uses the categories which have been assigned to documents when constructing clusters, while the latter does not. In addition, the clusters are structured in a tree when a hierarchical clustering algorithm is used for the latter approach. In this case, one can adopt a top-down tree search strategy for similar clusters, saving further computational overhead.</Paragraph>
<Paragraph position="4"> In this paper, all these approaches are evaluated for word classification, in which a target document corresponds to a target word and a document category corresponds to a thesaurus class code.</Paragraph> </Section> </Section>
<Section position="8" start_page="17" end_page="401" type="metho"> <SectionTitle> 5 Experiments </SectionTitle>
<Paragraph position="0"> In our experiments, the 23,223 nouns described in section 3 were classified in terms of the core thesaurus, BGH, using the three search strategies described in the previous section. Classification was conducted for each strategy as follows.</Paragraph>
<Paragraph position="1"> k-nn Each noun is considered as a singleton cluster, and the probability that a target noun is classified into each of the non-target noun clusters is calculated.</Paragraph>
<Paragraph position="2"> category-based 10-fold cross validation was conducted for the category-based and cluster-based strategies: the 23,223 nouns were randomly divided into 10 groups, and one group of nouns was used as test data while the rest were used for training. The test group was rotated 10 times, so that every noun was used as a test case; the results were averaged over these 10 trials. Each noun in the training data was categorized according to its BGH 5 digit class code, generating 544 category clusters (see Table 1). The probability of each noun in the test data being classified into each of these 544 clusters was calculated.</Paragraph>
<Paragraph position="3"> cluster-based In the case of the category-based approach, each noun in the training data was categorized into the leaf clusters of the BGH tree, that is, the 5 digit class categories (note that we ignore lower digits, so that &quot;leaf&quot; here means the categories formed by the 5 digit class codes). For the cluster-based approach, the nouns were also categorized into the intermediate class categories, that is, the 2 to 4 digit class categories. Since we use the BGH hierarchy structure instead of constructing a cluster hierarchy from scratch, in a strict sense this does not coincide with the cluster-based approach as described in the previous section. However, searching through the BGH tree structure in a top-down manner still enables us to save greatly on computational resources. A simple top-down search, in which the cluster with the highest probability is followed at each level, allows only one path leading to a single leaf (5 digit class code). In order to take into account multiple word senses, we followed several paths at the same time. More precisely, the difference between the probability of each cluster and the highest probability value for that level was calculated, and clusters for which the difference was within a certain threshold were kept as candidate paths. The threshold was set to 0.2 in these experiments.</Paragraph>
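The multi-path top-down search just described can be sketched as follows; the tree representation and identifiers are assumptions for illustration, and prob(c) stands for P(c|w) computed as in section 4.1.

def beam_search(root, prob, threshold=0.2):
    # Follow several paths down the BGH tree at once: at each level, keep every
    # cluster whose probability is within `threshold` of the best cluster at that level.
    frontier = [root]
    leaves = []
    while frontier:
        candidates = []                       # clusters at the next level down
        for node in frontier:
            if not node.children:             # a leaf, i.e. a 5 digit class code
                leaves.append(node.code)
            else:
                candidates.extend(node.children)
        if not candidates:
            break
        scored = [(prob(c), c) for c in candidates]
        best = max(score for score, _ in scored)
        frontier = [c for score, c in scored if best - score <= threshold]
    return leaves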
<Paragraph position="4"> The performance of each approach was evaluated on the basis of the number of correctly assigned class codes. Tables 2 to 4 show the results for each approach. Columns show the maximum number of class codes assigned to each target word; for example, the column &quot;10&quot; means that a target word is assigned up to 10 class codes. If the correct class code is contained in these assigned codes, the test case is considered to have been assigned the correct code. Rows show the distribution of words on the basis of their occurrence frequencies in the training data. Each value in the table is the number of correct cases, with its percentage in parentheses.</Paragraph>
<Paragraph position="5"> Table 2: Results for the k-nn approach.</Paragraph>
<Paragraph position="6"> Table 3: Results for the category-based approach. (Total row: 5,638 (24.3), 7,058 (30.4), 8,203 (35.3), 8,692 (37.4), out of 23,223.)</Paragraph> </Section>
<Section position="9" start_page="401" end_page="401" type="metho"> <SectionTitle> 6 Discussion </SectionTitle>
<Paragraph position="0"> Overall, the category-based approach shows the best performance, followed by the cluster-based approach; k-nn shows the worst performance. This result contradicts past research [Iwayama and Tokunaga, 1995; Masand et al., 1992]. One possible explanation for this contradiction may be that the classification basis of BGH and that of our probabilistic model are very different. In other words, co-occurrences with verbs may not have captured the classification basis of BGH very well.</Paragraph>
<Paragraph position="1"> The performance of k-nn is noticeably worse than that of the others for low-frequency words. This may be due to data sparseness; generalizing individual nouns by constructing clusters remedies this problem.</Paragraph>
<Paragraph position="2"> When k is small, namely when only categories with high probabilities are assigned, the category-based and cluster-based approaches show comparable performance.</Paragraph>
<Paragraph position="3"> When k becomes bigger, however, the category-based approach becomes superior. Since a beam search was adopted for the cluster-based approach, there was a possibility of failing to follow the correct path.</Paragraph> </Section>
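For reference, the correctness criterion used in Tables 2 to 4 (section 5) amounts to the following check; the identifiers are illustrative assumptions rather than the evaluation code actually used.

def correct_at_k(ranked_codes, gold_codes, k):
    # A test noun counts as correct when one of its gold 5 digit BGH codes
    # appears among the k highest-ranked codes assigned by a method.
    return any(code in gold_codes for code in ranked_codes[:k])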
<Section position="10" start_page="401" end_page="401" type="metho"> <SectionTitle> 7 Related work </SectionTitle>
<Paragraph position="0"> The goal of this paper is the same as that of Uramoto [Uramoto, 1996], that is, identifying appropriate word classes for an unknown word in terms of an existing thesaurus. The significant differences between Uramoto's research and ours can be summarized as follows.</Paragraph>
<Paragraph position="1"> * The core thesaurus is different. Uramoto used ISAMAP [Tanaka and Nisina, 1987], which contains about 4,000 words.</Paragraph>
<Paragraph position="2"> * We adopted a probabilistic model, which has a sounder foundation than Uramoto's. He used several factors, such as the similarity between a target word and the words in each class, class levels, and so forth. These factors are combined into a score by calculating their weighted sum, where the weight for each factor is determined by using held-out data.</Paragraph>
<Paragraph position="3"> * We restricted our co-occurrence data to tuples including the &quot;WO&quot; postposition, which typically marks the accusative case, while Uramoto used several grammatical relations in tandem. There are claims that words behave differently depending on their grammatical role, and that they should therefore be classified into different word classes when the role is different [Tokunaga et al., 1995]. This viewpoint should be taken into account when a thesaurus is constructed from scratch. In our case, however, since we assume a core thesaurus, there is room for argument as to whether we should follow this claim. Further investigation on this point is needed.</Paragraph>
<Paragraph position="4"> * Our evaluation scheme is more rigid and based on a larger dataset. We conducted cross validation on nouns appearing in BGH, and the judgement of correctness was made automatically, while Uramoto used unknown words as test cases and decided correctness on a subjective basis. The number of his test cases was 250; ours was 23,223. The performance of his method was reported to be from 65% to 85% in accuracy, which seems better than ours. However, it is difficult to compare the two in an absolute sense, because both the evaluation data and the code assignment scheme are different: we identified class codes at the fifth level of BGH, while Uramoto searched for a set of class codes at various levels.</Paragraph>
<Paragraph position="5"> Nakano proposed a method of assigning BGH class codes to new words [Nakano, 1981]. His approach is very different from ours and Uramoto's; he utilized characteristics of the Japanese character classes. There are three character classes used in writing Japanese: Kanzi, Hiragana and Katakana. A Kanzi character is an ideogram and has, to a certain extent, a distinct stand-alone meaning, whereas Hiragana and Katakana characters are phonograms. Nakano first constructed a Kanzi meaning dictionary from BGH by extracting the words written with a single Kanzi character, defining the class codes of each Kanzi character to be the codes of the words consisting of only that Kanzi. He then assigned class codes to new words based on this Kanzi meaning dictionary. For example, if the class codes of Kanzi K1 and K2 are {c11, c12} and {c21, c22, c23} respectively, then a word including K1 and K2 is assigned the codes {c11, c12, c21, c22, c23}. We applied Nakano's method to the data used in section 5, obtaining an accuracy of 54.6% for 17,736 words; the average number of codes assigned was 5.75.</Paragraph>
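Nakano's procedure, as summarized above, can be sketched as follows: a Kanzi meaning dictionary is first built from the single-Kanzi entries of BGH, and a new word then receives the union of the codes of the Kanzi it contains. The identifiers and the character test are illustrative assumptions, not Nakano's implementation.

def build_kanzi_dictionary(bgh_entries, is_kanzi):
    # bgh_entries: iterable of (word, set_of_class_codes) pairs from BGH.
    # Single-Kanzi entries define the meaning (codes) of that character.
    kanzi_codes = {}
    for word, codes in bgh_entries:
        if len(word) == 1 and is_kanzi(word):
            kanzi_codes.setdefault(word, set()).update(codes)
    return kanzi_codes

def assign_codes(word, kanzi_codes):
    # A new word receives the union of the class codes of its Kanzi characters.
    assigned = set()
    for ch in word:
        assigned |= kanzi_codes.get(ch, set())
    return assigned

# A possible character test, using the CJK Unified Ideographs range:
# is_kanzi = lambda ch: "\u4e00" <= ch <= "\u9fff"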
<Paragraph position="6"> His method has several advantages over ours, such as: * no co-occurrence data is required, and * little computational overhead is required.</Paragraph>
<Paragraph position="7"> However, there are obvious limitations, such as: * it cannot handle words that include no Kanzi, * no ranking or preference over the assigned codes is obtained, and * it is not applicable to languages other than Japanese.</Paragraph>
<Paragraph position="8"> We investigated the overlap between the words that were assigned correct classes by our category-based method and by Nakano's method. The parameter k was set to 30 for our method. The number of words that were assigned correct classes by both methods was 5,995, which represents 46% of the words correctly classified by our method and 62% of the words correctly classified by Nakano's method. In other words, of the words correctly classified by one method, only about half can also be classified correctly by the other. This result suggests that the two methods are complementary to each other, rather than competitive, and that the overall performance can be improved by combining them.</Paragraph> </Section> </Paper>