<?xml version="1.0" standalone="yes"?> <Paper uid="P04-2005"> <Title>Automatic Acquisition of English Topic Signatures Based on a Second Language</Title> <Section position="3" start_page="0" end_page="0" type="metho"> <SectionTitle> 2 Acquisition of Topic Signatures </SectionTitle> <Paragraph position="0"> A topic signature is defined as: TS = f(t1;w1);:::;(ti;wi);:::g, where ti is a term highly correlated to a target topic (or concept) with association weight wi, which can be omitted. The steps we perform to produce the topic signatures are described below, and illustrated in Figure 1.</Paragraph> <Paragraph position="1"> 1. Translate an English ambiguous word w to Chinese, using an English-Chinese lexicon. Given the assumption we mentioned, each sense si of w maps to a distinct Chinese word1. At the end of this step, we have produced a set C, which consists of Chinese words fc1;c2;:::;cng, where ci is the translation corresponding to sense si of w, and n is the number of senses that 1. Chinese document 1 2. Chinese document 2 ... ...</Paragraph> <Paragraph position="2"> 1. {English topic signature 1} 2. {English topic signature 2} ... ...</Paragraph> <Paragraph position="3"> 1. {English topic signature 1} 2. {English topic signature 2} ... ...</Paragraph> <Paragraph position="4"> Figure 1:Process of automatic acquisition of topic signatures. For simplicity, we assume here that w has two senses. 3. Shallow process these Chinese corpora. Text segmentation and POS tagging are done in this step. 4. Either use an electronic Chinese-English lexicon to translate the Chinese corpora word by word to English, or use machine translation software to translate the whole text. In our experiments, we did the former. The complete process is automatic, and unsupervised. At the end of this process, for each sense si of an ambiguous word w, we have a large set of English contexts. Each context is a topic signature, which represents topical information that tends to co-occur with sense si. Note that an element in our topic signatures is not necessarily a single English word. It can be a set of English words which are translations of a Chinese word c. For example, the component of a topic signature, fvesture, clothing, clothesg, is translated from the Chinese word . Under the assumption that the majority of c's are unambiguous, which we discuss later, we refer to elements in a topic signature as concepts in this paper.</Paragraph> <Paragraph position="5"> Choosing an appropriate English-Chinese dictionary is the first problem we faced. The one we decided to use is the Yahoo! Student English-Chinese On-line Dictionary2. As this dictionary is designed for English learners, its sense granularity is far coarser-grained than that of Word-Net. However, researchers argue that the granularity of WordNet is too fine for many applications, and some also proposed new evaluation standards. For example, Resnik and Yarowsky (1999) sug2See: http://cn.yahoo.com/dictionary/ gested that for the purpose of WSD, the different senses of a word could be determined by considering only sense distinctions that are lexicalised cross-linguistically. Our approach is in accord with their proposal, since bilingual dictionaries interpret sense distinctions crossing two languages. For efficiency purposes, we extract our topic signatures mainly from the Mandarin portion of the Chinese Gigaword Corpus (CGC), produced by the LDC3, which contains 1:3GB of newswire text drawn from Xinhua newspaper. 
<Paragraph position="5"> Some Chinese translations of English word senses can be sparse, making it impossible to extract sufficient training data by relying on the CGC alone. In this situation, we can turn to the large amount of Chinese text on the Web. There are many good search engines and on-line databases supporting the Chinese language. After investigation, we chose People's Daily On-line, the website of People's Daily, one of the most influential newspapers in mainland China. It maintains a vast database of news stories that the public can search. Among other reasons, we chose this website because its articles have quality and coverage similar to those in the CGC, so that we could combine texts from these two resources to obtain a larger number of topic signatures. Note that we can always turn to other sources on the Web to retrieve even more data, if needed.</Paragraph>
<Paragraph position="6"> For Chinese text segmentation and POS tagging we adopted the freely available software package ICTCLAS. This system includes a word segmenter, a POS tagger and an unknown-word recogniser. The claimed precision of segmentation is 97.58%, evaluated on a 1.2M-word portion of the People's Daily Corpus.</Paragraph>
<Paragraph position="7"> To automatically translate the Chinese text back into English, we used the electronic LDC Chinese-English Translation Lexicon Version 3.0. An alternative was to use machine translation software, which would yield a rather different type of resource, but this is beyond the scope of this paper. Then, we filtered the topic signatures with a stop-word list, to ensure that only content words are included in our final results.</Paragraph>
<Paragraph position="8"> One might argue that, since many Chinese words are also ambiguous, a Chinese word may have more than one English translation, and thus translated concepts in topic signatures would still be ambiguous. This happens for some Chinese words, and will inevitably affect the performance of our system to some extent. A practical solution is to expand the queries with the different descriptions associated with each sense of w, normally provided in a bilingual dictionary, when retrieving the Chinese text. To get an idea of the baseline performance, we did not follow this solution in our experiments.</Paragraph>
<Paragraph position="9"> Topic signatures for the &quot;financial&quot; sense of &quot;interest&quot;:
M: 1. rate*; 2. bond*; 3. payment; 4. market*; 5. debt*; 6. dollar; 7. bank*; 8. year; 9. loan*; 10. income*; 11. company*; 12. inflation; 13. reserve; 14. government; 15. economy*; 16. stock*; 17. fund*; 18. week; 19. security; 20. level.
A: 1. {bank*}; 2. {loan*}; 3. {company*, firm, corporation}; 4. {rate*}; 5. {deposit}; 6. {income*, revenue}; 7. {fund*}; 8. {bonus, dividend}; 9. {investment}; 10. {market*}; 11. {tax, duty}; 12. {economy*}; 13. {debt*}; 14. {money}; 15. {saving}; 16. {profit}; 17. {bond*}; 18. {income*, earning}; 19. {share, stock*}; 20. {finance, banking}.
Table 1: A sample of our topic signatures. Signature M was extracted from a manually-sense-tagged corpus and A was produced by our algorithm. Words occurring in both A and M are marked with an asterisk.</Paragraph>
<Paragraph position="10"> The topic signatures we acquired contain rich topical information, but they do not provide any other type of linguistic knowledge. Since they were created by word-by-word translation, syntactic analysis of them is not possible. Even the distances between the target ambiguous word and its context words are not reliable, because of differences in word order between Chinese and English. Table 1 lists two sets of topic signatures, each containing the 20 most frequent nouns, ranked by occurrence count, that surround instances of the financial sense of interest. One set was extracted from a hand-tagged corpus (Bruce and Wiebe, 1994) and the other was produced by our algorithm.</Paragraph>
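For illustration, a ranked list like column A of Table 1 can be produced by a simple frequency count over the acquired contexts of one sense. The sketch below continues the toy code above; the tiny stop-word list and the function name are assumptions for illustration only, not the bookkeeping used in our experiments.

# Rank the concepts that co-occur with one sense of the target word, as in
# column A of Table 1. Concepts are hashed as frozensets of English words.
from collections import Counter

def rank_concepts(contexts, top_n=20, stop_words=frozenset({"the", "be", "have"})):
    """contexts: list of contexts, each a list of concepts (sets of words).
    Return the top_n concepts by occurrence count, after stop-word filtering."""
    counts = Counter()
    for context in contexts:
        for concept in context:
            filtered = frozenset(concept) - stop_words
            if filtered:                 # drop concepts made up of stop words only
                counts[filtered] += 1
    return counts.most_common(top_n)

# Toy usage with two contexts of the "financial" sense:
contexts = [[{"bank"}, {"deposit"}], [{"bank"}, {"loan"}, {"rate"}]]
print(rank_concepts(contexts))
# [(frozenset({'bank'}), 2), (frozenset({'deposit'}), 1), ...]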
</Section>
<Section position="4" start_page="0" end_page="2" type="metho"> <SectionTitle> 3 Application on WSD </SectionTitle>
<Paragraph position="0"> To evaluate the usefulness of the acquired topic signatures, we applied them in a WSD task. We adopted an algorithm similar to Schütze's (1998) context-group discrimination, which determines a word sense according to the semantic similarity of contexts, computed using a second-order co-occurrence vector model. In this section, we first introduce our adaptation of this algorithm, and then describe the disambiguation experiments on 6 words for which a gold standard is available.</Paragraph>
<Section position="1" start_page="0" end_page="2" type="sub_section"> <SectionTitle> 3.1 Context-Group Discrimination </SectionTitle>
<Paragraph position="0"> We chose the context-group discrimination algorithm because it disambiguates instances relying only on topical information, which happens to be what our topic signatures specialise in. (Using our topic signatures as training data, other classification algorithms would also work on this WSD task.) The original context-group discrimination is a disambiguation algorithm based on clustering. Words, contexts and senses are represented in Word Space, a high-dimensional, real-valued space in which closeness corresponds to semantic similarity. Similarity in Word Space is based on second-order co-occurrence: two tokens (or contexts) of the ambiguous word are assigned to the same sense cluster if the words they co-occur with themselves occur with similar words in a training corpus. The number of sense clusters determines the sense granularity.</Paragraph>
<Paragraph position="1"> In our adaptation of this algorithm, we omitted the clustering step, because our data have already been sense-classified according to the senses defined in the English-Chinese dictionary. In other words, our algorithm performs sense classification by using a bilingual lexicon, and the level of sense granularity of the lexicon determines the sense distinctions that our system can handle: a finer-grained lexicon would enable our system to identify finer-grained senses. Also, our adaptation represents senses in Concept Space, in contrast to the Word Space of the original algorithm. This is because our topic signatures are realised not in the form of words, but of concepts. For example, a topic signature may consist of {duty, tariff, customs duty}, which represents the concept of &quot;a government tax on imports or exports&quot;.</Paragraph>
<Paragraph position="2"> A vector for concept c is derived from all the close neighbours of c, where close neighbours refer to all concepts that co-occur with c in a context window. The size of the window is around 100 words. The entry for concept c' in the vector for c records the number of times that c' occurs close to c in the corpus. It is this representational vector space that we refer to as Concept Space.</Paragraph>
<Paragraph position="3"> In our experiments, we chose the concepts that serve as dimensions of Concept Space using a frequency cut-off. We count the occurrences of all concepts that co-occur with the ambiguous word within a context window, and the 2,500 most frequent concepts are chosen as the dimensions of the space. Thus, the Concept Space was formed by collecting an n-by-2,500 matrix M, such that element mij records the number of times that concepts i and j co-occur in a window, where n is the number of concept vectors that occur in the corpus. Row l of matrix M represents concept vector l.</Paragraph>
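A minimal sketch of this construction follows, with simplified data structures (concepts hashed as frozensets, sparse rows as Counters); the cut-off and window size are parameters defaulting to the values used here (2,500 concepts, roughly 100 words), and the function name is illustrative rather than part of the actual implementation.

# Build a toy Concept Space: choose the most frequent concepts as dimensions,
# then count concept-concept co-occurrences within a fixed-size window.
from collections import Counter, defaultdict

def build_concept_space(contexts, n_dims=2500, window=100):
    """contexts: list of concept sequences (one per training context).
    Return (dimensions, vectors), where vectors[c][c2] is the number of
    times dimension concept c2 occurs within `window` positions of c."""
    freq = Counter(c for ctx in contexts for c in ctx)
    dimensions = [c for c, _ in freq.most_common(n_dims)]
    dim_set = set(dimensions)
    vectors = defaultdict(Counter)
    for ctx in contexts:
        for i, c in enumerate(ctx):
            lo = max(0, i - window)
            for j, c2 in enumerate(ctx[lo:i + window + 1], start=lo):
                if j != i and c2 in dim_set:
                    vectors[c][c2] += 1     # one entry of the n-by-2,500 matrix
    return dimensions, vectors

# Toy usage:
bank, loan, rate = frozenset({"bank"}), frozenset({"loan"}), frozenset({"rate"})
dims, vecs = build_concept_space([[bank, loan, rate], [bank, rate]], n_dims=5, window=2)
print(vecs[bank])   # Counter({frozenset({'rate'}): 2, frozenset({'loan'}): 1})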
<Paragraph position="4"> We measure the similarity of two vectors by the cosine score:
cos(\vec{v}, \vec{w}) = \frac{\sum_{i=1}^{N} v_i w_i}{\sqrt{\sum_{i=1}^{N} v_i^2}\,\sqrt{\sum_{i=1}^{N} w_i^2}}
where ~v and ~w are vectors and N is the dimension of the vector space. The more overlap there is between the neighbours of the two words whose vectors are compared, the higher the score.</Paragraph>
<Paragraph position="7"> Contexts are represented as context vectors in Concept Space. A context vector is the sum of the vectors of the concepts that occur in a context window. If many of the concepts in a window have a strong component for one of the topics, then the sum of the vectors, the context vector, will also have a strong component for that topic. Hence, the context vector indicates the strength of the different topical or semantic components in a context.</Paragraph>
<Paragraph position="8"> Senses are represented as sense vectors in Concept Space. The vector of sense si is the sum of the vectors of the contexts in which the ambiguous word realises si. Since our topic signatures are classified naturally according to the definitions in a bilingual dictionary, calculation of the vector for sense si is fairly straightforward: simply sum all the vectors of the contexts associated with sense si.</Paragraph>
<Paragraph position="9"> After the training phase, we have obtained a sense vector ~vi for each sense si of an ambiguous word w. Then, we perform the following steps to tag an occurrence t of w: 1. Compute the context vector ~c for t in Concept Space by summing the vectors of the concepts in t's context. Since the basic units of the test data are words rather than concepts, we have to convert all words in the test data into concepts. A simple way to achieve this is to replace a word v with all the concepts that contain v. 2. Compute the cosine scores between all sense vectors of w and ~c, and assign t to the sense si whose sense vector ~vi is closest to ~c.</Paragraph>
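Continuing the sketch above, the following hypothetical functions mirror the training and tagging steps just described; representing vectors as sparse Counters and the helper names are simplifications for illustration, not the implementation used in our experiments.

# Sketch of training (sense vectors) and tagging (cosine-based assignment),
# built on the Concept Space vectors from the previous sketch.
from collections import Counter
from math import sqrt

def add_vectors(vectors):
    """Sum a list of sparse vectors."""
    total = Counter()
    for v in vectors:
        total.update(v)
    return total

def cosine(v, w):
    """Cosine score between two sparse vectors."""
    dot = sum(v[k] * w[k] for k in v.keys() & w.keys())
    norm = sqrt(sum(x * x for x in v.values())) * sqrt(sum(x * x for x in w.values()))
    return dot / norm if norm else 0.0

def context_vector(concepts, concept_vectors):
    """A context vector is the sum of the vectors of the concepts it contains."""
    return add_vectors([concept_vectors[c] for c in concepts if c in concept_vectors])

def train_sense_vectors(signatures, concept_vectors):
    """signatures: {sense: [context, ...]}, each context a list of concepts.
    A sense vector is the sum of the context vectors of that sense."""
    return {sense: add_vectors([context_vector(ctx, concept_vectors) for ctx in contexts])
            for sense, contexts in signatures.items()}

def tag(test_concepts, sense_vectors, concept_vectors):
    """Assign the sense whose sense vector is closest (by cosine) to the
    context vector of the test occurrence."""
    c_vec = context_vector(test_concepts, concept_vectors)
    return max(sense_vectors, key=lambda s: cosine(c_vec, sense_vectors[s]))

# Toy usage (two senses; concepts as frozensets, as in the earlier sketches):
bank, loan = frozenset({"bank"}), frozenset({"loan"})
music, concert = frozenset({"music"}), frozenset({"concert"})
concept_vectors = {bank: Counter({loan: 2}), loan: Counter({bank: 2}),
                   music: Counter({concert: 1}), concert: Counter({music: 1})}
signatures = {"financial": [[bank, loan]], "hobby": [[music, concert]]}
sense_vectors = train_sense_vectors(signatures, concept_vectors)
print(tag([loan], sense_vectors, concept_vectors))   # -> financial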
</Section>
<Section position="2" start_page="2" end_page="2" type="sub_section"> <SectionTitle> 3.2 Experiments and Results </SectionTitle>
<Paragraph position="0"> We tested our system on 6 nouns, as shown in Table 2, which also gives information on the training and test data we used in the experiments. The training sets for motion, plant and tank are topic signatures extracted from the CGC, whereas those for bass, crane and palm were obtained from both the CGC and People's Daily On-line. This is because the Chinese translation equivalents of the senses of the latter 3 words do not occur frequently in the CGC, and we had to seek more data from the Web.</Paragraph>
<Paragraph position="1"> Where applicable, we also limited the training data of each sense to a maximum of 6,000 instances for efficiency purposes.</Paragraph>
<Paragraph position="2"> Table 2: The test words and their senses, the sizes of the training and test data, the baseline performance, and the results.</Paragraph>
<Paragraph position="3"> The test data is a binary sense-tagged corpus, the TWA Sense Tagged Data Set, manually produced by Rada Mihalcea and Li Yang (Mihalcea, 2003) from text drawn from the British National Corpus. We calculated a 'supervised' baseline from the annotated data by assigning the most frequent sense in the test data to all instances, although it could be argued that the baseline for unsupervised disambiguation should be computed by randomly assigning one of the senses to instances (e.g. it would be 50% for words with two senses). As described above, the 2,500 most frequent concepts were selected as dimensions. The number of features in a Concept Space depends on how many unique concepts actually occur in the training sets: larger amounts of training data tend to yield a larger set of features. At the end of the training stage, a sense vector was produced for each sense. Then we lemmatised the test data and extracted a set of context vectors for all instances in the same way. For each instance in the test data, the cosine scores between its context vector and all possible sense vectors acquired through training were calculated and compared, and the sense scoring highest was assigned to the instance.</Paragraph>
<Paragraph position="4"> The results of the experiments are also given in Table 2 (last column). Using our topic signatures, we obtained good results: the accuracy for all words exceeds the supervised baseline, except for motion, which approaches it. The Chinese translations of motion are also ambiguous, which might be the reason why our WSD system performed less well on this word. However, as mentioned above, to avoid this problem we could have expanded motion's Chinese translations with their Chinese monosemous synonyms when querying the Chinese corpus or the Web. Considering that our system is unsupervised, the results are very promising. An indicative comparison might be with the work of Mihalcea (2003), who with a very different approach achieved similar performance on the same test data.</Paragraph> </Section> </Section>
<Section position="5" start_page="2" end_page="2" type="metho"> <SectionTitle> 4 Discussion </SectionTitle>
<Paragraph position="0"> Although these results are promising, higher-quality topic signatures would probably yield better results in our WSD experiments. There are a number of factors that could affect the acquisition process, which determines the quality of this resource.</Paragraph>
<Paragraph position="1"> Firstly, since the translation was achieved by looking up a bilingual dictionary, deficiencies of the dictionary could cause problems. For example, the LDC Chinese-English Lexicon we used is not up to date, lacking entries for words such as the Chinese terms for &quot;mobile phone&quot;, &quot;the Internet&quot;, etc. This defect makes our WSD algorithm unable to use the possibly strong topical information contained in those words. Secondly, errors generated during Chinese segmentation could affect the distributions of words. For example, a Chinese string ABC may be segmented as either A+BC or AB+C; if the former is correct but AB+C is produced by the segmenter, the distributions of the words A, AB, BC, and C are all affected accordingly.
Other factors, such as cultural differences reflected in the different languages, could also affect the results of this knowledge acquisition process.</Paragraph>
<Paragraph position="2"> In our experiments, we adopted Chinese as the source language for retrieving English topic signatures. Nevertheless, our technique should also work for other distant language pairs, as long as bilingual lexicons and large monolingual corpora exist for the languages involved. For example, one should be able to build French topic signatures using Chinese text, or Spanish topic signatures from Japanese text. In the particular case where one cares only about translation ambiguity, this technique can work on any language pair.</Paragraph> </Section> </Paper>