<?xml version="1.0" standalone="yes"?> <Paper uid="P95-1039"> <Title>Tagset Reduction Without Information Loss</Title> <Section position="3" start_page="0" end_page="0" type="metho"> <SectionTitle> 1 Motivation </SectionTitle> <Paragraph position="0"> Statistical part-of-speech disambiguation can be efficiently done with n-gram models (Church, 1988; Cutting et al., 1992). These models are equivalent to Hidden Markov Models (HMMs) (Rabiner, 1989) of order n - 1. The states represent parts of speech (categories, tags), there is exactly one state for each category, and each state outputs words of a particular category. The transition and output probabilities of the HMM are derived from smoothed frequency counts in a text corpus.</Paragraph> <Paragraph position="1"> Generally, the categories for part-of-speech tagging are linguistically motivated and do not reflect the probability distributions or co-occurrence probabilities of words belonging to that category. It is an implicit assumption for statistical part-of-speech tagging that words belonging to the same category have similar probability distributions. But this assumption does not hold in many cases.</Paragraph> <Paragraph position="2"> Take for example the word cliff, which could be a proper noun (NP) or a common noun (NN) (ignoring capitalization of proper nouns for the moment). The two previous words are a determiner (AT) and an adjective (JJ). The probability of cliff being a common noun is the product of the respective contextual and lexical probabilities p(NN|AT,JJ) * p(cliff|NN), regardless of other information provided by the actual words (a sheer cliff vs. the wise Cliff). Obviously, information useful for probability estimation is not encoded in the tagset.</Paragraph> <Paragraph position="3"> On the other hand, in some cases information not needed for probability estimation is encoded in the tagset. 
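The contextual and lexical probabilities above are estimated from frequency counts in a tagged corpus. The following sketch (illustrative Python, not from the paper; the toy corpus and the unsmoothed relative frequencies are assumptions made for brevity) shows how a product such as p(NN|AT,JJ) * p(cliff|NN) would be computed:

```python
from collections import defaultdict

def train_trigram_tagger(tagged_corpus):
    """Collect counts for contextual p(t3 | t1, t2) and lexical p(w | t)
    estimates (unsmoothed relative frequencies, for illustration only)."""
    trigram = defaultdict(int)    # (t1, t2, t3) -> count
    bigram = defaultdict(int)     # (t1, t2) -> count
    emit = defaultdict(int)       # (word, tag) -> count
    tag_count = defaultdict(int)  # tag -> count
    for sent in tagged_corpus:
        tags = ["<s>", "<s>"] + [t for _, t in sent]
        for i in range(2, len(tags)):
            trigram[(tags[i - 2], tags[i - 1], tags[i])] += 1
            bigram[(tags[i - 2], tags[i - 1])] += 1
        for w, t in sent:
            emit[(w, t)] += 1
            tag_count[t] += 1

    def p_context(t3, t1, t2):  # p(t3 | t1, t2)
        return trigram[(t1, t2, t3)] / bigram[(t1, t2)] if bigram[(t1, t2)] else 0.0

    def p_lex(w, t):            # p(w | t)
        return emit[(w, t)] / tag_count[t] if tag_count[t] else 0.0

    return p_context, p_lex

# Toy corpus (hypothetical); real estimates come from a large tagged corpus.
corpus = [[("a", "AT"), ("sheer", "JJ"), ("cliff", "NN")],
          [("the", "AT"), ("wise", "JJ"), ("Cliff", "NP")]]
p_context, p_lex = train_trigram_tagger(corpus)
score_nn = p_context("NN", "AT", "JJ") * p_lex("cliff", "NN")
score_np = p_context("NP", "AT", "JJ") * p_lex("Cliff", "NP")
```

In this toy setting both readings receive the same score, which mirrors the point of the example: the tagset itself carries no information distinguishing a sheer cliff from the wise Cliff.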
The distributions for comparative and superlative forms of adjectives in the Susanne Corpus (Sampson, 1995) are very similar. The number of correct tag assignments is not affected when we combine the two categories. However, it does not suffice to assign the combined tag if we are interested in the distinction between comparative and superlative form for further processing. We have to ensure that the original (interesting) tag can be restored.</Paragraph> <Paragraph position="4"> There are two conflicting requirements. On the one hand, more tags mean that more information about a word is at hand; on the other hand, the more tags there are, the more severe the sparse-data problem and the larger the corpora needed for training.</Paragraph> <Paragraph position="5"> This paper presents a way to modify a given tagset, such that categories with similar distributions in a corpus are combined without losing information provided by the original tagset and without losing accuracy.</Paragraph> </Section> <Section position="4" start_page="0" end_page="288" type="metho"> <SectionTitle> 2 Clustering of Tags </SectionTitle> <Paragraph position="0"> The aim of the presented method is to reduce a tagset as much as possible by combining (clustering) two or more tags without losing information and without losing accuracy. The fewer tags we have, the fewer parameters have to be estimated and stored, and the less severe is the sparse-data problem. Incoming text will be disambiguated with the new reduced tagset, but we ensure that the original tag is still uniquely identified by the new tag.</Paragraph> <Paragraph position="1"> The basic idea is to exploit the fact that some of the categories have a very similar frequency distribution in a corpus. If we combine categories with similar distribution characteristics, there should be only a small change in the tagging result. 
The main change is that single tags are replaced by a cluster of tags, from which the original has to be identified. First experiments with tag clustering showed that, even for fully automatic identification of the original tag, tagging accuracy slightly increased when the reduced tagset was used. This might be a result of having more occurrences per tag for a smaller tagset, so that probability estimates are more precise.</Paragraph> <Section position="1" start_page="287" end_page="287" type="sub_section"> <SectionTitle> 2.1 Unique Identification of Original Tags </SectionTitle> <Paragraph position="0"> A crucial property of the reduced tagset is that the original tag information can be restored from the new tag, since this is the information we are interested in. The property can be ensured if we place a constraint on the clustering of tags.</Paragraph> <Paragraph position="1"> Let W be the set of words, C the set of clusters (i.e. the reduced tagset), and T the original tagset. To restore the original tag from a combined tag (cluster), we need a unique function</Paragraph> <Paragraph position="2"> f_orig: W × C → T (1)</Paragraph> <Paragraph position="3"> To ensure that there is such a unique function, we prohibit some of the possible combinations. A cluster is allowed if and only if there is no word in the lexicon which can have two or more of the original tags combined in one cluster. Formally, seeing tags as sets of words and clusters as sets of tags: ∀ c ∈ C, t1, t2 ∈ c, t1 ≠ t2, w ∈ W: w ∈ t1 ⇒ w ∉ t2 (2) If this condition holds, then for each word w tagged with a cluster c, exactly one tag t_{w,c} fulfills w ∈ t_{w,c} and t_{w,c} ∈ c, yielding f_orig(w, c) = t_{w,c}.</Paragraph> <Paragraph position="4"> So, the original tag can be restored at any time and no information from the original tagset is lost.</Paragraph> <Paragraph position="5"> Example: Assume that no word in the lexicon can be both comparative (JJR) and superlative adjective (JJT). The categories are combined to {JJR,JJT}. 
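Constraint (2) and the restoration function f_orig can be sketched directly, treating the lexicon as a map from each word to the set of original tags it can bear (a minimal sketch, not the authors' implementation; the toy lexicon entries are hypothetical):

```python
def cluster_allowed(cluster, lexicon):
    """Constraint (2): a cluster is admissible iff no word in the lexicon
    carries two or more of the original tags combined in the cluster."""
    return all(len(tags & cluster) <= 1 for tags in lexicon.values())

def restore_tag(word, cluster, lexicon):
    """f_orig(w, c): the unique original tag in c that w can bear;
    uniqueness is guaranteed by constraint (2)."""
    (tag,) = lexicon[word] & cluster  # exactly one element by construction
    return tag

# Toy lexicon (hypothetical): easier is only JJR, easiest is only JJT.
lexicon = {"easier": {"JJR"}, "easiest": {"JJT"}}
cluster = frozenset({"JJR", "JJT"})
allowed = cluster_allowed(cluster, lexicon)       # no word bears both tags
original = restore_tag("easier", cluster, lexicon)
```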
When processing a text, the word easier is tagged as {JJR,JJT}. Since the lexicon states that easier can be of category JJR but not of category JJT, the original tag must be JJR.</Paragraph> </Section> <Section position="2" start_page="287" end_page="287" type="sub_section"> <SectionTitle> 2.2 Criteria For Combining Tags </SectionTitle> <Paragraph position="0"> There are several criteria that can determine the quality of a particular clustering.</Paragraph> <Paragraph position="1"> 1. Compare the trigram probabilities p(B|Xi, A), p(B|A, Xi), and p(Xi|A, B), i = 1, 2. Combine two tags X1 and X2 if these probabilities coincide to a certain extent.</Paragraph> <Paragraph position="2"> 2. Maximize the probability that the training corpus is generated by the HMM which is described by the trigram probabilities.</Paragraph> <Paragraph position="3"> 3. Maximize the tagging accuracy for a training corpus.</Paragraph> <Paragraph position="4"> Criterion (1) establishes the theoretical basis, while criteria (2) and (3) immediately show the benefit of a particular combination. A measure of similarity for (1) is currently under investigation. We chose (3) for our first experiments, since it was the easiest one to implement. The only additional effort is a separate, previously unused part of the training corpus for this purpose, the clustering part. We combine those tags into clusters which give the best results for tagging of the clustering part.</Paragraph> </Section> <Section position="3" start_page="287" end_page="287" type="sub_section"> <SectionTitle> 2.3 The Algorithm </SectionTitle> <Paragraph position="0"> The total number of potential clusterings grows exponentially with the size of the tagset. Since we are interested in the reduction of large tagsets, a full search regarding all potential clusterings is not feasible. 
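The number of ways to partition an n-tag set into clusters is the Bell number B(n), which makes the exponential growth concrete (an illustrative aside, not from the paper; constraint (2) rules out some partitions, but the count remains astronomical):

```python
def bell(n):
    """Bell number B(n), the number of partitions of an n-element set,
    computed with the Bell triangle."""
    row = [1]
    for _ in range(n - 1):
        nxt = [row[-1]]  # each row starts with the previous row's last entry
        for x in row:
            nxt.append(nxt[-1] + x)
        row = nxt
    return row[-1]

# Partition counts explode long before reaching a 424-tag set:
growth = {n: bell(n) for n in (5, 10, 15)}  # 52, 115975, 1382958545
```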
We compute a local maximum, which can be found in polynomial time with a best-first search.</Paragraph> <Paragraph position="1"> We use a slight modification of the algorithm used by Stolcke and Omohundro (1994) for merging HMMs. Our task is very similar to theirs. Stolcke and Omohundro start with a first-order HMM where every state represents a single occurrence of a word in a corpus, and the goal is to maximize the a posteriori probability of the model. We start with a second-order HMM (since we use trigrams) where each state represents a part of speech, and our goal is to maximize the tagging accuracy for a corpus.</Paragraph> <Paragraph position="2"> The clustering algorithm works as follows: 1. Compute tagging accuracy for the clustering part with the original tagset.</Paragraph> <Paragraph position="3"> 2. Loop: (a) Compute a set of candidate clusters (obeying constraint (2) mentioned in section 2.1), each consisting of two tags from the previous step.</Paragraph> <Paragraph position="4"> (b) For each candidate cluster build the resulting tagset and compute tagging accuracy for that tagset.</Paragraph> <Paragraph position="5"> (c) If tagging accuracy decreases for all combinations of tags, break from the loop.</Paragraph> <Paragraph position="6"> (d) Add the cluster which maximized the tagging accuracy to the tagset and remove the two tags previously used.</Paragraph> <Paragraph position="7"> 3. Output the resulting tagset.</Paragraph> </Section> <Section position="4" start_page="287" end_page="288" type="sub_section"> <SectionTitle> 2.4 Application of Tag Clustering </SectionTitle> <Paragraph position="0"> Two standard trigram tagging procedures were performed as the baseline. Then clustering was performed on the same data and tagging was done with the reduced tagset. 
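The greedy loop of section 2.3 can be sketched as follows (illustrative Python, not the authors' code; the accuracy callback stands in for retraining the trigram model and tagging the clustering part, and the toy inputs are hypothetical):

```python
from itertools import combinations

def greedy_cluster(tagset, lexicon, accuracy):
    """Best-first merging: repeatedly merge the admissible pair of clusters
    that yields the highest tagging accuracy on the clustering part;
    stop when every admissible merge decreases accuracy."""
    clusters = {frozenset([t]) for t in tagset}
    best = accuracy(clusters)
    while True:
        trials = []
        for c1, c2 in combinations(clusters, 2):
            merged = c1 | c2
            # Constraint (2): no word may bear two tags of one cluster.
            if all(len(tags & merged) <= 1 for tags in lexicon.values()):
                trials.append((clusters - {c1, c2}) | {merged})
        if not trials:
            break
        scored = [(accuracy(t), t) for t in trials]
        top_acc, top = max(scored, key=lambda pair: pair[0])
        if top_acc < best:
            break
        best, clusters = top_acc, top
    return clusters

# Toy inputs (hypothetical); the real accuracy function retrains and tags.
tagset = {"JJR", "JJT", "NN"}
lexicon = {"easier": {"JJR"}, "cat": {"NN", "JJT"}}
toy_accuracy = lambda cl: 1.0 - 0.001 * len(cl)  # stand-in favoring fewer clusters
reduced = greedy_cluster(tagset, lexicon, toy_accuracy)
```

With this lexicon, JJT and NN can never share a cluster (the word cat bears both), so the search stops at two clusters.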
The reduced tagset was used only internally; the output of the tagger consisted of the original tagset in all experiments.</Paragraph> <Paragraph position="1"> The Susanne Corpus has about 157,000 words and uses 424 tags (counting tags with indices denoting multi-word lexemes as separate tags). The tags are based on the LOB tagset (Garside et al., 1987). Three parts are taken from the corpus. Part A consists of about 127,000 words, part B of about 10,000 words, and part C of about 10,000 words. The rest of the corpus, about 10,000 words, is not used for this experiment. All parts are mutually disjoint.</Paragraph> Table 1: Tagging results for known words.
1. training: parts A and B; testing: part C; 93.7% correct
2. training: parts A and C; testing: part B; 94.6% correct
3. training: part A; clustering: part B; testing: part C; 93.9% correct
4. training: part A; clustering: part C; testing: part B; 94.7% correct
<Paragraph position="2"> First, part A and B were used for training, and part C for testing. Then, part A and C were used for training, and part B for testing. About 6% of the words in the test parts did not occur in the training parts, i.e. they are unknown. For the moment we only care about the known words and not about the unknown words (this is treated as a separate problem). Table 1 shows the tagging results for known words.</Paragraph> <Paragraph position="3"> Clustering was applied in the next steps. In the third experiment, part A was used for trigram training, part B for clustering and part C for testing. In the fourth experiment, part A was used for trigram training, part C for clustering and part B for testing. The baseline experiments used the clustering part for the normal training procedure to ensure that better performance in the clustering experiments is not due to information provided by the additional part. Clustering reduced the tagset by 33 (third exp.) and 31 (fourth exp.) tags. The tagging results for the known words are shown in Table 1.</Paragraph> <Paragraph position="4"> The improvement in the tagging result is too small to be significant. 
However, the tagset is reduced, and with it the number of parameters, without losing accuracy. Experiments with larger texts and more permutations of the corpus parts will be performed to obtain more precise results for the improvement.</Paragraph> </Section> </Section> </Paper>