<?xml version="1.0" standalone="yes"?> <Paper uid="P98-1081"> <Title>Improving Data Driven Wordclass Tagging by System Combination</Title>
<Section position="2" start_page="491" end_page="492" type="metho"> <SectionTitle> 1 Component taggers </SectionTitle>
<Paragraph position="0"> In 1992, van Halteren combined a number of taggers by way of a straightforward majority vote (cf. van Halteren 1996). Since the component taggers all used n-gram statistics to model context probabilities and the knowledge representation was hence fundamentally the same in each component, the results were limited. Now there are more varied systems available, a variety which we hope will lead to better combination effects. For this experiment we have selected four systems, primarily on the basis of availability. Each of these uses different features of the text to be tagged, and each has a completely different representation of the language model.</Paragraph>
<Paragraph position="1"> The first and oldest system uses a traditional trigram model (Steetskamp 1995; henceforth tagger T, for Trigrams), based on context statistics P(t_i | t_i-1, t_i-2) and lexical statistics P(t_i | w_i) directly estimated from relative corpus frequencies. The Viterbi algorithm is used to determine the most probable tag sequence.</Paragraph>
<Paragraph position="2"> Since this model has no facilities for handling unknown words, a Memory-Based system (see below) is used to propose distributions of potential tags for words not in the lexicon.</Paragraph>
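The following is a minimal sketch (ours, not Steetskamp's implementation) of how such a trigram tagger can decode a sentence with the Viterbi algorithm, assuming the two probability tables have already been estimated from relative frequencies in Train; all names and the smoothing floor are illustrative assumptions.

    # Illustrative Viterbi decoding over trigram context and lexical probabilities.
    # p_context[(t, t_prev, t_prev2)] ~ P(t_i | t_i-1, t_i-2); p_lex[(t, w)] ~ P(t_i | w_i).
    def viterbi_trigram(words, tagset, p_context, p_lex, floor=1e-10):
        # best[(t_prev, t_prev2)] = (probability, tag sequence so far)
        best = {("<s>", "<s>"): (1.0, [])}
        for w in words:
            new_best = {}
            for (t1, t2), (score, path) in best.items():
                for t in tagset:
                    s = (score
                         * p_context.get((t, t1, t2), floor)
                         * p_lex.get((t, w), floor))
                    state = (t, t1)
                    if state not in new_best or s > new_best[state][0]:
                        new_best[state] = (s, path + [t])
            best = new_best
        return max(best.values())[1]  # tag sequence with the highest probability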
<Paragraph position="3"> The second system is the Transformation Based Learning system as described by Brill (1994; henceforth tagger R, for Rules).1 This system starts with a basic corpus annotation (each word is tagged with its most likely tag) and then searches through a space of transformation rules in order to reduce the discrepancy between its current annotation and the correct one (in our case 528 rules were learned). During tagging these rules are applied in sequence to new text. Of all the four systems, this one has access to the most information: contextual information (the words and tags in a window spanning three positions before and after the focus word) as well as lexical information (the existence of words formed by suffix/prefix addition/deletion). However, the actual use of this information is severely limited in that the individual information items can only be combined according to the patterns laid down in the rule templates.</Paragraph>
1 Brill's system is available as a collection of C programs and Perl scripts at ftp://ftp.cs.jhu.edu/pub/brill/Programs/RULE_BASED_TAGGER_V1.14.tar.Z
<Paragraph position="4"> The third system uses Memory-Based Learning as described by Daelemans et al. (1996; henceforth tagger M, for Memory). During the training phase, cases containing information about the word, the context and the correct tag are stored in memory. During tagging, the case most similar to that of the focus word is retrieved from the memory, which is indexed on the basis of the Information Gain of each feature, and the accompanying tag is selected.</Paragraph>
<Paragraph position="5"> The system used here has access to information about the focus word and the two positions before and after, at least for known words. For unknown words, the single position before and after, three suffix letters, and information about capitalization and presence of a hyphen or a digit are used.</Paragraph>
<Paragraph position="6"> The fourth and final system is the MXPOST system as described by Ratnaparkhi (1996; henceforth tagger E, for Entropy).2 It uses a number of word and context features rather similar to system M, and trains a Maximum Entropy model that assigns a weighting parameter to each feature-value and combination of features that is relevant to the estimation of the probability P(tag | features). A beam search is then used to find the highest probability tag sequence. Both this system and Brill's system are used with the default settings that are suggested in their documentation.</Paragraph>
<Paragraph position="7"> 2 Ratnaparkhi's Java implementation of this system is available at ftp://ftp.cis.upenn.edu/pub/adwait/jmx/</Paragraph>
</Section>
<Section position="3" start_page="492" end_page="492" type="metho"> <SectionTitle> 2 The data </SectionTitle>
<Paragraph position="0"> The data we use for our experiment consists of the tagged LOB corpus (Johansson 1986). The corpus comprises about one million words, divided over 500 samples of 2000 words from 15 text types. Its tagging, which was manually checked and corrected, is generally accepted to be quite accurate. Here we use a slight adaptation of the tagset. The changes are mainly cosmetic, e.g. non-alphabetic characters such as &quot;$&quot; in tag names have been replaced. However, there has also been some retokenization: genitive markers have been split off and the negative marker &quot;n't&quot; has been reattached. An example sentence tagged with the resulting tagset is given as a table in the paper; its first entry tags &quot;The&quot; as ATI (singular or plural). The tagset consists of 170 different tags (including ditto tags3) and has an average ambiguity of 2.69 tags per wordform. The difficulty of the tagging task can be judged by the two baseline measurements in Table 2 below, representing a completely random choice from the potential tags for each token (Random) and selection of the lexically most likely tag (LexProb).</Paragraph>
3 Ditto tags are used for the components of multi-token units, e.g. if &quot;as well as&quot; is taken to be a coordinating conjunction, it is tagged &quot;as_CC-1 well_CC-2 as_CC-3&quot;, using three related but different ditto tags.
<Paragraph position="1"> For our experiment, we divide the corpus into three parts. The first part, called Train, consists of 80% of the data (931062 tokens), constructed by taking the first eight utterances of every ten.</Paragraph>
<Paragraph position="2"> This part is used to train the individual taggers. The second part, Tune, consists of 10% of the data (every ninth utterance, 114479 tokens) and is used to select the best tagger parameters where applicable and to develop the combination methods. The third and final part, Test, consists of the remaining 10% (115101 tokens) and is used for the final performance measurements of all taggers. Both Tune and Test contain around 2.5% new tokens (wrt Train) and a further 0.2% known tokens with new tags.</Paragraph>
<Paragraph position="3"> The data in Train (for individual taggers) and Tune (for combination taggers) is to be the only information used in tagger construction: all components of all taggers (lexicon, context statistics, etc.) are to be entirely data driven and no manual adjustments are to be done. The data in Test is never to be inspected in detail but only used as a benchmark tagging for quality measurement.4</Paragraph>
4 This implies that it is impossible to note if errors counted against a tagger are in fact errors in the benchmark tagging. We accept that we are measuring quality in relation to a specific tagging rather than the linguistic truth (if such exists) and can only hope the tagged LOB corpus lives up to its reputation.
</Section>
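As an illustration of the division just described, a sketch of the 8/1/1 split by utterance position (our own code, assuming the corpus is available as a list of utterances):

    # Of every ten utterances: the first eight go to Train, the ninth to Tune,
    # the tenth to Test, giving the 80%/10%/10% division used in the experiment.
    def split_corpus(utterances):
        train, tune, test = [], [], []
        for i, utterance in enumerate(utterances):
            position = i % 10
            if position < 8:
                train.append(utterance)   # first eight of every ten
            elif position == 8:
                tune.append(utterance)    # every ninth utterance
            else:
                test.append(utterance)    # the remaining tenth
        return train, tune, test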
<Section position="4" start_page="492" end_page="493" type="metho"> <SectionTitle> 3 Potential for improvement </SectionTitle>
<Paragraph position="0"> In order to see whether combination of the component taggers is likely to lead to improvements of tagging quality, we first examine the results of the individual taggers when applied to Tune.</Paragraph>
<Paragraph position="1"> As far as we know this is also one of the first rigorous measurements of the relative quality of different tagger generators, using a single tagset and dataset and identical circumstances.</Paragraph>
<Paragraph position="2"> The quality of the individual taggers (cf. Table 2 below) certainly still leaves room for improvement, although tagger E surprises us with an accuracy well above any results reported so far and makes us less confident about the gain to be accomplished with combination.</Paragraph>
<Paragraph position="3"> However, that there is room for improvement is not enough. As explained above, for combination to lead to improvement, the component taggers must differ in the errors that they make.</Paragraph>
<Paragraph position="4"> That this is indeed the case can be seen in Table 1. It shows that for 99.22% of Tune, at least one tagger selects the correct tag. However, it is unlikely that we will be able to identify this tag in each case. We should rather aim for optimal selection in those cases where the correct tag is not outvoted, which would ideally lead to correct tagging of 98.21% of the words (in Tune).</Paragraph>
<Paragraph position="5"> Table 1 (caption, partly recoverable): ... patterns between the brackets give the distribution of correct/incorrect tags over the systems.</Paragraph>
</Section>
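The two ceiling figures above can be computed directly from the component taggers' proposals on Tune; a small sketch of that computation (function and variable names are ours):

    # Upper bounds on combination accuracy: the fraction of tokens for which at
    # least one tagger proposes the correct tag, and the fraction for which the
    # correct tag is not outvoted (i.e. it ties for or holds the majority).
    from collections import Counter

    def combination_ceilings(proposals, gold):
        # proposals: list of 4-tuples of suggested tags; gold: benchmark tags
        any_correct = 0
        not_outvoted = 0
        for tags, g in zip(proposals, gold):
            votes = Counter(tags)
            if g in votes:
                any_correct += 1
                if votes[g] == max(votes.values()):
                    not_outvoted += 1
        n = len(gold)
        return any_correct / n, not_outvoted / n  # e.g. 0.9922 and 0.9821 on Tune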
<Section position="5" start_page="493" end_page="494" type="metho"> <SectionTitle> 4 Simple Voting </SectionTitle>
<Paragraph position="0"> There are many ways in which the results of the component taggers can be combined, selecting a single tag from the set proposed by these taggers. In this and the following sections we examine a number of them. The accuracy measurements for all of them are listed in Table 2.5 The most straightforward selection method is an n-way vote. Each tagger is allowed to vote for the tag of its choice and the tag with the highest number of votes is selected.6 The question is how large a vote we allow each tagger. The most democratic option is to give each tagger one vote (Majority). However, it appears more useful to give more weight to taggers which have proved their quality. This can be general quality, e.g. each tagger votes its overall precision (TotPrecision), or quality in relation to the current situation, e.g. each tagger votes its precision on the suggested tag (TagPrecision). The information about each tagger's quality is derived from an inspection of its results on Tune.</Paragraph>
<Paragraph position="1"> 5 For any tag X, precision measures which percentage of the tokens tagged X by the tagger are also tagged X in the benchmark and recall measures which percentage of the tokens tagged X in the benchmark are also tagged X by the tagger. When abstracting away from individual tags, precision and recall are equal and measure how many tokens are tagged correctly; in this case we also use the more generic term accuracy.</Paragraph>
<Paragraph position="2"> 6 In our experiment, a random selection from among the winning tags is made whenever there is a tie.</Paragraph>
<Paragraph position="3"> Table 2: Accuracy of individual taggers and combination methods.</Paragraph>
<Paragraph position="4"> But we have even more information on how well the taggers perform. We not only know whether we should believe what they propose (precision) but also know how often they fail to recognize the correct tag (recall). This information can be used by forcing each tagger also to add to the vote for tags suggested by the opposition, by an amount equal to 1 minus the recall on the opposing tag (Precision-Recall).</Paragraph>
<Paragraph position="5"> As it turns out, all voting systems outperform the best single tagger, E.7 Also, the best voting system is the one in which the most specific information is used, Precision-Recall. However, specific information is not always superior, for TotPrecision scores higher than TagPrecision. This might be explained by the fact that recall information is missing (for overall performance this does not matter, since recall is equal to precision).</Paragraph>
</Section>
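A compact sketch of the four voting schemes described in this section; the weight tables (overall precision, per-tag precision and recall) would be measured on Tune, and all names are our own illustrative choices, not part of the original software:

    # Weighted voting over the tags proposed by the component taggers.
    def vote(tags, scheme, overall_prec=None, tag_prec=None, tag_recall=None):
        # tags: dict mapping tagger id ("T", "R", "M", "E") to its proposed tag
        votes = {}
        for tagger, tag in tags.items():
            if scheme == "Majority":
                weight = 1.0
            elif scheme == "TotPrecision":
                weight = overall_prec[tagger]        # overall precision on Tune
            else:                                    # TagPrecision, Precision-Recall
                weight = tag_prec[tagger][tag]       # precision on the suggested tag
            votes[tag] = votes.get(tag, 0.0) + weight
            if scheme == "Precision-Recall":
                # each tagger also adds to the vote for tags suggested by the
                # opposition, by 1 minus its own recall on that opposing tag
                for other_tag in set(tags.values()):
                    if other_tag != tag:
                        votes[other_tag] = (votes.get(other_tag, 0.0)
                                            + (1.0 - tag_recall[tagger][other_tag]))
        # ties are broken by random selection in the experiment; here we simply
        # take the first maximum
        return max(votes, key=votes.get)

For example, vote({"T": "NN", "R": "NN", "M": "VB", "E": "NN"}, "Majority") returns "NN".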
<Section position="6" start_page="494" end_page="494" type="metho"> <SectionTitle> 5 Pairwise Voting </SectionTitle>
<Paragraph position="0"> So far, we have only used information on the performance of individual taggers. A next step is to examine them in pairs. We can investigate all situations where one tagger suggests T1 and the other T2 and estimate the probability that in this situation the tag should actually be Tx, e.g. if E suggests DT and T suggests CS (which can happen if the token is &quot;that&quot;), the probabilities for the appropriate tag are estimated from the cases observed in Tune.</Paragraph>
<Paragraph position="1"> When combining the taggers, every tagger pair is taken in turn and allowed to vote (with the probability described above) for each possible tag, i.e. not just the ones suggested by the component taggers. If a tag pair T1-T2 has never been observed in Tune, we fall back on information on the individual taggers, viz. the probability of each tag Tx given that the tagger suggested tag Ti.</Paragraph>
<Paragraph position="2"> Note that with this method (and those in the next section) a tag suggested by a minority (or even none) of the taggers still has a chance to win. In principle, this could remove the restriction of gain only in 2-2 and 1-1-1-1 cases. In practice, the chance to beat a majority is very slight indeed and we should not get our hopes up too high that this should happen very often.</Paragraph>
<Paragraph position="3"> When used on Test, the pairwise voting strategy (TagPair) clearly outperforms the other voting strategies,8 but does not yet approach the level where all tying majority votes are handled correctly (98.31%).</Paragraph>
8 It is significantly better than the runner-up (Precision-Recall) with p=0.
</Section>
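A sketch of the pairwise (TagPair) strategy under the same illustrative assumptions: pair_probs holds, for each tagger pair (keyed in sorted order) and each pair of suggestions observed in Tune, a distribution over the appropriate tag, and single_probs the per-tagger fallback distributions:

    from itertools import combinations

    # Every tagger pair votes for every possible tag with the probability
    # estimated on Tune; unseen suggestion pairs fall back to the individual
    # taggers' distributions P(tag | suggestion).
    def tagpair_vote(tags, pair_probs, single_probs, tagset):
        votes = {t: 0.0 for t in tagset}
        for a, b in combinations(sorted(tags), 2):
            dist = pair_probs.get((a, b), {}).get((tags[a], tags[b]))
            if dist is None:
                # fall back on the two individual taggers
                for tagger in (a, b):
                    for t, p in single_probs[tagger][tags[tagger]].items():
                        votes[t] += p
            else:
                for t, p in dist.items():
                    votes[t] += p
        return max(votes, key=votes.get)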
<Section position="7" start_page="494" end_page="495" type="metho"> <SectionTitle> 6 Stacked classifiers </SectionTitle>
<Paragraph position="0"> From the measurements so far it appears that the use of more detailed information leads to a better accuracy improvement. It ought therefore to be advantageous to step away from the underlying mechanism of voting and to model the situations observed in Tune more closely.</Paragraph>
<Paragraph position="1"> The practice of feeding the outputs of a number of classifiers as features for a next learner is usually called stacking (Wolpert 1992). The second stage can be provided with the first level outputs, and with additional information, e.g. about the original input pattern.</Paragraph>
<Paragraph position="2"> The first choice for this is to use a Memory-Based second level learner. In the basic version (Tags), each case consists of the tags suggested by the component taggers and the correct tag. In the more advanced versions we also add information about the word in question (Tags+Word) and the tags suggested by all taggers for the previous and the next position (Tags+Context). For the first two the similarity metric used during tagging is a straightforward overlap count; for the third we need to use an Information Gain weighting (Daelemans et al. 1997).</Paragraph>
<Paragraph position="3"> Surprisingly, none of the Memory-Based methods reaches the quality of TagPair.9 The explanation for this can be found when we examine the differences within the Memory-Based general strategy: the more feature information is stored, the higher the accuracy on Tune, but the lower the accuracy on Test. This is most likely an overtraining effect: Tune is probably too small to collect case bases which can leverage the stacking effect convincingly, especially since only 7.51% of the second stage material shows disagreement between the featured tags.</Paragraph>
9 ... than TagPair (p=0.0274) and not significantly better than Precision-Recall (p=0.2766).
<Paragraph position="4"> To examine if the overtraining effects are specific to this particular second level classifier, we also used the C5.0 system, a commercial version of the well-known program C4.5 (Quinlan 1993) for the induction of decision trees, on the same training material.10 Because C5.0 prunes the decision tree, the overfitting of training material (Tune) is less than with Memory-Based learning, but the results on Test are also worse. We conjecture that pruning is not beneficial when the interesting cases are very rare. To realise the benefits of stacking, either more data is needed or a second stage classifier that is better suited to this type of problem.</Paragraph>
10 Tags+Word could not be handled by C5.0 due to the huge number of feature values.
</Section>
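A rough sketch of the basic stacked (Tags) setup: second-stage cases collected on Tune are the four suggested tags plus the benchmark tag, and classification picks the stored cases with the largest overlap. This is our own simplification, not the Memory-Based software used in the paper:

    from collections import Counter

    class StackedTagCombiner:
        def __init__(self):
            self.cases = []  # (tuple of suggested tags, correct tag), collected on Tune

        def train(self, tune_proposals, tune_gold):
            # tune_proposals: list of 4-tuples of component suggestions on Tune;
            # tune_gold: the corresponding benchmark tags
            self.cases = list(zip(tune_proposals, tune_gold))

        def classify(self, proposed):
            # straightforward overlap count as the similarity metric (Tags version)
            best_overlap, candidates = -1, Counter()
            for features, correct in self.cases:
                overlap = sum(f == p for f, p in zip(features, proposed))
                if overlap > best_overlap:
                    best_overlap, candidates = overlap, Counter([correct])
                elif overlap == best_overlap:
                    candidates[correct] += 1
            return candidates.most_common(1)[0][0]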
<Section position="8" start_page="495" end_page="495" type="metho"> <SectionTitle> 7 The value of combination </SectionTitle>
<Paragraph position="0"> The relation between the accuracy of combinations (using TagPair) and that of the individual taggers is shown in Table 3. The most important observation is that every combination (significantly) outperforms the combination of any strict subset of its components. Also of note is the improvement yielded by the best combination. The pairwise voting system, using all four individual taggers, scores 97.92% correct on Test, a 19.1% reduction in error rate over the best individual system, viz. the Maximum Entropy tagger (97.43%).</Paragraph>
<Paragraph position="1"> A major factor in the quality of the combination results is obviously the quality of the best component: all combinations with E score higher than those without E (although M, R and T together are able to beat E alone11). After that, the decisive factor appears to be the difference in language model: T is generally a better combiner than M and R,12 even though it has the lowest accuracy when operating alone.</Paragraph>
11 By a margin at the edge of significance: p=0.0608.
12 Although not significantly better, e.g. the differences within the group ME/ER/ET are not significant.
<Paragraph position="2"> A possible criticism of the proposed combination scheme is the fact that for the most successful combination schemes, one has to reserve a non-trivial portion (in the experiment 10% of the total material) of the annotated data to set the parameters for the combination. To see whether this is in fact a good way to spend the extra data, we also trained the two best individual systems (E and M, with exactly the same settings as in the first experiments) on a concatenation of Train and Tune, so that they had access to every piece of data that the combination had seen. It turns out that the increase in the individual taggers is quite limited when compared to combination. The more extensively trained E scored 97.51% correct on Test (3.1% error reduction) and M 97.07% (3.9% error reduction).</Paragraph>
Conclusion
<Paragraph position="3"> Our experiment shows that, at least for the task at hand, combination of several different systems allows us to raise the performance ceiling for data driven systems. Obviously there is still room for a closer examination of the differences between the combination methods, e.g. the question whether Memory-Based combination would have performed better if we had provided more training data than just Tune, and of the remaining errors, e.g. the effects of inconsistency in the data (cf. Ratnaparkhi 1996 on such effects in the Penn Treebank corpus).</Paragraph>
<Paragraph position="4"> Regardless of such closer investigation, we feel that our results are encouraging enough to extend our investigation of combination, starting with additional component taggers and selection strategies, and going on to shifts to other tagsets and/or languages. But the investigation need not be limited to wordclass tagging, for we expect that there are many other NLP tasks where combination could lead to worthwhile improvements.</Paragraph>
</Section> </Paper>