<?xml version="1.0" standalone="yes"?> <Paper uid="W02-0806"> <Title>Assessing System Agreement and Instance Difficulty in the Lexical Sample Tasks of SENSEVAL-2</Title> <Section position="3" start_page="0" end_page="0" type="metho"> <SectionTitle> 2 Pairwise System Agreement </SectionTitle> <Paragraph position="0"> Assessing agreement among systems sheds light on whether their combined performance is potentially more accurate than that of any of the individual systems. If several systems are largely in agreement, then there is little benefit in combining them since they are redundant and will simply reinforce one another. However, if some systems disambiguate instances that others do not, then they are complementary and it may be possible to combine them to take advantage of the different strengths of each system to improve overall accuracy.</Paragraph> <Paragraph position="1"> The kappa statistic (Cohen, 1960) is a measure of agreement between multiple systems (or judges) that is scaled by the agreement that would be expected just by chance. A value of 1.00 suggests complete agreement, while 0.00 indicates pure chance agreement. Negative values indicate agreement less than what would be expected by chance. (Krippendorf, 1980) points out that it is difficult to specify a particular value of kappa as being generally indicative of agreement. As such we simply use kappa as a tool for comparison and relative ranking. A detailed discussion on the use of kappa in natural language processing is presented in (Carletta, 1996).</Paragraph> <Paragraph position="2"> July 2002, pp. 40-46. Association for Computational Linguistics. Disambiguation: Recent Successes and Future Directions, Philadelphia, Proceedings of the SIGLEX/SENSEVAL Workshop on Word Sense To study agreement we have made a series of pair-wise comparisons among the systems included in the English and Spanish lexical sample tasks. Each pair-wise combination is represented in a 2 A2 2 contingency table, where one cell represents the number of test instances that both systems disambiguate correctly, one cell represents the number of instances where both systems are incorrect, and there are two cells to represent the counts when only one system is correct. Agreement does not imply accuracy, since two systems may get a large number of the same instances incorrect and have a high rate of agreement.</Paragraph> <Paragraph position="3"> Tables 2 and 3 show the system pairs in the English and Spanish lexical sample tasks that exhibit the highest level of agreement according to the kappa statistic. The values in the both-one-zero column indicate the percentage of instances where both systems are correct, where only one is correct, and where neither is correct. The top 15 pairs are shown for nouns and verbs, and the top 10 for adjectives.</Paragraph> <Paragraph position="4"> A complete list would include about 250 pairs for each part of speech for English and 24 such pairs for Spanish.</Paragraph> <Paragraph position="5"> The utility of kappa agreement is confirmed in that system pairs known to be very similar have correspondingly high measures. In Table 2, duluth2 and duluth3 exhibit a high kappa value for all parts of speech. This is expected since duluth3 is an ensemble approach that includes duluth2 as one of its members. 
<Paragraph position="6"> A more surprising case is the even higher level of agreement between the most common sense baseline and the lesk corpus baseline shown in Table 2. This is not necessarily expected, and suggests that lesk corpus may not be finding a significant number of matches between the SENSEVAL contexts and the WordNet glosses (as the lesk algorithm would hope to do) but instead may be relying on a simple default in many cases.</Paragraph>
<Paragraph position="7"> In previous work (Pedersen, 2001) we proposed a 50-25-25 rule, which suggests that about half of the instances in a supervised word sense disambiguation evaluation will be fairly easy for most systems to resolve, another quarter will be harder but possible for at least some systems, and the final quarter will be very difficult for any system to resolve. The same idea can also be expressed by stating that the kappa agreement between two word sense disambiguation systems will likely be around 0.50.</Paragraph>
<Paragraph position="8"> In fact, this is a common result in the full set of pairwise comparisons, particularly for overall results not broken down by part of speech. Tables 2 and 3 list only the largest kappa values, but even there kappa quickly falls toward 0.50. These same tables show that it is rare for two systems to agree on more than 60% of the correctly disambiguated instances.</Paragraph>
</Section>
<Section position="4" start_page="0" end_page="0" type="metho">
<SectionTitle> 3 Optimal Combination </SectionTitle>
<Paragraph position="0"> Optimal combination is the accuracy that could be attained by a hypothetical tool, an optimal combiner, that accepts as input the sense assignments for a test instance as generated by several different systems. It is able to select the correct sense from these inputs, and is wrong only when none of the sense assignments is the correct one. Thus, the percentage accuracy of an optimal combiner is equal to one minus the percentage of instances that no system can resolve correctly.</Paragraph>
<Paragraph position="1"> Of course, this is only a tool for thought experiments and not a practical algorithm. An optimal combiner establishes an upper bound on the accuracy that could reasonably be attained over a particular sample of test instances.</Paragraph>
<Paragraph position="2"> Tables 4 and 5 list the top system pairs ranked by optimal combination (1.00 minus the value in the zero column) for the English and Spanish lexical samples.</Paragraph>
<Paragraph position="3"> Kappa scores are also shown to illustrate the interaction between agreement and optimal combination.</Paragraph>
<Paragraph position="4"> Optimal combination is maximized when the percentage of instances where both systems are wrong is minimized. Kappa agreement is maximized by minimizing the percentage of instances where one or the other system (but not both) is correct. Thus, the only way a system pair could have both a high measure of kappa and a high measure of optimal combination is if they were very accurate systems that disambiguated many of the same test instances correctly. System pairs with low measures of agreement are potentially quite interesting because they are the most likely to make complementary errors. For example, in Table 5 under nouns, the alicante system has a low level of agreement with all of the other systems. However, its measure of optimal combination is quite high, reaching 0.89 (1.00 - 0.11) for the pair of alicante and jhu. In fact, all seven of the other systems achieve their highest optimal combination value when paired with alicante.</Paragraph>
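To make the interplay between the two measures concrete, the following sketch derives the both-one-zero breakdown and the optimal combination score from per-instance correctness flags for two systems. The flags are hypothetical and simply stand in for the per-instance scoring of a real system pair.

```python
# A sketch relating the both-one-zero breakdown to optimal combination,
# using hypothetical per-instance correctness flags (1 = correct sense).

def both_one_zero(a, b):
    """Fractions of instances where both, exactly one, or neither system is correct."""
    n = len(a)
    both = sum(1 for x, y in zip(a, b) if x and y) / n
    one = sum(1 for x, y in zip(a, b) if x != y) / n
    zero = sum(1 for x, y in zip(a, b) if not x and not y) / n
    return both, one, zero

def optimal_combination(a, b):
    """Upper-bound accuracy: wrong only on instances that neither system resolves."""
    return 1 - both_one_zero(a, b)[2]

sys_a = [1, 1, 1, 0, 1, 0, 1, 0, 1, 1]
sys_b = [1, 0, 1, 1, 1, 0, 0, 1, 1, 1]
print(both_one_zero(sys_a, sys_b))        # (0.5, 0.4, 0.1)
print(optimal_combination(sys_a, sys_b))  # 0.9
```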
<Paragraph position="5"> This combination of circumstances suggests that the alicante system is fundamentally different from the other systems, and is able to disambiguate a certain set of instances where the other systems fail. In fact, the alicante system is distinctive in that it is the only Spanish lexical sample system that makes use of the structure of Euro-WordNet, the source of the sense inventory.</Paragraph>
</Section>
<Section position="5" start_page="0" end_page="0" type="metho">
<SectionTitle> 4 Instance Difficulty </SectionTitle>
<Paragraph position="0"> The difficulty of disambiguating word senses can vary considerably. A word with multiple closely related senses is likely to be more difficult than one with a few starkly drawn differences. In supervised learning, a particular sense of a word can be difficult to disambiguate if only a small number of training examples is available.</Paragraph>
<Paragraph position="1"> Table 6 shows the distribution of the number of instances that are successfully disambiguated by a particular number of systems in both the English and Spanish lexical samples. The value in the # column shows the number of systems that are able to disambiguate the number of noun, verb, adjective, and total instances shown in the row. The average number of training examples available for the correct answers associated with these instances is shown in parentheses. For example, the first line shows that there were 59 noun instances that no system (of 23) could disambiguate, and that there were on average 16 training examples available for each of the correct senses of these 59 instances.</Paragraph>
<Paragraph position="2"> Two very clear trends emerge. First, a substantial number of instances are not disambiguated correctly by any system (262 in English, 228 in Spanish), and a large number of instances are disambiguated by just a handful of systems. In the English lexical sample, there are 1,277 test instances that are correctly disambiguated by five or fewer of the 23 systems. This is nearly 30% of the test data, and confirms that this was a very challenging set of test instances.</Paragraph>
<Paragraph position="3"> Second, there is a very clear correlation between the number of training examples available for a particular sense of a word and the number of systems that are able to disambiguate instances of that word correctly. For example, Table 6 shows that there were 174 English verb instances that no system disambiguated correctly; on average there were only 6 training examples for the correct senses of these instances. By contrast, there were 28 instances that all 23 English systems were able to disambiguate, and for these instances an average of 47 training examples were available for each correct sense.</Paragraph>
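The tabulation underlying Table 6 amounts to grouping test instances by the number of systems that resolved them correctly and averaging the available training examples within each group. The sketch below operates on hypothetical per-instance records; the field names and counts are illustrative only.

```python
# A sketch of the tabulation behind Table 6, over hypothetical per-instance
# records: each record holds the number of systems that disambiguated the
# instance correctly and the training examples available for its correct sense.
from collections import defaultdict

instances = [
    {"systems_correct": 0, "train_examples": 4},
    {"systems_correct": 0, "train_examples": 9},
    {"systems_correct": 5, "train_examples": 21},
    {"systems_correct": 23, "train_examples": 40},
    {"systems_correct": 23, "train_examples": 55},
]

by_count = defaultdict(list)
for inst in instances:
    by_count[inst["systems_correct"]].append(inst["train_examples"])

for n_systems in sorted(by_count):
    examples = by_count[n_systems]
    avg = sum(examples) / len(examples)
    print(f"{n_systems:2d} systems correct: {len(examples)} instances "
          f"(avg. {avg:.1f} training examples)")
```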
<Paragraph position="4"> This correlation between instance difficulty and the number of training examples may suggest that future SENSEVAL exercises should provide a minimum number of training examples for each sense, or adjust the scoring to reflect the difficulty of disambiguating a sense with very few training examples.</Paragraph>
<Paragraph position="5"> Finally, we assess the difficulty associated with word types by calculating the average number of systems that were able to disambiguate the instances associated with that type. This information is provided for the English and Spanish lexical samples in Tables 7 and 8. Each word is shown with its part of speech, the number of test instances, and the average number of systems that were able to disambiguate each of the test instances.</Paragraph>
<Paragraph position="6"> According to this metric, the verb collaborate is the easiest word in the English lexical sample. It has 30 test instances that were disambiguated correctly by an average of 20.2 of the 23 systems. The verb find proves to be the most difficult, with 68 test instances disambiguated correctly by an average of 4.2 systems. A somewhat less extreme range of values is observed for the Spanish lexical sample in Table 8.</Paragraph>
<Paragraph position="7"> The adjective claro has 66 test instances that were disambiguated correctly by an average of 7.6 of the 8 systems. The most difficult word is the verb conducir, which has 54 test instances that were disambiguated correctly by an average of 3.8 systems.</Paragraph>
</Section>
</Paper>