<?xml version="1.0" standalone="yes"?> <Paper uid="P94-1020"> <Title>Word-Sense Disambiguation Using Decomposable Models</Title> <Section position="1" start_page="0" end_page="143" type="abstr"> <SectionTitle> Abstract </SectionTitle> <Paragraph position="0"> Most probabilistic classifiers used for word-sense disambiguation have either been based on only one contextual feature or have used a model that is simply assumed to characterize the interdependencies among multiple contextual features. In this paper, a different approach to formulating a probabilistic model is presented along with a case study of the performance of models produced in this manner for the disambiguation of the noun interest. We describe a method for formulating probabilistic models that use multiple contextual features for word-sense disambiguation, without requiring untested assumptions regarding the form of the model. Using this approach, the joint distribution of all variables is described by only the most systematic variable interactions, thereby limiting the number of parameters to be estimated, supporting computational efficiency, and providing an understanding of the data.</Paragraph> <SectionTitle> Introduction </SectionTitle> <Paragraph position="1"> This paper presents a method for constructing probabilistic classifiers for word-sense disambiguation that offers advantages over previous approaches. Most previous efforts have not attempted to systematically identify the interdependencies among contextual features (such as collocations) that can be used to classify the meaning of an ambiguous word. Many researchers have performed disambiguation on the basis of only a single feature, while others who do consider multiple contextual features assume that all contextual features are either conditionally independent given the sense of the word or fully independent. Of course, all contextual features could be treated as interdependent, but, if there are several features, such a model could have too many parameters to estimate in practice.</Paragraph> <Paragraph position="2"> We present a method for formulating probabilistic models that describe the relationships among all variables in terms of only the most important interdependencies, that is, models of a certain class that are good approximations to the joint distribution of contextual features and word meanings. This class is the set of decomposable models: models that can be expressed as a product of marginal distributions, where each marginal is composed of interdependent variables. The test used to evaluate a model gives preference to those that have the fewest interdependencies, thereby selecting models expressing only the most systematic variable interactions.</Paragraph> <Paragraph position="3"> To summarize the method, one first identifies informative contextual features (where &quot;informative&quot; is a well-defined notion, discussed in Section 2). Then, out of all possible decomposable models characterizing interdependency relationships among the selected variables, those that are found to produce good approximations to the data are identified (using the test mentioned above) and one of those models is used to perform disambiguation. Thus, we are able to use multiple contextual features without the need for untested assumptions regarding the form of the model.
Further, approximating the joint distribution of all variables with a model identifying only the most important systematic interactions among variables limits the number of parameters to be estimated, supports computational efficiency, and provides an understanding of the data. The biggest limitation associated with this method is the need for large amounts of sense-tagged data. Because asymptotic distributions of the test statistics are used, the validity of the results obtained using this approach is compromised when it is applied to sparse data (this point is discussed further in Section 2).</Paragraph> <Paragraph position="4"> To test the method of model selection presented in this paper, a case study of the disambiguation of the noun interest was performed. Interest was selected because it has been shown in previous studies to be a difficult word to disambiguate. We selected as the set of sense tags all non-idiomatic noun senses of interest defined in the electronic version of Longman's Dictionary of Contemporary English (LDOCE) ([23]). Using the models produced in this study, we are able to assign an LDOCE sense tag to every usage of interest in a held-out test set with 78% accuracy. Although it is difficult to compare our results to those reported for previous disambiguation experiments, as will be discussed later, we feel these results are encouraging.</Paragraph> <Paragraph position="5"> The remainder of the paper is organized as follows.</Paragraph> <Paragraph position="6"> Section 2 provides a more complete definition of the methodology used for formulating decomposable models and Section 3 describes the details of the case study performed to test the approach. The results of the disambiguation case study are discussed and contrasted with similar efforts in Sections 4 and 5. Section 6 is the conclusion.</Paragraph> <Section position="1" start_page="139" end_page="141" type="sub_section"> <SectionTitle> Decomposable Models </SectionTitle> <Paragraph position="0"> In this section, we address the problem of finding the models that generate good approximations to a given discrete probability distribution, as selected from among the class of decomposable models. Decomposable models are a subclass of log-linear models and, as such, can be used to characterize and study the structure of data ([2]), that is, the interactions among variables as evidenced by the frequency with which the values of the variables co-occur. Given a data sample of objects, where each object is described by d discrete variables, let x = (x_1, x_2, ..., x_q) be a q-dimensional vector of counts, where each x_i is the frequency with which one of the possible combinations of the values of the d variables occurs in the data sample (and the frequencies of all such possible combinations are included in x). The log-linear model expresses the logarithm of E[x] (the mean of x) as a linear sum of the contributions of the &quot;effects&quot; of the variables and the interactions among the variables.</Paragraph>
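To make the count-vector construction concrete, here is a minimal Python sketch (toy data invented for illustration; not code from the paper) that builds x for a sample described by d = 3 discrete variables:

```python
from collections import Counter
from itertools import product

# Toy sample: each observation is described by d = 3 discrete variables,
# e.g. a morphological feature, a collocational feature, and a sense tag.
sample = [
    ("plural", "rate-present", "sense1"),
    ("singular", "rate-absent", "sense2"),
    ("plural", "rate-present", "sense1"),
    ("singular", "rate-present", "sense1"),
]

domains = [
    ("singular", "plural"),
    ("rate-present", "rate-absent"),
    ("sense1", "sense2"),
]

# x is the q-dimensional vector of counts over all q = 2 * 2 * 2 = 8 possible
# value combinations; combinations never observed get a count of zero.
counts = Counter(sample)
x = [counts[combo] for combo in product(*domains)]
print(x)  # one frequency per possible event, as in the multinomial setup
```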
<Paragraph position="1"> Assume that a random sample consisting of N independent and identically distributed trials (i.e., all trials are described by the same probability density function) is drawn from a discrete d-variate distribution. In such a situation, the outcome of each trial must be an event corresponding to a particular combination of the values of the d variables.</Paragraph> <Paragraph position="2"> Let p_i be the probability that the i-th event (i.e., the i-th possible combination of the values of all variables) occurs on any trial and let x_i be the number of times that the i-th event occurs in the random sample. Then (x_1, x_2, ..., x_q) has a multinomial distribution with parameters N and p_1, ..., p_q. For a given sample size, N, the likelihood of selecting any particular random sample is defined once the population parameters, that is, the p_i's or, equivalently, the E[x_i]'s (where E[x_i] is the mean frequency of event i), are known. Log-linear models express the value of the logarithm of each E[x_i] or p_i as a linear sum of a smaller (i.e., less than q) number of new population parameters that characterize the effects of individual variables and their interactions.</Paragraph> <Paragraph position="3"> The theory of log-linear models specifies the sufficient statistics (functions of x) for estimating the effects of each variable and of each interaction among variables on E[x]. The sufficient statistics are the sample counts from the highest-order marginals composed of only interdependent variables. These statistics are the maximum likelihood estimates of the mean values of the corresponding marginal distributions. Consider, for example, a random sample taken from a population in which four contextual features are used to characterize each occurrence of an ambiguous word. The sufficient statistics for the model describing contextual features one and two as independent but all other variables as interdependent are, for all i, j, k, m, n (in this and all subsequent equations, f is an abbreviation for feature):</Paragraph> <Paragraph position="4"> count(f1 = i, f3 = k, f4 = m, tag = n) and count(f2 = j, f3 = k, f4 = m, tag = n)</Paragraph> <Paragraph position="5"> Within the class of decomposable models, the maximum likelihood estimate for E[x] reduces to the product of the sufficient statistics divided by the sample counts defined in the marginals composed of the common elements in the sufficient statistics. As such, decomposable models are models that can be expressed as a product of marginals (marginals of counts or relative frequencies, depending on whether the parameters are expressed as expected frequencies or probabilities, respectively), where each marginal consists of only interdependent variables.</Paragraph> <Paragraph position="6"> Returning to our previous example, the maximum likelihood estimate for E[x] is, for all i, j, k, m, n:</Paragraph> <Paragraph position="7"> E[x_ijkmn] = count(f1 = i, f3 = k, f4 = m, tag = n) × count(f2 = j, f3 = k, f4 = m, tag = n) / count(f3 = k, f4 = m, tag = n)</Paragraph> <Paragraph position="8"> Expressing the population parameters as probabilities instead of expected counts, the equation above can be rewritten as follows, where the sample marginal relative frequencies are the maximum likelihood estimates of the population marginal probabilities. For all i, j, k, m, n:</Paragraph> <Paragraph position="9"> P(f1 = i, f2 = j, f3 = k, f4 = m, tag = n) = P(f1 = i, f3 = k, f4 = m, tag = n) × P(f2 = j, f3 = k, f4 = m, tag = n) / P(f3 = k, f4 = m, tag = n)</Paragraph> <Paragraph position="10"> The degree to which the data is approximated by a model is called the fit of the model. In this work, the likelihood ratio statistic, G², is used as the measure of the goodness-of-fit of a model. It is distributed asymptotically as χ² with degrees of freedom corresponding to the number of interactions (and/or variables) omitted from (unconstrained in) the model.</Paragraph>
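As an illustration of the estimate just given, the following sketch (invented toy data; not code from the paper) computes the decomposable-model probability estimate for the four-feature example directly from the marginal counts that serve as sufficient statistics:

```python
from collections import Counter

# Toy data: tuples (f1, f2, f3, f4, tag); all values invented for illustration.
data = [
    (0, 1, 0, 1, "s1"), (0, 0, 0, 1, "s1"), (1, 1, 0, 1, "s1"),
    (1, 0, 1, 0, "s2"), (1, 1, 1, 0, "s2"), (0, 1, 1, 0, "s2"),
]
N = len(data)

# Sufficient statistics for the model in which f1 and f2 are independent
# given (f3, f4, tag): the two four-way marginals, plus their common marginal.
m134t = Counter((f1, f3, f4, t) for f1, _, f3, f4, t in data)
m234t = Counter((f2, f3, f4, t) for _, f2, f3, f4, t in data)
m34t = Counter((f3, f4, t) for _, _, f3, f4, t in data)

def p_hat(f1, f2, f3, f4, t):
    """Decomposable-model estimate of P(f1, f2, f3, f4, tag): the product of
    the marginals divided by the marginal of their common variables."""
    common = m34t[(f3, f4, t)]
    if common == 0:
        return 0.0
    return (m134t[(f1, f3, f4, t)] / N) * (m234t[(f2, f3, f4, t)] / N) / (common / N)

print(p_hat(0, 1, 0, 1, "s1"))  # 0.222... on this toy sample
```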
<Paragraph position="11"> Assessing the fit of a model in terms of the significance of its G² statistic gives preference to models with the fewest interdependencies, thereby assuring the selection of a model specifying only the most systematic variable interactions.</Paragraph> <Paragraph position="12"> Within the framework described above, the process of model selection becomes one of hypothesis testing, where each pattern of dependencies among variables expressible in terms of a decomposable model is postulated as a hypothetical model and its fit to the data is evaluated. The &quot;best fitting&quot; models are identified, in the sense that the significance of their reference χ² values is large, and, from among this set, a conceptually appealing model is chosen. The exhaustive search of decomposable models can be conducted as described in [12].</Paragraph> <Paragraph position="13"> What we have just described is a method for approximating the joint distribution of all variables with a model containing only the most important systematic interactions among variables. This approach to model formulation limits the number of parameters to be estimated, supports computational efficiency, and provides an understanding of the data. The single biggest limitation remaining in this day of large-memory, high-speed computers results from reliance on asymptotic theory to describe the distribution of the maximum likelihood estimates and the likelihood ratio statistic. The effect of this reliance is felt most acutely when working with large sparse multinomials, which is exactly when this approach to model construction is most needed. When the data is sparse, the usual asymptotic properties of the distribution of the likelihood ratio statistic and the maximum likelihood estimates may not hold. In such cases, the fit of the model will appear to be too good, indicating that the model is in fact overconstrained for the data available. In this work, we have limited ourselves to considering only those models with sufficient statistics that are not sparse, where the significance of the reference χ² is not unreasonable; most such models have sufficient statistics that are lower-order marginal distributions. In the future, we will investigate other goodness-of-fit tests ([18], [1], [22]) that are perhaps more appropriate for sparse data.</Paragraph>
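The following sketch shows the goodness-of-fit machinery that drives the search: compute G² for a hypothesized model and refer it to the asymptotic χ² distribution (toy numbers invented for illustration; scipy is assumed to be available):

```python
import math
from scipy.stats import chi2

def g_squared(observed, expected):
    """Likelihood ratio statistic G² = 2 * sum(obs * ln(obs / exp)),
    summed over cells with non-zero observed counts."""
    return 2.0 * sum(o * math.log(o / e) for o, e in zip(observed, expected) if o > 0)

# Toy cell counts: observed frequencies vs. frequencies expected under some
# hypothesized decomposable model (both invented for illustration).
observed = [30, 10, 12, 48]
expected = [25.2, 16.8, 16.8, 41.2]

g2 = g_squared(observed, expected)
df = 1  # degrees of freedom: the number of interactions omitted from the model
p_value = chi2.sf(g2, df)  # a large p-value means the model's fit is acceptable
print(g2, p_value)
```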
<SectionTitle> The Experiment </SectionTitle> <Paragraph position="14"> Unlike several previous approaches to word-sense disambiguation ([29], [5], [7], [10]), nothing in this approach limits the selection of sense tags to a particular number or type of meaning distinctions. In this study, our goal was to address a non-trivial case of ambiguity, but one that would allow some comparison of results with previous work. As a result of these considerations, the word interest was chosen as a test case, and the six non-idiomatic noun senses of interest defined in LDOCE were selected as the tag set. The only restriction limiting the choice of corpus is the need for large amounts of on-line data. Due to availability, the Penn Treebank Wall Street Journal corpus was selected.</Paragraph> <Paragraph position="15"> In total, 2,476 usages of interest as a noun were automatically extracted from the corpus and manually assigned sense tags corresponding to the LDOCE definitions. During tagging, 107 usages were removed from the data set due to the authors' inability to classify them in terms of the set of LDOCE senses. Of the rejected usages, 43 are metonymic, and the rest are hybrid meanings specific to the domain, such as public interest group.</Paragraph> <Paragraph position="16"> Because our sense distinctions are not merely between two or three clearly defined core senses of a word, the task of hand-tagging the tokens of interest required subtle judgments, a point that has also been observed by other researchers disambiguating with respect to the full set of LDOCE senses ([6], [28]). Although this undoubtedly degraded the accuracy of the manually assigned sense tags (and thus the accuracy of the study as well), this problem seems unavoidable when making semantic distinctions beyond clearly defined core senses of a word ([17], [11], [14], [15]).</Paragraph> <Paragraph position="17"> Of the 2,369 sentences containing the sense-tagged usages of interest, 600 were randomly selected and set aside to serve as the test set. The distribution of sense tags in the data set is presented in Table 1.</Paragraph> <Paragraph position="18"> We now turn to the selection of individually informative contextual features. In our approach to disambiguation, a contextual feature is judged to be informative (i.e., correlated with the sense tag of the ambiguous word) if the model for independence between that feature and the sense tag is judged to have an extremely poor fit using the test described in Section 2. The worse the fit, the more informative the feature is judged to be (similar to the approach suggested in [9]).</Paragraph> <Paragraph position="19"> Only features whose values can be automatically determined were considered, and preference was given to features that intuitively are not specific to interest (but see the discussion of collocational features below). An additional criterion was that the features not have too many possible values, in order to curtail sparsity in the resulting data matrix.</Paragraph> <Paragraph position="20"> We considered three different types of contextual features: morphological, collocation-specific, and class-based, with part-of-speech (POS) categories serving as the word classes. Within these classes, we chose a number of specific features, each of which was judged to be informative as described above. We used one morphological feature: a dichotomous variable indicating the presence or absence of the plural form. The values of the class-based variables are a set of twenty-five POS tags formed, with one exception, from the first letter of the tags used in the Penn Treebank corpus. Two different sets of class-based variables were selected. The first set contained only the POS tags of the word immediately preceding and the word immediately succeeding the ambiguous word, while the second set was extended to include the POS tags of the two immediately preceding and two succeeding words.</Paragraph> <Paragraph position="21"> A limited number of collocation-specific variables were selected, where the term collocation is used loosely to refer to a specific spelling form occurring in the same sentence as the ambiguous word. All of our collocational variables are dichotomous, indicating the presence or absence of the associated spelling form. While collocation-specific variables are, by definition, specific to the word being disambiguated, the procedure used to select them is general. The search for collocation-specific variables was limited to the 400 most frequent spelling forms in a data sample composed of sentences containing interest. Out of these 400, the five spelling forms found to be the most informative using the test described above were selected as the collocational variables.</Paragraph>
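A minimal sketch of this informativeness test (toy counts invented for illustration; scipy assumed available): build the contingency table of feature values against sense tags and test the independence model with G². The same score, computed for each of the 400 candidate spelling forms, also yields the ranking of candidate collocations described above.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Toy contingency table: rows = values of a dichotomous feature (e.g. presence
# or absence of the plural form), columns = sense tags. Counts are invented.
table = np.array([
    [60, 5, 10],   # feature present
    [15, 40, 35],  # feature absent
])

# lambda_="log-likelihood" selects the likelihood ratio (G²) statistic.
g2, p_value, dof, _ = chi2_contingency(table, lambda_="log-likelihood")
print(g2, p_value, dof)

# A tiny p-value rejects the independence model, so the feature is judged
# informative; the worse the fit (the larger g2 at fixed dof), the more
# informative the feature.
```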
<Paragraph position="22"> It is not enough to know that each of the features described above is highly correlated with the meaning of the ambiguous word. In order to use the features in concert to perform disambiguation, a model describing the interactions among them is needed. Since we had no reason to prefer, a priori, one form of model over another, all models describing possible interactions among the features were generated, and a model with good fit was selected. Models were generated and tested as described in Section 2.</Paragraph> </Section> <Section position="2" start_page="141" end_page="142" type="sub_section"> <SectionTitle> Results </SectionTitle> <Paragraph position="0"> Both the form and the performance of the model selected for each set of variables are presented in Table 2.</Paragraph> <Paragraph position="1"> Performance is measured in terms of the total percentage of the test set tagged correctly by a classifier using the specified model. This measure combines both precision and recall. Portions of the test set that are not covered by the estimates of the parameters made from the training set are not tagged and, therefore, counted as wrong.</Paragraph> <Paragraph position="2"> The form of the model describes the interactions among the variables by expressing the joint distribution of the values of all contextual features and sense tags as a product of conditionally independent marginals, with each marginal being composed of non-independent variables. Models of this form describe a Markov field ([8], [21]) that can be represented graphically as is shown in Figure 1 for Model 4 of Table 2. In both Figures 1 and 2, each of the variables short, in, pursue, rate(s), and percent (i.e., the sign '%') indicates the presence or absence of that spelling form. Each of the variables r1pos, r2pos, l1pos, and l2pos is the POS tag of the word 1 or 2 positions to the left (l) or right (r). The variable ending is whether interest is in the singular or plural, and the variable tag is the sense tag assigned to interest.</Paragraph> <Paragraph position="3"> The graphical representation of Model 4 is such that there is a one-to-one correspondence between the nodes of the graph and the sets of conditionally independent variables in the model. The semantics of the graph topology is that all variables that are not directly connected in the graph are conditionally independent given the values of the variables mapping to the connecting nodes. For example, if node a separates node b from node c in the graphical representation of a Markov field, then the variables mapping to node b are conditionally independent of the variables mapping to node c given the values of the variables mapping to node a. In the case of Model 4, Figure 1 graphically depicts the fact that the value of the morphological variable ending is conditionally independent of the values of all other contextual features given the sense tag of the ambiguous word.</Paragraph>
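The separation semantics just described can be checked mechanically. Below is a small sketch using networkx; the edge list is only a loose illustration in the spirit of Model 4, not an exact reproduction of Figure 1 (which is not available here):

```python
import networkx as nx

# Variables are nodes; an edge marks a direct interdependency. Variables not
# directly connected are conditionally independent given a separating set.
g = nx.Graph()
g.add_edges_from([
    ("tag", "ending"), ("tag", "l1pos"), ("tag", "r1pos"),
    ("tag", "rate"), ("tag", "percent"), ("rate", "percent"),
])

def cond_independent(graph, a, b, given):
    """a and b are conditionally independent given `given` iff removing the
    `given` nodes disconnects a from b (every path passes through `given`)."""
    h = graph.copy()
    h.remove_nodes_from(given)
    return not nx.has_path(h, a, b)

# `ending` touches the rest of the model only through the sense tag:
print(cond_independent(g, "ending", "rate", given=["tag"]))  # True
print(cond_independent(g, "ending", "rate", given=[]))       # False
```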
<Paragraph position="5"> The Markov field depicted in Figure 1 is represented by an undirected graph because conditional independence is a symmetric relationship. But decomposable models can also be characterized by directed graphs and interpreted according to the semantics of a Bayesian network ([21]; also described as &quot;recursive causal models&quot; in [27] and [16]). In a Bayesian network, the notions of causation and influence replace the notion of conditional independence in a Markov field. The parents of a variable (or set of variables) V are those variables judged to be the direct causes of, or to have direct influence on, the value of V; V is called a &quot;response&quot; to those causes or influences. The Bayesian network representation of a decomposable model embodies an explicit ordering of the n variables in the model such that variable i may be considered a response to some or all of variables {i + 1, ..., n}, but is not thought of as a response to any one of the variables {1, ..., i - 1}.</Paragraph> <Paragraph position="6"> In all models presented in this paper, the sense tag of the ambiguous word causes or influences the values of all other variables in the model. The Bayesian network representation of Model 4 is presented in Figure 2. In Model 4, the variables in and percent are treated as influencing the values of rate, short, and pursue in order to achieve an ordering of variables as described above.</Paragraph>
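Read as a Bayesian network with the sense tag as the common cause, a decomposable model factors into conditionals and can be used directly to tag new occurrences. Here is a minimal sketch with invented data, using a Naive-Bayes-like special case in which every feature responds only to the tag (deliberately simpler than Model 4):

```python
from collections import Counter

# Toy training data: (tag, ending, l1pos) triples, invented for illustration.
train = [("s1", "plural", "NN"), ("s1", "plural", "IN"), ("s1", "singular", "NN"),
         ("s2", "singular", "DT"), ("s2", "singular", "IN"), ("s2", "plural", "DT")]

N = len(train)
n_t = Counter(t for t, e, p in train)        # marginal of tag
n_te = Counter((t, e) for t, e, p in train)  # marginal of (tag, ending)
n_tp = Counter((t, p) for t, e, p in train)  # marginal of (tag, l1pos)

def joint(t, e, p):
    """P(tag) * P(ending | tag) * P(l1pos | tag): the directed factorization
    of the network tag -> ending, tag -> l1pos."""
    if n_t[t] == 0:
        return 0.0
    return (n_t[t] / N) * (n_te[(t, e)] / n_t[t]) * (n_tp[(t, p)] / n_t[t])

def classify(e, p):
    """Tag a new occurrence by maximizing the joint under the network."""
    return max(n_t, key=lambda t: joint(t, e, p))

print(classify("plural", "NN"))  # -> "s1" on this toy data
```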
<Paragraph position="7"> Many researchers have avoided characterizing the interactions among multiple contextual features by considering only one feature in determining the sense of an ambiguous word. Techniques for identifying the optimum feature to use in disambiguating a word are presented in [7], [30] and [5]. Other works consider multiple contextual features in performing disambiguation without formally characterizing the relationships among the features. The majority of these efforts ([13], [31]) weight each feature in predicting the sense of an ambiguous word in accordance with frequency information, without considering the extent to which the features co-occur with one another. Gale, Church and Yarowsky ([10]) and Yarowsky ([29]) formally characterize the interactions that they consider in their model, but they simply assume that their model fits the data.</Paragraph> <Paragraph position="8"> Other researchers have proposed approaches to systematically combining information from multiple contextual features in determining the sense of an ambiguous word. Schutze ([26]) derived contextual features from a singular value decomposition of a matrix of letter four-gram co-occurrence frequencies, thereby assuring the independence of all features. Unfortunately, interpreting a contextual feature that is a weighted combination of letter four-grams is difficult. Further, the clustering procedure used to assign word meaning based on these features is such that the resulting sense clusters do not have known statistical properties. This makes it impossible to generalize the results to other data sets.</Paragraph> <Paragraph position="9"> Black ([3]) used decision trees ([4]) to define the relationships among a number of pre-specified contextual features, which he called &quot;contextual categories&quot;, and the sense tags of an ambiguous word. The tree construction process used by Black partitions the data according to the values of one contextual feature before considering the values of the next, thereby treating all features incorporated in the tree as interdependent. The method presented here for using information from multiple contextual features is more flexible and makes better use of a small data set by eliminating the need to treat all features as interdependent.</Paragraph> <Paragraph position="10"> The work that bears the closest resemblance to the work presented here is the maximum entropy approach to developing language models ([24], [25], [19] and [20]). Although this approach has not been applied to word-sense disambiguation, there is a strong similarity between that method of model formulation and our own.</Paragraph> <Paragraph position="11"> A maximum entropy model for multivariate data is the likelihood function with the highest entropy that satisfies a pre-defined set of linear constraints on the underlying probability estimates. The constraints describe interactions among variables by specifying the expected frequency with which the values of the constrained variables co-occur. When the expected frequencies specified in the constraints are linear combinations of the observed frequencies in the training data, the resulting maximum entropy model is equivalent to a maximum likelihood model, which is the type of model used here.</Paragraph> <Paragraph position="12"> To date, in the area of natural language processing, the principles underlying the formulation of maximum entropy models have been used only to estimate the parameters of a model. Although the method described in this paper for finding a good approximation to the joint distribution of a set of discrete variables makes use of maximum likelihood models, the scope of the technique we are describing extends beyond parameter estimation to include selecting the form of the model that approximates the joint distribution.</Paragraph> <Paragraph position="13"> Several of the studies mentioned in this section have used interest as a test case, and all of them (with the exception of Schutze [26]) considered four possible meanings for that word. In order to facilitate comparison of our work with previous studies, we re-estimated the parameters of our best model and tested it using data containing only the four LDOCE senses corresponding to those used by others (usages not tagged as being one of these four senses were removed from both the test and training data sets). The results of the modified experiment along with a summary of the published results of previous studies are presented in Table 3.</Paragraph> <Paragraph position="14"> While it is true that all of the studies reported in Table 3 used four senses of interest, it is not clear that any of the other experimental parameters were held constant in all studies. Therefore, this comparison is only suggestive. In order to facilitate more meaningful comparisons in the future, we are donating the data used in this experiment to the Consortium for Lexical Research (ftp site: clr.nmsu.edu) where it will be available to all interested parties.</Paragraph> </Section>
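One standard way to fit a log-linear model subject to marginal co-occurrence constraints of the kind discussed above is iterative proportional fitting; it is not described in this paper, but the sketch below (toy numbers invented for illustration) shows the idea: repeatedly rescale a table until it matches the prescribed marginal totals.

```python
import numpy as np

def ipf(table, row_target, col_target, iters=100):
    """Iterative proportional fitting: rescale `table` until its row and
    column sums match the prescribed marginal totals (the linear constraints)."""
    t = table.astype(float)
    for _ in range(iters):
        t *= (row_target / t.sum(axis=1))[:, None]  # match the row marginals
        t *= (col_target / t.sum(axis=0))[None, :]  # match the column marginals
    return t

# Toy 2x2 example: start from a uniform table and impose observed marginals.
fitted = ipf(np.ones((2, 2)),
             row_target=np.array([70.0, 30.0]),
             col_target=np.array([40.0, 60.0]))
print(fitted)  # rows sum to (70, 30) and columns to (40, 60)
```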
<Section position="3" start_page="142" end_page="143" type="sub_section"> <SectionTitle> Conclusions and Future Work </SectionTitle> <Paragraph position="0"> In this paper, we presented a method for formulating probabilistic models that use multiple contextual features for word-sense disambiguation without requiring untested assumptions regarding the form of the model.</Paragraph> <Paragraph position="1"> In this approach, the joint distribution of all variables is described by only the most systematic variable interactions, thereby limiting the number of parameters to be estimated, supporting computational efficiency, and providing an understanding of the data. Further, different types of variables, such as class-based and collocation-specific ones, can be used in combination with one another. We also presented the results of a study testing this approach. The results suggest that the models produced in this study perform as well as or better than previous efforts on a difficult test case. We are investigating several extensions to this work.</Paragraph> <Paragraph position="2"> In order to reasonably consider doing large-scale word-sense disambiguation, it is necessary to eliminate the need for large amounts of manually sense-tagged data.</Paragraph> <Paragraph position="3"> In the future, we hope to develop a parametric model or models applicable to a wide range of content words and to estimate the parameters of those models from untagged data. To those ends, we are currently investigating a means of obtaining maximum likelihood estimates of the parameters of decomposable models from untagged data. The procedure we are using is a variant of the EM algorithm that is specific to models of the form produced in this study. Preliminary results are mixed, with performance being reasonably good on models with low-order marginals (e.g., 63% of the test set was tagged correctly with Model 1 using parameters estimated in this manner) but poorer on models with higher-order marginals, such as Model 4. Work is needed to identify and constrain the parameters that cannot be estimated from the available data and to determine the amount of data needed for this procedure.</Paragraph> <Paragraph position="4"> We also hope to integrate probabilistic disambiguation models, of the type described in this paper, with a constraint-based knowledge base such as WordNet. In the past, there have been two types of approaches to word-sense disambiguation: 1) a probabilistic approach such as that described here, which bases the choice of sense tag on the observed joint distribution of the tags and contextual features, and 2) a symbolic, knowledge-based approach that postulates some kind of relational or constraint structure among the words to be tagged.</Paragraph> <Paragraph position="5"> We hope to combine these methodologies and thereby derive the benefits of both. Our approach to combining these two paradigms hinges on the network representations of our probabilistic models as described in Section 4 and will make use of the methods presented in [21].</Paragraph> </Section> </Section> </Paper>