<?xml version="1.0" standalone="yes"?> <Paper uid="P92-1053"> <Title>A CLASS-BASED APPROACH TO LEXICAL DISCOVERY</Title> <Section position="2" start_page="0" end_page="0" type="metho"> <SectionTitle> 2 Word/Word Relationships </SectionTitle> <Paragraph position="0"> Mutual information is an information-theoretic measure of association frequently used with natural language data to gauge the &quot;relatedness&quot; between two words x and y. It is defined as follows:</Paragraph> <Paragraph position="1"> I(x; y) = log2 [ Pr(x, y) / (Pr(x) Pr(y)) ] (1) </Paragraph> <Paragraph position="2"> As an example of its use, consider Hindle's [1990] application of mutual information to the discovery of predicate-argument relations. Hindle investigates word co-occurrences as mediated by syntactic structure. A six-million-word sample of Associated Press news stories was parsed in order to construct a collection of subject/verb/object instances. On the basis of these data, Hindle calculates a co-occurrence score (an estimate of mutual information) for verb/object pairs and verb/subject pairs. Table 1 shows some of the verb/object pairs for the verb drink that occurred more than once, ranked by co-occurrence score, &quot;in effect giving the answer to the question 'what can you drink?'&quot; [Hindle, 1990], p. 270.</Paragraph> <Paragraph position="3"> Word/word relationships have proven useful, but are not appropriate for all applications. For example, *This work was supported by the following grants: ARO DAAL 03-89-C-0031, DARPA N00014-90-J-1863, NSF IRI 9016592, Ben Franklin 91S.3078C-1. 
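The co-occurrence score just described — mutual information estimated from verb/object counts — can be sketched as follows. All counts below are invented toy data, not Hindle's AP sample:

```python
import math
from collections import Counter

# Toy verb/object pairs standing in for parsed corpus data
# (invented counts for illustration only).
pairs = ([("drink", "beer")] * 8 + [("drink", "wine")] * 6 +
         [("drink", "water")] * 4 + [("open", "door")] * 10 +
         [("open", "wine")] * 1)

pair_counts = Counter(pairs)
total = len(pairs)
verb_counts = Counter(v for v, _ in pairs)
noun_counts = Counter(n for _, n in pairs)

def mutual_information(v, n):
    """I(v; n) = log2[Pr(v, n) / (Pr(v) Pr(n))], with probabilities
    estimated by relative frequency over the verb/object sample."""
    p_vn = pair_counts[(v, n)] / total
    p_v = verb_counts[v] / total
    p_n = noun_counts[n] / total
    return math.log2(p_vn / (p_v * p_n))

# Rank the observed objects of "drink" by score -- in effect answering
# "what can you drink?" over the toy sample.
drink_objects = sorted(
    (n for n in noun_counts if pair_counts[("drink", n)] > 0),
    key=lambda n: mutual_information("drink", n),
    reverse=True)
```

Objects that co-occur with a verb more often than their overall frequencies predict receive positive scores; incidental pairings score near or below zero.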
I am indebted to Eric Brill, Henry Gleitman, Lila Gleitman, Aravind Joshi, Christine Nakatani, and Michael Niv for helpful discussions. [Table 1: verb/object pairs for the verb drink, ranked by co-occurrence score (part of Hindle 1990, Table 2).]</Paragraph> <Paragraph position="4"> the selectional preferences of a verb constitute a relationship between a verb and a class of nouns rather than an individual noun.</Paragraph> </Section> <Section position="3" start_page="0" end_page="327" type="metho"> <SectionTitle> 3 Word/Class Relationships </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="0" end_page="327" type="sub_section"> <SectionTitle> 3.1 A Measure of Association </SectionTitle> <Paragraph position="0"> In this section, I propose a method for discovering class-based relationships in text corpora on the basis of mutual information, using for illustration the problem of finding &quot;prototypical&quot; object classes for verbs.</Paragraph> <Paragraph position="1"> Let V = {v1, v2, ..., vl} and N = {n1, n2, ..., nm} be the sets of verbs and nouns in a vocabulary, and C = {c | c ⊆ N} the set of noun classes; that is, the power set of N. Since the relationship being investigated holds between verbs and classes of their objects, the elementary events of interest are members of V × C. The joint probability of a verb and a class is estimated as</Paragraph> <Paragraph position="2"> Pr(v, c) ≈ [ Σ_{n ∈ c} count(v, n) ] / [ Σ_{v' ∈ V} Σ_{n' ∈ N} count(v', n') ] (2), and the association score is A(v, c) = Pr(c | v) · I(v; c) (3). </Paragraph> <Paragraph position="3"> The association score takes the mutual information between the verb and a class, and scales it according to the likelihood that a member of that class will actually appear as the object of the verb. 1</Paragraph> </Section> <Section position="2" start_page="327" end_page="327" type="sub_section"> <SectionTitle> 3.2 Coherent Classes </SectionTitle> <Paragraph position="0"> A search among a verb's object nouns requires at most |N| computations of the association score, and can thus be done exhaustively. An exhaustive search among object classes is impractical, however, since the number of classes is exponential. 
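A minimal sketch of the association score — mutual information between a verb and a noun class, scaled by Pr(c | v) — using the count-based estimate of equation (2). The verbs, nouns, and counts here are invented placeholders:

```python
import math
from collections import Counter

# Invented verb/object counts for illustration.
counts = Counter({("drink", "beer"): 8, ("drink", "wine"): 6,
                  ("drink", "water"): 4, ("drink", "idea"): 1,
                  ("open", "door"): 10, ("open", "wine"): 1})
total = sum(counts.values())

def pr_joint(v, c):
    """Equation (2): Pr(v, c) from counts of v with nouns in class c."""
    return sum(counts[(v, n)] for n in c) / total

def pr_verb(v):
    return sum(f for (v2, _), f in counts.items() if v2 == v) / total

def pr_class(c):
    return sum(f for (_, n), f in counts.items() if n in c) / total

def association(v, c):
    """A(v, c) = Pr(c | v) * I(v; c): mutual information between verb
    and class, scaled by how likely a member of c is as v's object."""
    p_vc = pr_joint(v, c)
    if p_vc == 0:
        return 0.0
    mi = math.log2(p_vc / (pr_verb(v) * pr_class(c)))
    return (p_vc / pr_verb(v)) * mi
```

On these toy counts, association("drink", {"beer", "wine", "water"}) comes out well above association("drink", {"idea"}): the class covers nearly all of drink's objects, so the mutual information is barely discounted.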
Clearly some way to constrain the search is needed. I propose restricting the search by imposing a requirement of coherence upon the classes to be considered. For example, among possible classes of objects for open, the class {closet, locker, store} is more coherent than {closet, locker, discourse} on intuitive grounds: every noun in the former class describes a repository of some kind, whereas the latter class has no such obvious interpretation.</Paragraph> <Paragraph position="1"> The WordNet lexical database [Miller, 1990] provides one way to structure the space of noun classes, in order to make the search computationally feasible. WordNet is a lexical/conceptual database constructed on psycholinguistic principles by George Miller and colleagues at Princeton University. Although I cannot judge how well WordNet fares with regard to its psycholinguistic aims, its noun taxonomy appears to have many of the qualities needed if it is to provide basic taxonomic knowledge for the purpose of corpus-based research in English, including broad coverage and multiple word senses.</Paragraph> <Paragraph position="2"> Given the WordNet noun hierarchy, the definition of &quot;coherent class&quot; adopted here is straightforward. Let words(w) be the set of nouns associated with a WordNet class w. 2 Definition. A noun class c ∈ C is coherent iff there is a WordNet class w such that c = {n ∈ N | n ∈ words(w)}.</Paragraph> </Section> </Section> <Section position="4" start_page="327" end_page="327" type="metho"> <SectionTitle> 4 Preliminary Results </SectionTitle> <Paragraph position="0"> An experiment was performed in order to discover the &quot;prototypical&quot; object classes for a set of 115 common English verbs. The counts of equation (2) were calculated by collecting a sample of verb/object pairs from the Brown corpus. 4 Direct objects were identified using a set of heuristics to extract only the surface object of the verb. 
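The coherence requirement of Section 3.2 can be sketched against a toy stand-in for the WordNet noun hierarchy. All class names and memberships below are invented for illustration, not actual WordNet synsets:

```python
# Toy stand-in for the WordNet noun hierarchy (invented classes).
children = {"entity": ["object", "abstraction"],
            "object": ["repository", "beverage"],
            "repository": [], "beverage": [], "abstraction": []}
direct_members = {"repository": {"closet", "locker", "store"},
                  "beverage": {"beer", "wine", "water"},
                  "abstraction": {"discourse", "idea"}}

def words(w):
    """words(w): all nouns associated with class w or any class below it."""
    nouns = set(direct_members.get(w, set()))
    for child in children.get(w, []):
        nouns |= words(child)
    return nouns

def coherent(c, vocabulary):
    """A noun class c is coherent iff c = {n in vocabulary | n in words(w)}
    for some class w in the hierarchy (the definition of Section 3.2)."""
    return any(c == {n for n in vocabulary if n in words(w)}
               for w in children)

vocab = {"closet", "locker", "store", "discourse",
         "beer", "wine", "water"}
```

Under this toy hierarchy, {closet, locker, store} is coherent (it is exactly the vocabulary restriction of the "repository" class) while {closet, locker, discourse} matches no class and is rejected, so the search need never consider it.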
Verb inflections were mapped down to the base form and plural nouns mapped down to singular. 5 For example, the sentence John ate two shiny red apples would yield the pair (eat, apple). The sentence These are the apples that John ate would not provide a pair for eat, since apple does not appear as its surface object.</Paragraph> <Paragraph position="1"> Given each verb, v, the &quot;prototypical&quot; object class was found by conducting a best-first search upwards in the WordNet noun hierarchy, starting with WordNet classes containing members that appeared as objects of the verb. Each WordNet class w considered was evaluated by calculating A(v, {n ∈ N | n ∈ words(w)}). Classes having too low a count (fewer than five occurrences with the verb) were excluded from consideration.</Paragraph> <Paragraph position="2"> The results of this experiment are encouraging.</Paragraph> <Paragraph position="3"> Table 2 shows the object classes discovered for the verb drink (compare to Table 1), and Table 3 the highest-scoring object classes for several other verbs.</Paragraph> <Paragraph position="4"> Recall from the definition in Section 3.2 that each WordNet class w in the tables appears as an abbreviation for {n ∈ N | n ∈ words(w)}; for example, ⟨intoxicant, [alcohol, ...]⟩ appears as an abbreviation for {whisky, cognac, wine, beer}.</Paragraph> <Paragraph position="5"> As a consequence of this definition, noun classes that are &quot;too small&quot; or &quot;too large&quot; to be coherent are excluded, and the problem of search through an exponentially large space of classes is reduced to search within the WordNet hierarchy. 3</Paragraph> </Section> <Section position="5" start_page="327" end_page="328" type="metho"> <SectionTitle> 5 Acquisition of Verb Properties </SectionTitle> <Paragraph position="0"> More work is needed to improve the performance of the technique proposed here. 
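The upward search of Section 4 can be sketched as follows: walk from the classes containing a verb's observed objects toward the root, score each class with the association measure, and skip classes with fewer than five occurrences with the verb. The hierarchy, counts, and the greedy walk (a simplified stand-in for true best-first search) are all invented for illustration:

```python
import math
from collections import Counter

# Invented hierarchy and counts (not real WordNet data or Brown counts).
parents = {"beer": "beverage", "wine": "beverage", "water": "beverage",
           "gasoline": "liquid", "door": "object",
           "beverage": "liquid", "liquid": "object", "object": None}
members = {"beverage": {"beer", "wine", "water"},
           "liquid": {"beer", "wine", "water", "gasoline"},
           "object": {"beer", "wine", "water", "gasoline", "door"}}
counts = Counter({("drink", "beer"): 8, ("drink", "wine"): 6,
                  ("drink", "water"): 4, ("pump", "gasoline"): 7,
                  ("open", "door"): 10})
total = sum(counts.values())

def association(v, c):
    """A(v, c) = Pr(c | v) * I(v; c), as in Section 3.1."""
    p_vc = sum(counts[(v, n)] for n in c) / total
    if p_vc == 0:
        return 0.0
    p_v = sum(f for (v2, _), f in counts.items() if v2 == v) / total
    p_c = sum(f for (_, n), f in counts.items() if n in c) / total
    return (p_vc / p_v) * math.log2(p_vc / (p_v * p_c))

def prototypical_class(v, min_count=5):
    """Walk upward from classes containing observed objects of v,
    keeping the best-scoring class whose count with v is at least
    min_count (five in the experiment of Section 4)."""
    frontier = [parents[n] for (v2, n) in counts
                if v2 == v and parents.get(n)]
    best, best_score = None, float("-inf")
    seen = set()
    while frontier:
        w = frontier.pop()
        if w in seen or w not in members:
            continue
        seen.add(w)
        if sum(counts[(v, n)] for n in members[w]) >= min_count:
            score = association(v, members[w])
            if score > best_score:
                best, best_score = w, score
        if parents.get(w):
            frontier.append(parents[w])
    return best
```

On the toy data, drink lands on the beverage class: widening to liquid admits gasoline, which dilutes the mutual information without adding any mass to Pr(c | v).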
At the same time, the ability to approximate a lexical/conceptual classification of nouns opens up a number of possible applications in lexical acquisition. What such applications have in common is the use of lexical associations as a window into semantic relationships. The technique described in this paper provides a new, hierarchical source of semantic knowledge for statistical applications. (Footnote 4: proper names were mapped to the token pname, a subclass of the classes ⟨someone, [person]⟩ and ⟨location, [location]⟩.) [Table residue; recoverable class labels include ⟨question, ...⟩, ⟨someone, [person, ...]⟩, ⟨stair, [step, ...]⟩, ⟨repast, ...⟩, ⟨cord, ...⟩, ⟨beverage, ...⟩, ⟨nutrient, [food, ...]⟩, ⟨sensory-faculty, [sense, ...]⟩, ⟨part, [character, ...]⟩, ⟨liquid, ...⟩, ⟨cover, [covering, ...]⟩, ⟨button, ...⟩, ⟨written-material, [writing, ...]⟩, ⟨music, ...⟩.] This section briefly discusses one area where this kind of knowledge might be exploited. Diathesis alternations are variations in the way that a verb syntactically expresses its arguments [Levin, 1989]. For example, 1(a,b) shows an instance of the indefinite object alternation, and 2(a,b) shows an instance of the causative/inchoative alternation.</Paragraph> <Paragraph position="1"> 1 a. John ate lunch.</Paragraph> <Paragraph position="2"> b. John ate.</Paragraph> <Paragraph position="3"> 2 a. John opened the door.</Paragraph> <Paragraph position="4"> b. The door opened.</Paragraph> <Paragraph position="5"> Such phenomena are of particular interest in the study of how children learn the semantic and syntactic properties of verbs, because they stand at the border of syntax and lexical semantics. 
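Alternation classes of this kind can be probed quantitatively. Given association scores for two groups of verbs — one allowing implicit objects, one requiring an overt object — a stdlib permutation test can check whether the group difference is larger than chance. The scores below are invented placeholders, not the paper's data, and the permutation test is only one reasonable choice of significance test:

```python
import random
from statistics import mean

# Hypothetical association scores with each verb's best object class
# (invented for illustration).
implicit_ok = [1.8, 2.1, 1.6, 2.4, 1.9]   # verbs allowing implicit objects
overt_only = [0.9, 1.1, 0.7, 1.2, 0.8]    # verbs requiring an overt object

def permutation_p(a, b, trials=10000, seed=0):
    """One-sided permutation test of mean(a) > mean(b): how often does
    a random relabeling of the pooled scores produce a group difference
    at least as large as the observed one?"""
    rng = random.Random(seed)
    observed = mean(a) - mean(b)
    pooled = a + b
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = mean(pooled[:len(a)]) - mean(pooled[len(a):])
        if diff >= observed:
            hits += 1
    return hits / trials
```

A small p-value from permutation_p(implicit_ok, overt_only) would suggest the two verb groups genuinely differ in how strongly they select their prototypical object classes.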
There are numerous possible explanations for why verbs fall into particular classes of alternations, ranging from shared semantic properties of verbs within a class, to pragmatic factors, to &quot;lexical idiosyncrasy.&quot; Statistical techniques like the one described in this paper may be useful in investigating relationships between verbs and their arguments, with the goal of contributing data to the study of diathesis alternations, and, ideally, in constructing a computational model of verb acquisition. For example, in the experiment described in Section 4, the verbs participating in &quot;implicit object&quot; alternations 6 appear to have higher association scores with their &quot;prototypical&quot; object classes than verbs for which implicit objects are disallowed. Preliminary results, in fact, show a statistically significant difference between the two groups. (Footnote 6: the indefinite object alternation [Levin, 1989] and the specified object alternation [Cote, 1992].) Might such shared information-theoretic properties of verbs play a role in their acquisition, in the same way that shared semantic properties might? On a related topic, Grimshaw has recently suggested that the syntactic bootstrapping hypothesis for verb acquisition [Gleitman, 1991] be extended in such a way that alternations such as the causative/inchoative alternation (e.g. 2(a,b)) are learned using class information about the observed subjects and objects of the verb, in addition to subcategorization information. 7 I hope to extend the work on verb/object associations described here to other arguments of the verb in order to explore this suggestion.</Paragraph> </Section> </Paper>