<?xml version="1.0" standalone="yes"?>
<Paper uid="P80-1041">
  <Title>Develop a Computational Methodology for Deriving Natural Language Semantic Structures</Title>
  <Section position="1" start_page="0" end_page="153" type="metho">
    <SectionTitle>
REQUIREMENTS OF TEXT PROCESSING LEXICONS
Kenneth C. Litkowski
16729 Shea Lane, Gaithersburg, Md. 20760
</SectionTitle>
    <Paragraph position="0"> Five years ago, Dwight Bolinger \[1\] wrote that efforts to represent meaning had not yet made use of the insights of lexicography. The few substantial efforts, such as those spearheaded by Olney \[2,3\], Mel'cuk \[4\], Smith \[5\], and Simmons \[6,7\], made some progress, but never came to fruition. Today, lexicography and its products, the dictionaries, remain an untapped resource of uncertain value. Indeed, many who have analyzed the contents of a dictionary have concluded that it is of little value to linguistics or artificial intelligence. Because of the size and complexity of a dictionary, perhaps such a conclusion is inevitable, but I believe it is wrong. To avoid becoming irretrievably lost in the minutiae of a dictionary and to view the real potential of this resource, it is necessary to develop a comprehensive model within which a dictionary's detail can be tied together. When this is done, I believe one can identify the requirements for a semantic representation of an entry in the lexicon to be used in natural language processing systems. I describe herein what I have learned from this type of effort.</Paragraph>
    <Paragraph position="1"> I began with the objective of identifying primitive words or concepts by following definitional paths within a dictionary. To search for these, I developed a model of a dictionary using the theory of labeled directed graphs. In this model, a point or node is taken to represent a definition and a line or arc is taken to represent a derivational relationship between definitions. With such a model, I could use theorems of graph theory to predict the existence and form of primitives within the dictionary. This justified continued effort to attempt to find such primitives.</Paragraph>
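The graph model can be made concrete. The following Python sketch is mine, not the paper's: each node is a sense-numbered definition, each arc points from a definition to a definition it draws on, and candidate primitive sets are taken to be strongly connected components with no arcs leaving them (definitions ultimately defined only in terms of one another). The tiny lexicon and the sink-component reading of "primitive" are illustrative assumptions.

```python
def tarjan_scc(graph):
    """Tarjan's algorithm: return the strongly connected components
    of a directed graph given as a dict of adjacency lists."""
    index, low, stack, on_stack, sccs = {}, {}, [], set(), []
    counter = [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

def candidate_primitives(graph, sccs):
    """Components with no outgoing arcs to other components:
    definitions that bottom out in one another."""
    comp_of = {}
    for i, comp in enumerate(sccs):
        for v in comp:
            comp_of[v] = i
    sinks = set(range(len(sccs)))
    for v, outs in graph.items():
        for w in outs:
            if comp_of[v] != comp_of[w]:
                sinks.discard(comp_of[v])
    return [sccs[i] for i in sorted(sinks)]

# Invented three-definition lexicon: an arc means the target
# definition is used in defining the source.
graph = {
    "happy_1": ["glad_1"],
    "glad_1": ["happy_1"],
    "joyful_1": ["happy_1"],
}
```

Here happy_1 and glad_1 define each other and nothing outside their component depends on being defined first, so they come out as the lone candidate primitive set, while joyful_1 does not.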
    <Paragraph position="2"> The model showed that the big problem to be overcome in trying to find the primitives is the apparent rampant circularity of defining relationships. To eliminate these apparent vicious circles, it is necessary to make a precise identification of derivational relationships, specifically, to find the specific definition that provides the sense in which its definiendum is used in defining another word. When this is done, the spurious cycles are broken and precise derivational relationships are identified. Although this can be done manually, the sheer bulk of a dictionary requires that it be done with well-defined procedures, i.e. with a syntactic and semantic parser. It is in the attempt to lay out the elements of such a parser that the requirements of semantic representations have emerged.</Paragraph>
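The effect of precise sense identification can be shown with a toy example of my own: at the word level two entries appear to define each other, but once each derivational link points at a specific definition, the cycle turns out to be spurious. The words, sense numbers, and glosses below are invented.

```python
# Word-level links look viciously circular: "light" is defined via
# "bright" and "bright" via "light".
word_links = {"light": ["bright"], "bright": ["light"]}

# Sense-tagged links break the spurious cycle: bright_1 ("shining")
# uses light_1 ("illumination"), while light_2 ("not heavy") uses
# heavy_1 and is unrelated to bright_1.
sense_links = {
    "bright_1": ["light_1"],
    "light_1": [],
    "light_2": ["heavy_1"],
    "heavy_1": [],
}

def has_cycle(graph):
    """Depth-first search with three colors to detect a back edge."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def dfs(v):
        color[v] = GRAY
        for w in graph.get(v, ()):
            if color.get(w, WHITE) == GRAY:
                return True            # back edge: cycle found
            if color.get(w, WHITE) == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(dfs(v) for v in graph if color[v] == WHITE)
```

The word-level graph contains a cycle; the sense-level graph does not, which is exactly the disambiguation the parser is meant to perform at scale.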
    <Paragraph position="3"> The parser must first be capable of handling the syntactic complexity of the definitions within a dictionary. This can be done by modifying and adding to existing ATN parsers, based on syntactic patterns present within a dictionary. Incidentally, a dictionary is an excellent large corpus upon which to base such a parser.</Paragraph>
    <Paragraph position="4"> The parser must go beyond syntactics, i.e., it must be capable of identifying which sense of a word is being used. Rieger \[8,9\] has argued for the necessity of sense selection or discrimination nets. To develop such a net for each word in the lexicon, I suggest the possibility of using a parser to analyze the definitions of a word and thereby to create a net which will be capable of discriminating among all definitions of a word.</Paragraph>
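A sense discrimination net of the kind Rieger argues for can be sketched as a small decision tree over contextual features. The net below for the noun "bank", its feature names, and its sense glosses are all invented for illustration; a real net would be generated by parsing the word's definitions.

```python
def make_net():
    """A toy discrimination net: internal nodes test a feature of the
    usage context; leaves name a sense. Nodes are tagged tuples."""
    return ("test", "near_water",
            ("sense", "bank_1: sloping land beside a river"),
            ("test", "financial_context",
             ("sense", "bank_2: institution for deposits"),
             ("sense", "bank_3: row or tier, e.g. a bank of switches")))

def discriminate(net, context):
    """Walk the net, branching on whether the context supplies each
    tested feature, until a sense leaf is reached."""
    if net[0] == "sense":
        return net[1]
    _, feature, yes_branch, no_branch = net
    return discriminate(yes_branch if context.get(feature) else no_branch,
                        context)
```

For example, a context marked as financial selects bank_2, while an empty context falls through to the residual sense.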
    <Paragraph position="5"> The following requirements must be satisfied by such a parser and its resulting nets.</Paragraph>
    <Paragraph position="6"> Diagnostic or differentiating components are needed for each definition. Each definition must have a different semantic representation, even though there may be a core meaning for all the definitions of a word. Since the ability to traverse a net successfully depends on the context in which a word is used, each definition, i.e. each semantic representation, must include slots to be filled by that context. The slots will provide a unique context for each sense of a word. Context is what permits disambiguation.</Paragraph>
    <Paragraph position="7"> Since the search through a net is inherently complex, a definition must drive the parser in the search for context which will fill its slots. These notions are consistent with Rieger's; however, they were identified independently based on my analysis of dictionary definitions. Their viability depends on the ability to describe procedures for developing a parser of this type to generate the desired semantic representations.</Paragraph>
    <Paragraph position="8"> As mentioned before, observation of syntactic patterns will lead to an enhancement of syntactic parsing; to a limited extent, the syntactic parser will permit some discrimination, e.g. of transitive and intransitive verbs or verbs which use particles. Further procedures for developing semantic representations are described using the intransitive senses of the verb &quot;change&quot; as examples. Procedures are described for (1) using definitions of prepositions to identify semantic cases which will operate as slots in the semantic representation, (2) showing how selectional restrictions on what can fill such slots are derived from the definitional matter, and (3) identifying semantic components that are present within a definition. It is pointed out that these representations will eventually have to be given in terms of primitives. Procedures are described for building discrimination nets from the results of parsing the definitions and for adding to these nets how the parser should be driven.</Paragraph>
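Steps (1) through (3) can be sketched for one intransitive sense of "change". The frame below is a hypothetical reconstruction, not taken from the paper: the case slots come from prepositions in the definitional matter ("from", "to", "in"), and the string-typed selectional restrictions and semantic components are invented for the sketch.

```python
# Illustrative semantic representation for one intransitive sense of
# "change" (gloss: "to become different"). All slot names, restrictions,
# and components are assumptions made for this sketch.
CHANGE_INTRANS_1 = {
    "head": "change",
    "gloss": "to become different",
    "components": ["BECOME", "DIFFERENT"],   # step (3): components in the definition
    "slots": {                               # step (1): cases from prepositions
        "theme":  {"case": "subject", "restriction": "entity"},
        "source": {"case": "from",    "restriction": "state"},
        "goal":   {"case": "to",      "restriction": "state"},
        "domain": {"case": "in",      "restriction": "attribute"},
    },
}

def fill_slots(frame, parsed_phrases):
    """The definition drives the parser: for each slot, look for a
    context phrase in the matching case whose filler satisfies the
    selectional restriction (step (2), modeled here as a type tag)."""
    filled = {}
    for name, spec in frame["slots"].items():
        for case, filler, sem_type in parsed_phrases:
            if case == spec["case"] and sem_type == spec["restriction"]:
                filled[name] = filler
    return filled

# Hypothetical parser output for "the weather changed from calm to stormy":
phrases = [("subject", "the weather", "entity"),
           ("from", "calm", "state"),
           ("to", "stormy", "state")]
```

Running fill_slots over these phrases binds theme, source, and goal while leaving the unused domain slot empty, which is the behavior the slot-driven search requires.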
    <Paragraph position="9"> The emphasis of this paper is in describing procedures that have been developed thus far.</Paragraph>
    <Paragraph position="10"> Finally, it is shown how these procedures are used to identify explicit derivational relationships present within a dictionary in order to move toward identification of primitives.</Paragraph>
    <Paragraph position="11"> Such relationships are very similar to the lexical functions used by Mel'cuk, except that in this case both the function and the argument are elements of the lexicon, rather than the argument alone.</Paragraph>
    <Paragraph position="12"> It has become clear that semantic representations of definitions in the form described must ultimately constitute the elements out of which semantic representations of multi-sentence texts are created, perhaps with two foci: (1) describing entities (centered around nouns) and (2) describing events (centered around verbs). If multi-sentence texts can then be studied empirically, the structure of ordinary discourse will be based on observations rather than theory.</Paragraph>
    <Paragraph position="13"> Although this paradigm may seem to be incredibly complex, I believe that it is nothing more than what the lexicons of present AI systems are becoming. I believe that more rapid progress can be made with an explicit effort to exploit and not to duplicate the efforts of lexicographers.</Paragraph>
  </Section>
</Paper>