<?xml version="1.0" standalone="yes"?>
<Paper uid="W91-0201">
  <Title>Knowledge representation and knowledge of words*</Title>
  <Section position="4" start_page="0" end_page="4" type="metho">
    <SectionTitle>
5. Polysemy and context
</SectionTitle>
    <Paragraph position="0"> It is encouraging to be able to point to an area of normal research at the interface of lexical semantics and knowledge representation, but at the same time it would be very misleading to imagine that all the problems of word meaning can be solved by nonmonotonic logic, or that the potential areas of partnership are all tidy and unproblematic.</Paragraph>
    <Paragraph position="1"> In a number of papers published over the last twenty years, John McCarthy has claimed that a logical foundation for common sense reasoning should include not only a theory of nonmonotonic reasoning, but a theory of context.6 It is easy to see how an account of context is central in approaching reasoning tasks of great or even moderate complexity. It is essential to avoid being swamped in irrelevant detail. But if details are ignored, it is also essential to ignore them intelligently, so that the reasoning will retain appropriateness. Engaged reasoning is located in a local context which makes it focused and feasible, but nevertheless retains its applicability to the larger context of which the current context is part.</Paragraph>
    <Paragraph position="2">  Reiter argued that common sense reasoning might not need to explicate causality; it may be as unimportant in the common sense world as it seems to be in modern physical theories. The ubiquitous presence of causal notions in processes of word formation is a strong argument against such a position. 6The need for a theory of context was mentioned in McCarthy's 1971 Turing Award address; see \[McCarthy 87\] for a revised version. A recent attack on the problem can be found in \[McCarthy 89\].  But there is a hierarchy here. Contextualization must also be controlled by reasoning processes, which themselves may well be located in contexts. Thus, contexts can have greater or lesser generality, and some representation of context must also be available to reasoners.</Paragraph>
    <Paragraph position="3"> Though--if McCarthy is right--we may not yet have a satisfactory theory of context that could be incorporated into a logicist framework, we do have many applications.</Paragraph>
    <Paragraph position="4"> Object oriented approaches to programming, in particular, achieve their power through catering to the human need for contextual organization of reasoning; they could equally well be called context oriented approaches. Many of the most difficult problems involving the meanings of words have to do with the variability of interpretation. In his experiments on the vagueness of terms, for instance, William Labov noticed that the distinction between 'cup' and 'bowl' was affected more by whether the interpreter was situated in a &quot;coffee&quot; or a &quot;mashed potatoes&quot; context than by factors such as the ratio of height to diameter of the artifact.7 To take another example, there is some reason to think that in a context where a bus is leaving for a banquet, 'go' can mean 'go on the bus to the banquet'. Of course, if someone says (4) I'm going.</Paragraph>
    <Paragraph position="5"> in such a context, it means 'I'm going on the bus to the banquet', but this effect could be attributed to the speaker meaning of the utterance, without assigning any special interpretation to 'go'. More telling is the fact that in this case it's possible to say (5) No, I'm not going; I'm taking my car.</Paragraph>
    <Paragraph position="6">  Some of the problems of polysemy that Sowa discusses in his contribution to this workshop and in other writings are best regarded, I think, as cases in which the procedures for interpreting words are adjusted to context. Unfortunately, this is an area in which we seem to have many alternative ways of accounting for the phenomena: vagueness, ambiguity, strategies of interpreting speaker meaning, and contextual effects. All these accounts are plausible, and each is best equipped to deal with some sorts of examples. But in many cases there is no clear way to pick the best account. Perhaps this problem should be solved not by treating the accounts as competitors and seeking more refined linguistic tests, but by providing bridges between one solution and another; chunking, for instance, provides in many cases a plausible path from conversational implicature to a lexicalized word sense.</Paragraph>
    <Paragraph position="7"> I have stressed the contextual approach to polysemy because it seems to me to offer more hope for progress than other ways of looking at the problem. It enables us to draw on a variety of computational approaches, such as object oriented programming, and it opens possibilities of collaboration with theoreticians who, influenced by McCarthy, are looking for formal ways of modeling contextuality. The ongoing work of theory development badly needs examples and intuitions; language in general and the lexicon in particular are probably the most promising source of these.</Paragraph>
  </Section>
  <Section position="5" start_page="4" end_page="6" type="metho">
    <SectionTitle>
6. Linguistic work
</SectionTitle>
    <Paragraph position="0"> 7See \[Labov 73\].</Paragraph>
    <Paragraph position="1"> Of course, most of the recent linguistic research on word meaning has been done by nonlogicists. See \[Levin 85\], for instance, for a useful survey of work in the Government-Binding framework.</Paragraph>
    <Paragraph position="2"> There is no substitute for the broad empirical work being done by linguists in this area. But as Levin's survey makes clear, it is very difficult to develop a theoretical apparatus that is well grounded in linguistic evidence in this area. Despite the efforts of many well trained linguists to devise good general tests for important notions like agency, the connection of these concepts to the evidence remains very problematic.</Paragraph>
    <Paragraph position="3"> Despite difficulties with the high level concepts, the linguistic work has uncovered much taxonomic information that is relatively general across languages, and that evidently classifies words not only into categories that pattern similarly, but that share important semantic features.</Paragraph>
    <Paragraph position="4"> This, too, seems to be an area in which cooperation between linguists and the AI community might be fruitful. The classification schemes that come from linguistics are not only well motivated, but should be very useful in organizing lexical information on inheritance principles. Moreover, it might well be useful for linguists who are grappling with methodological difficulties to learn to think of their problems along knowledge engineering lines rather than syntactic ones.</Paragraph>
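The suggestion that linguistic classification schemes could organize lexical information on inheritance principles can be made concrete. Here is a small sketch, with invented class names loosely in the spirit of verb-class taxonomies, showing how shared semantic features can be stated once at the class level and inherited by individual entries:

```python
# A sketch (hypothetical feature names) of organizing lexical entries on
# inheritance principles: a verb class fixes shared semantic features,
# and individual entries inherit them, overriding only what differs.

class Verb:
    features = {"category": "V"}

    def all_features(self):
        # Merge features along the class hierarchy, subclasses winning.
        merged = {}
        for cls in reversed(type(self).__mro__):
            merged.update(getattr(cls, "features", {}))
        return merged

class MotionVerb(Verb):
    features = {"semantic_field": "motion", "takes_path_argument": True}

class MannerOfMotionVerb(MotionVerb):
    features = {"specifies_manner": True}

class Run(MannerOfMotionVerb):
    features = {"lemma": "run"}

print(Run().all_features())
```

Under this organization, a generalization about all motion verbs is stated exactly once, and a lexical entry records only what is idiosyncratic to it; this is the sense in which inheritance could make a large lexicon manageable.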
    <Paragraph position="5"> 7. Linguistics and knowledge representation Representation is crucial in contemporary linguistics, and is found in all the areas where linguistic structure is important. But syntax seems to be the primary source of representational ideas and methods for justifying them. For over thirty years, syntacticians have proposed formalisms (which in general are variations on labeled trees, representing phrase structure), along with rules for systematically generating them. They have also developed methods for justifying these formalisms, based mainly on introspective evidence about grammaticality, and an extremely rich battery of techniques for bringing this evidence to bear on hypotheses.</Paragraph>
    <Paragraph position="6"> Though these representation systems are tested by introspective evidence (except in some cases where natural language processing systems are integrated with the formalism), and their connection to experiments and to cognitive psychology is in fact tenuous and problematic, many linguists make cognitive claims for their representations.</Paragraph>
    <Paragraph position="7"> The hope seems to be that the structures that are well supported by the introspective methods will eventually be validated by a larger psychological theory of processing that is well supported by experimental evidence.</Paragraph>
    <Paragraph position="8"> Whether or not such a theory is eventually forthcoming, the current methods used to support different syntactic theories often seem to leave no way of settling even quite major issues. And when these methods are extended to semantics, they definitely seem to leave theoretical alternatives underconstrained by the available methodology of linguistic argumentation. Intuitions about meaning are even more problematic than those about grammaticality. Even though grammaticality is a fairly refined notion, and subject to contextual factors that are difficult to determine, it seems to be easier to agree about grammaticality judgments than about, for instance, judgments about ambiguity.</Paragraph>
    <Paragraph position="9"> The criteria that have emerged in knowledge representation seem to me to be well worth considering in this respect. Here are some considerations.</Paragraph>
    <Paragraph position="11"> The criteria are stringent--so stringent, in fact, that, in view of conflict between desirable features such as expressivity and tractability, there really are no general-purpose knowledge representation schemes meeting them all.</Paragraph>
    <Paragraph position="12"> The criteria of knowledge representation can be added without much violence to the ones already imposed by linguistic theorists. In fact, the need for usability--assuming that the users are linguists--would require the use of representations that make linguistic sense. No special cognitive claims need to be made. The point is that, though it can be debated whether a generally accepted linguistic formalism is adequate as a representation of human cognition, there is no doubt--if it's generally accepted--that it is a useful way of displaying linguists' insights into linguistic structure.</Paragraph>
    <Paragraph position="13"> It often is necessary in linguistics to represent large amounts of information. As lexicography becomes computerized, and the need is felt to connect these computerized linguistic knowledge bases to areas of linguistic theory such as syntax, a novel criterion emerges--does the theory allow a workable way of organizing large amounts of lexical information? The need to associate knowledge management procedures with representations also provides new constraints, and--if the procedures can be implemented--may also help to automate the testing process. It is hard to see, for instance, whether a semantic theory can be tested as a mere theory of representation. Since the main purpose of semantic representation is to provide a level at which sound inference can take place, an explicit specification of the associated inference procedures is needed before we can begin to test the theory.</Paragraph>
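The claim that a semantic representation can only be tested together with its inference procedures can be illustrated with a toy example (the relation name and entries are invented). The representation below is an inert set of tuples; it acquires testable consequences only once a procedure computes entailments over it:

```python
# A toy illustration (invented predicates) of pairing a representation
# with an explicit inference procedure: the entailments come from the
# procedure operating over the stored facts, not from the facts alone.

facts = {("hyponym", "cup", "container"),
         ("hyponym", "mug", "cup")}

def entails(kb, rel, a, b):
    """Decide whether a stands in the transitive closure of rel to b.
    This procedure is what gives the representation empirical bite."""
    if (rel, a, b) in kb:
        return True
    return any(entails(kb, rel, mid, b)
               for (r, x, mid) in kb if r == rel and x == a)

print(entails(facts, "hyponym", "mug", "container"))  # → True, by transitivity
```

Only with the procedure in place can one ask whether the theory's predicted entailments match speakers' judgments; the bare tuples make no predictions at all.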
    <Paragraph position="14"> There are many similarities of detail that make it easy to build smooth bridges between linguistic formalisms and ones from knowledge representation.</Paragraph>
  </Section>
</Paper>