<?xml version="1.0" standalone="yes"?>
<Paper uid="P80-1004">
  <Title>Metaphor - A Key to Extensible Semantic Analysis</Title>
  <Section position="6" start_page="19" end_page="19" type="concl">
    <SectionTitle>
6. Freezing and Packaging Metaphors
</SectionTitle>
    <Paragraph position="0"> We have seen how the recognition of basic general metaphors greatly structures and facilitates the understanding process. However, there are many problems in understanding metaphors and analogies that we have not yet addressed. For instance, we have said little about explicit analogies found in text. I believe the computational process used in understanding analogies to be the same as that used in understanding metaphors. The difference is one of recognition and universality of acceptance in the underlying mappings. That is, an analogy makes the basic mapping explicit (sometimes the additional transfer maps are also detailed), whereas in a metaphor the mapping must be recognized (or reconstructed) by the understander.</Paragraph>
    <Paragraph position="1"> However, the general metaphor mappings are already known to the understander - he need only recognize them and instantiate them. Analogical mappings are usually new mappings, not necessarily known to the understander.</Paragraph>
    <Paragraph position="2"> Therefore, such mappings must be spelled out (in establishing the analogy) before they can be used. If a mapping is often used as an analogy it may become an accepted metaphor; the explanatory requirement is suppressed if the speaker believes his listener has become familiar with the mapping.</Paragraph>
    <Paragraph position="3"> This suggests one method of learning new metaphors. A mapping abstracted from the interpretation of several analogies can become packaged into a metaphor definition.</Paragraph>
    <Paragraph position="4"> The corresponding subparts of the analogy will form the transfer map, if they are consistent across the various analogy instances. The recognition network can be formed by noting the specific semantic features whose presence was required each time the analogy was stated and those that were necessarily referred to after the statement of the analogy. The most difficult part to learn is the intentional component. The understander would need to know or have inferred the writer's intentions at the time he expressed the analogy.</Paragraph>
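The consistency check described above can be sketched in a few lines. This is a hypothetical illustration only (the paper proposes no concrete implementation, and the function name and example data are invented): each analogy instance is treated as a source-to-target correspondence table, and only the correspondences that agree across every stated instance survive into the transfer map.

```python
def transfer_map(instances):
    """Intersect the subpart correspondences of several analogy
    instances, keeping only mappings consistent across all of them."""
    shared = dict(instances[0])
    for inst in instances[1:]:
        shared = {src: tgt for src, tgt in shared.items()
                  if inst.get(src) == tgt}
    return shared

# Two stated instances of an "argument is war" analogy (invented data):
a = {"attack": "criticize", "defend": "justify", "win": "persuade"}
b = {"attack": "criticize", "defend": "justify", "win": "dominate"}
print(transfer_map([a, b]))  # the inconsistent "win" pair is dropped
```

The inconsistent correspondence is discarded rather than guessed at, mirroring the paper's requirement that only subparts consistent across the analogy instances enter the transfer map.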
    <Paragraph position="5"> Two other issues we have not yet addressed are: Not all metaphors are instantiations of a small set of generalized metaphor mappings. Many metaphors appear to become frozen in the language, either packaged into phrases with fixed meaning (e.g., &quot;prices are going through the roof&quot;, an instance of the more-is-up metaphor), or more specialized entities than the generalized mappings, but not as specific as fixed phrases. I set the former issue aside, remarking that if a small set of general constructs can account for the bulk of a complex phenomenon, then they merit an in-depth investigation. Other metaphors may simply be less-often encountered mappings. The latter issue, however, requires further discussion.</Paragraph>
    <Paragraph position="6"> I propose that typical instantiations of generalized metaphors be recognized and remembered as part of the metaphor interpretation process. These instantiations will serve to grow a hierarchy of often-encountered metaphorical mappings from the top down. That is, typical specializations of generalized metaphors are stored in a specialization hierarchy (similar to a semantic network, with ISA inheritance pointers to the generalized concept of which they are specializations). These typical instances can in turn spawn more specific instantiations (if encountered with sufficient frequency in the language analysis), and the process can continue until the fixed-phrase level is reached. Clearly, growing all possible specializations of a generalized mapping is prohibitive in space, and the vast majority of the specializations thus generated would never be encountered in processing language. The sparseness of typical instantiations is the key to saving space. Only those instantiations of more general metaphors that are repeatedly encountered are assimilated into the hierarchy. Moreover, the number or frequency of required instances before assimilation takes place is a parameter that can be set according to the requirements of the system builder (or user). In this fashion, commonly-encountered metaphors will be recognized and understood much faster than more obscure instantiations of the general metaphors.</Paragraph>
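As a rough sketch of the assimilation scheme just described (the paper gives it only abstractly; all names and the threshold value here are invented for illustration), each hierarchy node keeps an ISA pointer to its generalization and counts pending instantiations, assimilating one as a child specialization only once it recurs often enough:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class MetaphorNode:
    """A node in the specialization hierarchy; `isa` points to the
    generalized metaphor this node specializes (None at the root)."""
    name: str
    isa: "MetaphorNode | None" = None
    children: dict = field(default_factory=dict)
    pending: Counter = field(default_factory=Counter)

# The system-builder parameter: instances required before assimilation.
ASSIMILATION_THRESHOLD = 3

def observe(node: MetaphorNode, instantiation: str) -> MetaphorNode:
    """Record one instantiation of `node`; assimilate it as a child
    specialization once it has been seen often enough."""
    if instantiation in node.children:
        # Already assimilated: recognized directly, hence faster.
        return node.children[instantiation]
    node.pending[instantiation] += 1
    if node.pending[instantiation] >= ASSIMILATION_THRESHOLD:
        child = MetaphorNode(instantiation, isa=node)
        node.children[instantiation] = child
        del node.pending[instantiation]
        return child
    return node

more_is_up = MetaphorNode("MORE-IS-UP")
for _ in range(3):
    observe(more_is_up, "prices-rise")
print("prices-rise" in more_is_up.children)  # True: now a stored specialization
```

Because only repeated instantiations become nodes, the hierarchy stays sparse, which is exactly the space-saving property the paragraph above relies on; rarely seen instantiations remain handled by their generalization.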
    <Paragraph position="7"> It is important to note that creating new instantiations of more general mappings is a much simpler process than generalizing existing concepts. Therefore, this type of specialization-based learning ought to be quite tractable with current technology.</Paragraph>
    <Paragraph position="8"> 7. Wrapping Up The ideas described in this paper have not yet been implemented in a functioning computer system. I hope to start incorporating them into the POLITICS parser \[2\], which is modelled after Riesbeck's rule-based ELI \[8\].</Paragraph>
    <Paragraph position="9"> The philosophy underlying this work is that Computational Linguistics and Artificial Intelligence can take full advantage of - not merely tolerate or circumvent - metaphors used extensively in natural language. In case the reader is still in doubt about the necessity to analyze metaphor as an integral part of any comprehensive natural language system, I point out that there are over 100 metaphors in the above text, not counting the examples. To illustrate further the ubiquity of metaphor and the difficulty we sometimes have in realizing its presence, I note that each section header and the title of this paper contain undeniable metaphors.</Paragraph>
  </Section>
</Paper>