<?xml version="1.0" standalone="yes"?>
<Paper uid="W06-2610">
  <Title>An Ontology-Based Approach to Disambiguation of Semantic Relations</Title>
  <Section position="2" start_page="0" end_page="0" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> What we describe in this paper, which we refer to as relation disambiguation, is in some sense similar to word sense disambiguation. In traditional word sense disambiguation the objective is to associate a distinguishable sense with a given word (Ide and Véronis, 1998). Using machine learning for traditional word sense disambiguation is not a novel idea, nor is including some kind of generalization of the concept that a word expresses in the learning task (Yarowsky, 1992). Other projects have used light-weight ontologies such as WordNet in this kind of learning task (Voorhees, 1993; Agirre and Martinez, 2001). What we believe is our contribution with this work is the fact that we attempt to learn complex concepts that consist of two simpler concepts and the relation that holds between them. Thus, we start out with the knowledge that some relation holds between two concepts, which we could express as REL(concept1,concept2), and what we aim at being able to do is to fill in a more specific relation type than the generic REL, obtaining e.g. POF(concept1,concept2) in the case where a preposition expresses a partitive relation. This makes it possible, e.g., to determine from the sentence &quot;France is in Europe&quot; that France is a part of Europe. As in word sense disambiguation, we here presuppose a finite and minimal set of relations, which is described in greater detail in section 2.</Paragraph>
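The paragraph above can be illustrated with a minimal sketch. Note that this is not the paper's actual system: the ontology lookup, the type names, and the rule table below are invented stand-ins for what the paper learns from an annotated corpus.

```python
# Illustrative sketch of relation disambiguation: refining the generic
# REL(concept1, concept2) into a specific relation such as POF ("part of"),
# based on the preposition and the ontological types of the NP heads.
# All type names and mappings here are hypothetical.

def ontological_type(concept):
    """Toy ontology lookup; a real system would consult an actual ontology."""
    toy_ontology = {
        "France": "location",
        "Europe": "location",
        "handle": "artifact_part",
        "cup": "artifact",
    }
    return toy_ontology.get(concept, "entity")

def disambiguate_relation(preposition, concept1, concept2):
    """Map (preposition, type pair) to a specific relation label."""
    t1, t2 = ontological_type(concept1), ontological_type(concept2)
    # In the paper this mapping is learned from annotated data;
    # here it is a hand-written stand-in.
    rules = {
        ("in", "location", "location"): "POF",  # partitive: "France is in Europe"
        ("of", "artifact_part", "artifact"): "POF",
    }
    return rules.get((preposition, t1, t2), "REL")  # fall back to generic REL

print(disambiguate_relation("in", "France", "Europe"))  # -> POF
```

The key point the sketch captures is that the disambiguation decision is driven by the ontological types of the two concepts, not by the surface words alone.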
    <Paragraph position="1"> The ability to identify these complex structures in text can facilitate more content-based information retrieval, as opposed to more traditional search engines, where the information retrieval relies more or less exclusively on keyword recognition. In the OntoQuery project1, pertinent text segments are retrieved based on the conceptual content of the search phrase as well as the text segments (Andreasen et al., 2002; Andreasen et al., 2004). Concepts are here identified through their corresponding surface form (noun phrases) and mapped into the ontology. As a result, we come from a flat structure in a text to a graph structure, which describes the concepts that are referred to in a given text segment, in relation to each other.</Paragraph>
    <Paragraph position="2"> However, at the moment the ontology is strictly a subsumption-based hierarchy and, further, only relatively simple noun phrases are recognized and mapped into the ontology. The work presented here expands this scope by including other semantic relations between noun phrases. Our first experiments in this direction have been an analysis of prepositions with surrounding noun phrases (NPs). Our aim is to show that there is an affinity between the ontological types of the NP-heads and the relation that the preposition denotes, which can be used to represent the text as a complex semantic structure, as opposed to simply running text.</Paragraph>
    <Paragraph position="3"> The approach to showing this has been to annotate a corpus and use standard machine learning methods on this corpus.</Paragraph>
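The annotate-and-learn approach above can be sketched as follows. This is a minimal illustration, not the paper's method: the training examples, relation labels other than POF, and the most-frequent-label baseline are all invented for the sketch (the paper uses standard machine learning methods on an annotated corpus).

```python
# Minimal sketch: learning a mapping from (preposition, type of NP-head 1,
# type of NP-head 2) to a relation label from annotated examples.
# The data and the non-POF labels below are hypothetical; a real system
# would use a standard learner rather than this most-frequent-label baseline.
from collections import Counter, defaultdict

training_data = [
    (("in", "location", "location"), "POF"),
    (("in", "location", "location"), "POF"),
    (("in", "event", "time"), "TMP"),       # "TMP" is an invented label
    (("with", "person", "artifact"), "WTH"),  # "WTH" is an invented label
]

def train(examples):
    """For each feature tuple, remember its most frequent relation label."""
    counts = defaultdict(Counter)
    for features, relation in examples:
        counts[features][relation] += 1
    return {f: c.most_common(1)[0][0] for f, c in counts.items()}

model = train(training_data)
print(model[("in", "location", "location")])  # -> POF
```

Even this trivial baseline makes the underlying claim concrete: if there is an affinity between the ontological types of the NP heads and the relation a preposition denotes, a learner over those features can exploit it.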
  </Section>
</Paper>