<?xml version="1.0" standalone="yes"?>
<Paper uid="E89-1010">
  <Title>Ambiguity Resolution in the DMTRANS PLUS</Title>
  <Section position="7" start_page="0" end_page="0" type="evalu">
    <SectionTitle>
5 Discussion
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
5.1 Global Minima
</SectionTitle>
      <Paragraph position="0"> The correct hypothesis in our model is the hypothesis with the least cost. This corresponds to the notion of global minima in most connectionist literature. On other hand, the hypothesis which has the least cost within a local scope but does not have the least cost when it is combined with global context is a local minimum. The goal of our model is to find a global minimum hypothesis in a given context. This idea is advantageous for discourse processing because a parse which may not be preferred in a local context may yeild a least cost hypothesis in the global context. Similarly, the least costing parse may turn out to be costly at the end of processing due to some contexual inference triggered by some higher context.</Paragraph>
      <Paragraph position="1"> One advantage of our system is that it is possible to define global and local minima using massively parallel marking passing, which is computationally efficient and is more powerful in high-level processing involving variable-binding, structure building, and constraint propagations 7 than neural network models. In addition, our model is suitable for massively parallel architectures which are now being researched by hardware designers as next generation machines s.</Paragraph>
    </Section>
    <Section position="2" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
5.2 Psycholinguistic Relevance of the Model
</SectionTitle>
      <Paragraph position="0"> The phenomenon of lexical ambiguity has been studied by many psycholinguistic researchers including \[13\], \[3\], and \[17\]. These studies have identified contextual priming as an important factor in ambiguity resolution.</Paragraph>
      <Paragraph position="1"> One psycholinguistic study that is particularly relevent to DMTRANS PLUS is Crain and Steedman \[4\], which argues for the principle of referential success. Their experiments demonstrate that people prefer the interpretation which is most plausible and accesses previously defined discourse entities. This psycholinguistic claim and experimental result was incorporated in our model by adding costs for instance creation and constraint satisfaction.</Paragraph>
      <Paragraph position="2"> Another study relevent to our model is be the lexical preference theory by Ford, Bresnan and Kaplan \[5\]. Lexical preference theory assumes a preference order among lexical entries of verbs which differ in subcategorization for prepositional phrases. This type of preference was incorporated as the bias term in our cost equation.</Paragraph>
      <Paragraph position="3"> 7Refer to \[22\] for details in this direction.</Paragraph>
      <Paragraph position="4"> SSee \[23\] and \[9\] for discussion.</Paragraph>
      <Paragraph position="5"> - 77 Although we have presented a basic mechanism to incorporate these psyeholinguistic theories, well controlled psycholinguistic experiments will be necessary to set values of each constant and to validate our model psycholinguistically.</Paragraph>
    </Section>
    <Section position="3" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
5.3 Reverse Cost
</SectionTitle>
      <Paragraph position="0"> In our example in the previous section, if the first sentence was Mary picked an S&amp;W where the hearer knows that an S&amp;W is a hand-gun, then an instance of 'MARY POSSES HAND-GUNI' is asserted as true in the first sentence and no cost is incurred in the interpretation of the second sentence using CSC2. This means that the cost for both PP-attachements in Mary shot the man with the handgun are the same (no cost in either cases) and the sentence remains ambiguous.</Paragraph>
      <Paragraph position="1"> This seems contrary to the fact that in Mary picked a S&amp; W. She shot the man with the hand-gun, that natural interpretation (given that the hearer knows S&amp;W is a hand-gun) seems to be that it was Mary that had the hand-gun not the man. Since our costs are only negatively charged, the fact that 'MARY1 POSSES S&amp;W' is recorded in previous sentence does not help the disambiguation of the second sentence.</Paragraph>
      <Paragraph position="2"> In order to resolve ambiguities such as this one which remain after our cost-assignment procedure has applies, we are currently working on a reverse cost charge scheme. This scheme will retroactively increase or decrease the cost of parses based on other evidence from the discourse context. For example, the discourse context might contain information that would make it more plausible or less plausible for Mary to use a handgun. We also plan to implement time-sensitive diminishing levels of charges to prefer facts recognized in later utterances.</Paragraph>
    </Section>
    <Section position="4" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
5.4 Incorporation of Connectionist Model
</SectionTitle>
      <Paragraph position="0"> As already mentioned, our model can incorporate connectionist models of ambiguity resolution. In a connectionist network activation of one node triggers interactive excitation and inhibition among nodes. Nodes which get more activated will be primed more than others. When a parse uses these more active nodes, no cost will be added to the hypothesis. On the other hand, hypotheses using less activated nodes should be assigned higher costs. There is nothing to prevent our model from integrating this idea, especially for lexical ambiguity resolution. The only reason that we do not implement a connectionist approach at present is that the computational cost will be emonomous on current computers. Readers should also be aware that DMA is a guided marker passing algorithm in which markers are passed only along certain links whereas connectionist models allow spreading of activation and inhibition virtually to any connected nodes. We hope to integrate DMA and connectionist models on a real massively parallel computer and wish to demonstrate real-time translation. One other possibility is to integrate with a connectionist network for speech recognition 9. We expect, by integrating with connectionist networks, to develop a uniform model of cost-based processing.</Paragraph>
    </Section>
  </Section>
class="xml-element"></Paper>
Download Original XML