<?xml version="1.0" standalone="yes"?>
<Paper uid="P91-1041">
  <Title>Quasi-Destructive Graph Unification</Title>
  <Section position="6" start_page="321" end_page="321" type="concl">
    <SectionTitle>
5. Conclusion
</SectionTitle>
    <Paragraph position="0"> The algorithm introduced in this paper runs significantly faster than Wroblewski's algorithm using Earley's parser and an HPSG based grammar developed at ATR. The gain comes from the fact that our algorithm does not create any over copies or early copies.</Paragraph>
    <Paragraph position="1"> In Wroblewski's algorithm, although over copies are essentially avoided, early copies (by our definition) are a significant problem because about 60 percent of unifications result in failure in a successful parse in our sample parses. The additional set-difference operation required for incremental copying during unify2 may also be contributing to the slower speed of Wroblewski's algorithm. Given that our sample grammar is relatively small, we would expect that the difference in the performance between the incremental copying schemes and ours will expand as the grammar size increases and both the number of failures ~ and the size of the wasted subgraphs of failed unifications become larger. Since our algorithm is essentially parallel, patallelization is one logical choice to pursue further speedup. Parallel processes can be continuously created as unifyl reeurses deeper and deeper without creating any copies by simply looking for a possible failure of the unification (and preparing for successive copying in ease unification succeeds). So far, we have completed a preliminary implementation on a shared memory parallel hardware with about 75 percent of effective parallelization rate. With the simplicity of our algorithm and the ease of implementing it (compared to both incremental copying schemes and lazy schemes), combined with the demonstrated speed of the algorithm, the algorithm could be a viable alternative to existing unification algorithms used in current ~That is, unless some new scheme for reducing excessive copying is introduced such as scucture-sharing of an unchanged shared-forest (\[Kogure, 1990\]). Even then, our criticism of the cost of delaying evaluation would still be valid. Also, although different in methodology from the way suggested by Kogure for Wroblewski's algorithm, it is possible to at~in structure-sharing of an unchanged forest in our scheme as well. We have already developed a preliminary version of such a scheme which is not discussed in this paper. Z~For example, in our large-scale speech-to-speech translation system under development, the USrate is estimated to be under 20%, i.e., over 80% of unifications are estimated to be failures.</Paragraph>
    <Paragraph position="2"> natural language systems.</Paragraph>
  </Section>
class="xml-element"></Paper>