<?xml version="1.0" standalone="yes"?>
<Paper uid="P91-1041">
<Title>Quasi-Destructive Graph Unification</Title>
<Section position="4" start_page="318" end_page="319" type="metho">
<SectionTitle>QUASI-DESTRUCTIVE COPYING</SectionTitle>
<Paragraph position="0">[Pseudo-code for FUNCTION copy-dg-with-comp-arcs(dg-underef) appeared here as a figure; only fragments survive in the extraction (e.g., "return a new arc with label and value"), together with its footnotes: (17) i.e., the existing copy of the node; (18) creates an empty node structure.]</Paragraph>
<Paragraph position="2">Figure 4 shows a simple example of quasi-destructive graph unification with convergent arcs. The round nodes indicate atomic nodes and the rectangular nodes indicate bottom (variable) nodes. First, the top-level unify1 finds that each of the input graphs has arc-a and arc-b (shared). Then unify1 is recursively called. At step two, the recursion into arc-a locally succeeds, and a temporary forwarding link with time-stamp (n) is made from node [ ]2 to node s. At the third step (recursion into arc-b), by the previous forwarding, node [ ]2 already has the value s (by dereferencing).</Paragraph>
<Paragraph position="3">Then this unification returns a success, and a temporary forwarding link with time-stamp (n) is created from node [ ]1 to node s. At the fourth step, since all recursive unifications (unify1s) into shared arcs succeeded, the top-level unify1 creates a temporary forwarding link with time-stamp (n) from dag2's root node to dag1's root node, sets arc-c (new) into the comp-arc-list of dag1, and returns success ('*T*'). At the fifth step, a copy of dag1 is created respecting the content of the comp-arc-list and dereferencing the valid forwarding links. This copy is returned as the result of unification. At the last step (step six), the global timing counter is incremented (n => n+1). After this operation, temporary forwarding links and comp-arc-lists with time-stamp (< n+1) will be ignored. Therefore, the original dag1 and dag2 are recovered in constant time, without a costly reversing operation. (Also note that recursions into shared arcs can be done in any order, producing the same result.)</Paragraph>
<Paragraph position="7">As we just saw, the algorithm itself is simple. The basic control structure of the unification is similar to Pereira's and Wroblewski's unify1. The essential difference between our unify1 and the previous ones is that ours is non-destructive. This is because the complement arcs complementarcs(dg2, dg1) are set into the comp-arc-list of dg1 and not into the arc-list of dg1. Thus, as soon as we increment the global counter, the changes made to dg1 (i.e., the addition of complement arcs to the comp-arc-list) vanish. As long as the comp-arc-mark value matches that of the global counter, the content of the comp-arc-list can be considered part of the arc-list, and therefore dg1 is the result of unification. Hence the name quasi-destructive graph unification. In order to create a copy for subsequent use, we only need to copy dg1 before we increment the global counter, while respecting the content of dg1's comp-arc-list.</Paragraph>
<Paragraph position="8">Thus, instead of calling other unification functions (such as Wroblewski's unify2) to incrementally create a copy node during a unification, we only need to create a copy after unification. If unification fails, no copies are made at all (as in [Karttunen, 1986]'s scheme).</Paragraph>
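<Paragraph>The following is a minimal, runnable sketch (written in Python for concreteness) of the scheme just described; it is not the paper's actual code. The node layout, the names Node, deref, unify1, copy_with_comp_arcs, and unify, and the single global counter are illustrative assumptions; structure sharing, cycles, and the consultation of a valid comp-arc-list during shared-arc lookup are omitted for brevity.</Paragraph>

    CLOCK = [1]          # the global timing counter

    class Node:
        def __init__(self, value=None, arcs=None):
            self.value = value              # atomic symbol, or None for bottom/complex nodes
            self.arcs = dict(arcs or {})    # permanent arcs: label -> Node
            self.forward = None             # temporary forwarding link
            self.forward_mark = 0           # its time-stamp
            self.comp_arcs = {}             # complement arcs recorded during unification
            self.comp_arc_mark = 0          # their time-stamp

    def deref(node):
        # Follow only forwarding links whose time-stamp matches the current counter.
        while node.forward is not None and node.forward_mark == CLOCK[0]:
            node = node.forward
        return node

    def forward(src, dst):
        src.forward, src.forward_mark = dst, CLOCK[0]

    def unify1(dg1, dg2):
        dg1, dg2 = deref(dg1), deref(dg2)
        if dg1 is dg2:
            return True
        if dg1.value is None and not dg1.arcs:   # dg1 is a bottom (variable) node
            forward(dg1, dg2)
            return True
        if dg2.value is None and not dg2.arcs:   # dg2 is a bottom (variable) node
            forward(dg2, dg1)
            return True
        if dg1.value != dg2.value:               # atomic clash (or atomic vs. complex)
            return False
        # Recurse into shared arcs; arcs found only in dg2 become complement arcs of dg1.
        for label, sub2 in dg2.arcs.items():
            if label in dg1.arcs:
                if not unify1(dg1.arcs[label], sub2):
                    return False
            else:
                if dg1.comp_arc_mark != CLOCK[0]:
                    dg1.comp_arcs, dg1.comp_arc_mark = {}, CLOCK[0]
                dg1.comp_arcs[label] = sub2
        forward(dg2, dg1)                        # dg1 now (quasi-destructively) holds the result
        return True

    def copy_with_comp_arcs(node):
        # Copy the result, treating a valid comp-arc-list as part of the arc list.
        node = deref(node)
        new = Node(value=node.value)
        new.arcs = {l: copy_with_comp_arcs(v) for l, v in node.arcs.items()}
        if node.comp_arc_mark == CLOCK[0]:
            new.arcs.update({l: copy_with_comp_arcs(v) for l, v in node.comp_arcs.items()})
        return new

    def unify(dg1, dg2):
        result = copy_with_comp_arcs(dg1) if unify1(dg1, dg2) else None
        CLOCK[0] += 1    # one increment invalidates every forwarding link and comp-arc-list
        return result    # dg1 and dg2 are back to their original state either way

    # Roughly mirroring Figure 4: dag1's shared arcs converge on one bottom node,
    # dag2 contributes the new arc-c.
    x, s, t = Node(), Node(value="s"), Node(value="t")
    dag1 = Node(arcs={"a": x, "b": x})
    dag2 = Node(arcs={"a": s, "b": Node(), "c": t})
    out = unify(dag1, dag2)
    print(sorted(out.arcs), out.arcs["a"].value)    # ['a', 'b', 'c'] s
    print(x.value, dag1.arcs.keys() == {"a", "b"})  # None True  (inputs untouched)

<Paragraph>In this sketch, a single increment of the counter is what restores dag1 and dag2, which is also why a failed unification costs no copying at all.</Paragraph>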
<Paragraph position="9">Because unification that recurses into shared arcs carries no burden of incremental copying (i.e., it simply checks whether nodes are compatible), as the depth of unification increases (i.e., as the graphs get larger), the speed-up from our method should become conspicuous whenever a unification eventually fails. If all unifications during a parse were to succeed, our algorithm should be as fast as, or slightly slower than, Wroblewski's algorithm. Since a parse that does not fail on a single unification is unrealistic, the gain from our scheme should depend on the number of unification failures that occur during a parse. As the number of failures per parse increases and the graphs that fail get larger, the speed-up from our algorithm should become more apparent. These characteristics of our algorithm therefore seem desirable. In the next section, we will see the actual results of experiments that compare our unification algorithm with Wroblewski's algorithm (slightly modified to handle the variables and cycles required by our HPSG-based grammar).</Paragraph>
</Section>
</Paper>