<?xml version="1.0" standalone="yes"?> <Paper uid="P00-1045"> <Title>Memory-Efficient and Thread-Safe Quasi-Destructive Graph Unification</Title> <Section position="7" start_page="7" end_page="7" type="concl"> <SectionTitle> 6 Conclusions </SectionTitle> <Paragraph position="0"> We have presented a technique to reduce memory usage by separating scratch fields from nodes. We showed that compressing node structures can further reduce the memory footprint. Although these techniques require extra computation, the algorithms still run faster. The main reason for this is the difference between cache and memory speed.</Paragraph> <Paragraph position="1"> As current developments indicate that this difference will only grow, the effect is not just an artifact of current architectures.</Paragraph> <Paragraph position="2"> We showed how to incorporate data-structure sharing. For our grammar, the additional constraint for sharing did not pose a problem. Should it pose a problem, several techniques can mitigate its effect.</Paragraph> <Paragraph position="3"> For example, one could reserve additional indexes at critical positions in a subgraph (e.g. based on type information). These can then be assigned to nodes in later unifications without introducing conflicts elsewhere. Another technique is to include a small table with repair information in each share arc, allowing a limited number of conflicts to be resolved.</Paragraph> <Paragraph position="4"> For certain grammars, data-structure sharing can also significantly reduce execution times, because the equality check (see line 3 of Unify1) can intercept shared nodes with the same address more frequently. We did not exploit this benefit, but rather included an offset check to allow grammar nodes to be shared as well. 
One could still choose, however, not to share grammar nodes.</Paragraph> <Paragraph position="5"> Finally, we introduced deferred copying.</Paragraph> <Paragraph position="6"> Although this technique did not improve performance, we suspect that it may be beneficial for systems that use more expensive memory allocation and deallocation models (such as garbage collection).</Paragraph> <Paragraph position="7"> Since memory consumption is a major concern for many current unification-based grammar parsers, our approach provides a fast and memory-efficient alternative to Tomabechi's algorithm. In addition, we showed that our algorithm is well suited for concurrent unification, allowing execution times to be reduced as well.</Paragraph> </Section></Paper>
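The central idea summarized above, moving the scratch fields (forward pointers and temporary arcs) out of the graph nodes into a separate table indexed by node position, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the class names (`Node`, `ScratchTable`), the simplified type check, and the dictionary-based arcs are all assumptions made for the sketch. The point it demonstrates is that unification never writes into the nodes themselves, so grammar nodes stay read-only and each thread can unify concurrently with its own scratch table.

```python
# Hypothetical sketch of quasi-destructive unification with scratch fields
# separated from nodes. Names and data layout are illustrative assumptions;
# real systems use a type lattice rather than the equality check on types here.

class Node:
    """Permanent graph node: holds only persistent data, no scratch fields."""
    def __init__(self, index, type_, arcs=None):
        self.index = index        # position used to address scratch entries
        self.type = type_
        self.arcs = arcs or {}    # feature label -> Node

class ScratchTable:
    """Per-unification (and per-thread) scratch storage, one slot per node index."""
    def __init__(self, size):
        self.forward = [None] * size    # temporary forward pointers
        self.comp_arcs = [None] * size  # arcs discovered during unification

    def deref(self, node):
        # Follow temporary forward pointers to the representative node.
        while self.forward[node.index] is not None:
            node = self.forward[node.index]
        return node

def unify(a, b, scratch):
    a, b = scratch.deref(a), scratch.deref(b)
    if a is b:                    # equality check: shared nodes intercepted here
        return True
    if a.type != b.type:          # simplified; real grammars meet types in a lattice
        return False
    scratch.forward[a.index] = b  # bind in the scratch table, not in the node
    for label, target in a.arcs.items():
        b_target = b.arcs.get(label)
        if b_target is None and scratch.comp_arcs[b.index]:
            b_target = scratch.comp_arcs[b.index].get(label)
        if b_target is not None:
            if not unify(target, b_target, scratch):
                return False
        else:
            # Record the extra arc in scratch; b itself is never mutated.
            extra = scratch.comp_arcs[b.index] or {}
            extra[label] = target
            scratch.comp_arcs[b.index] = extra
    return True
```

Because a failed or abandoned unification is undone simply by discarding the `ScratchTable`, the permanent nodes need no cleanup pass, and threads that share the same grammar nodes cannot interfere with one another.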