<?xml version="1.0" standalone="yes"?> <Paper uid="C92-2121"> <Title>Semantic Network Array Processor as a Massively Parallel Computing Platform for High Performance and Large-Scale Natural Language Processing*</Title> <Section position="8" start_page="0" end_page="0" type="concl"> <SectionTitle> 7. Conclusion </SectionTitle> <Paragraph position="0"> In this paper, we have demonstrated that the semantic network array processor (SNAP) speeds up various natural language processing tasks. We have demonstrated this fact using three examples: memory-based parsing, VLKB processing, and classification-based parsing.</Paragraph> <Paragraph position="1"> In the memory-based parsing approach, we have attained parsing speeds on the order of milliseconds without making substantial compromises in linguistic analysis. On the contrary, our model is superior to other traditional natural language processing models in several aspects, particularly contextual processing. Next, we have applied the SNAP architecture to a new classification-based parsing model. Here, SNAP is used to search the MSS to test the unifiability of two feature graphs. We have attained, again, sub-millisecond performance per unifiability test.</Paragraph> <Paragraph position="2"> In addition, this approach exhibited desirable scalability characteristics. The search time asymptotically approaches 450 cycles as the size of the classification network increases. Also, search time decreases as the average fan-out gets larger. These are natural advantages of using parallel machines. SNAP is not only useful for new and radical approaches, but also beneficial in speeding up traditional NLP systems such as KBMT. We have evaluated the performance of searching the VLKB, which is the major knowledge source for the KBMT system. We have attained sub-millisecond performance per search.
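At the heart of each of these results is marker-passing search over a semantic network. As a rough serial illustration (the adjacency-dict graph representation, function names, and hop limit below are our own and not part of the SNAP instruction set), an intersection search can be sketched as:

```python
from collections import deque

def propagate(graph, start, marker, markers, max_hops):
    """Breadth-first marker propagation from `start` over `graph`
    (a dict mapping a node to its neighbor list), tagging each
    reached node with `marker`. Hypothetical serial illustration."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        node, hops = frontier.popleft()
        markers.setdefault(node, set()).add(marker)
        if hops == max_hops:
            continue
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, hops + 1))
    return markers

def intersection_search(graph, a, b, max_hops=4):
    """Return the nodes reached by markers spreading from both `a`
    and `b`; on SNAP this collision check happens at every node
    concurrently rather than in a serial loop."""
    markers = {}
    propagate(graph, a, 'A', markers, max_hops)
    propagate(graph, b, 'B', markers, max_hops)
    return {n for n, ms in markers.items() if ms == {'A', 'B'}}

# Tiny hypothetical is-a network:
g = {'dog': ['animal'], 'cat': ['animal'], 'animal': ['living-thing']}
print(intersection_search(g, 'dog', 'cat'))  # {'animal', 'living-thing'}
```

On SNAP itself the per-node marker operations execute in parallel across processing elements, which is what yields the cycle counts and sub-millisecond figures reported above; this serial sketch only mirrors the logic.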
Traditionally, on serial machines, this process has taken a few seconds, posing a major threat to performance in scaled-up systems.</Paragraph> <Paragraph position="3"> Also, there are many other NLP models (Typed Unification Grammar \[Emele and Zajac, 1990\], SNePS \[Neal and Shapiro, 1987\], and others) which may exhibit high performance and desirable scaling properties on SNAP.</Paragraph> <Paragraph position="4"> Currently, we are designing SNAP-2, reflecting various findings made in the research with SNAP-1.</Paragraph> <Paragraph position="5"> SNAP-2 will be built upon state-of-the-art VLSI technologies using a RISC architecture. At least 32K virtual nodes will be supported by each processing element, providing the system with a minimum of 16 million nodes. SNAP-2 will feature multi-user support, intelligent I/O, etc. One of the significant features of SNAP-2 is the introduction of programmable marker propagation rules. This feature allows users to define their own, more sophisticated marker propagation rules.</Paragraph> <Paragraph position="6"> In summary, we have shown that the SNAP architecture can be a useful development platform for high performance and large-scale natural language processing. This has been empirically demonstrated using SNAP-1. SNAP-2 is expected to explore further opportunities for massively parallel natural language processing.</Paragraph> </Section> </Paper>