<?xml version="1.0" standalone="yes"?>
<Paper uid="P98-2159">
  <Title>An Efficient Parallel Substrate for Typed Feature Structures on Shared Memory Parallel Machines</Title>
  <Section position="2" start_page="0" end_page="968" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> The need for real-time NLP systems has been discussed for the last decade. The difficulty in implementing such systems is that one cannot use sophisticated but computationally expensive methodologies. However, given an efficient tool and environment for developing parallel NLP systems, programmers could be less concerned with system efficiency. Such an environment has become feasible thanks to recent developments in shared-memory parallel machines.</Paragraph>
    <Paragraph position="1"> We propose an efficient programming environment for developing parallel NLP systems on shared-memory parallel machines, called the Parallel Substrate for Typed Feature Structures (PSTFS). The environment is based on an agent-based/object-oriented architecture. In other words, a system based on PSTFS has many computational agents running on different processors in parallel; these agents communicate with each other through messages that include TFSs.</Paragraph>
    <Paragraph position="3"> Tasks of the whole system, such as parsing or semantic processing, are divided into several pieces that can be computed simultaneously by several agents.</Paragraph>
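    <Paragraph>
The division of work described above can be illustrated with a minimal sketch, assuming a hypothetical control agent and worker agents (this is not the PSTFS API; the names `control_agent` and `worker_agent` are illustrative, and Python threads stand in for agents on separate processors):

```python
# Hypothetical sketch of the agent model described above (not the PSTFS API):
# a control agent splits a task into pieces and hands them to worker agents,
# which run in parallel and reply with messages carrying their results.
from concurrent.futures import ThreadPoolExecutor

def worker_agent(piece):
    """A worker agent: processes its piece and returns a result message."""
    return {"piece": piece, "result": piece.upper()}  # stand-in for real NLP work

def control_agent(task, n_agents=4):
    """Split the task, farm the pieces out in parallel, merge the replies."""
    pieces = task.split()
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        replies = list(pool.map(worker_agent, pieces))
    return [r["result"] for r in replies]

print(control_agent("parse this sentence"))  # ['PARSE', 'THIS', 'SENTENCE']
```

In the real system the messages would carry TFSs rather than strings, and the agents would run on distinct processors of a shared-memory machine.
    </Paragraph>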
    <Paragraph position="4"> Several parallel NLP systems have been developed previously, but most of them have been neither efficient nor practical enough (Adriaens and Hahn, 1994). Our PSTFS, in contrast, provides the following features.</Paragraph>
    <Paragraph position="5"> * An efficient communication scheme for messages including Typed Feature Structures (TFSs) (Carpenter, 1992).</Paragraph>
    <Paragraph position="6"> * Efficient treatment of TFSs by an abstract machine (Makino et al., 1998).</Paragraph>
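    <Paragraph>
To make the second feature concrete, the following is a minimal illustration of feature-structure unification in the sense of Carpenter (1992), simplified to untyped nested dicts; the real PSTFS handles a typed hierarchy and executes unification on an abstract machine (Makino et al., 1998), neither of which is modeled here:

```python
# Minimal, untyped sketch of feature-structure unification: two structures
# unify by merging their features recursively; conflicting atomic values clash.
def unify(fs1, fs2):
    """Return the unification of two feature structures, or None on clash."""
    if not isinstance(fs1, dict) or not isinstance(fs2, dict):
        return fs1 if fs1 == fs2 else None  # atomic values must match exactly
    result = dict(fs1)
    for feat, val in fs2.items():
        if feat in result:
            sub = unify(result[feat], val)
            if sub is None:
                return None  # a clash anywhere fails the whole unification
            result[feat] = sub
        else:
            result[feat] = val
    return result

a = {"agr": {"num": "sg"}, "cat": "np"}
b = {"agr": {"per": "3"}}
print(unify(a, b))             # {'agr': {'num': 'sg', 'per': '3'}, 'cat': 'np'}
print(unify(a, {"cat": "vp"})) # None (clash on 'cat')
```
    </Paragraph>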
    <Paragraph position="7"> Another possible way to develop parallel NLP systems with TFSs is to use a full concurrent logic programming language (Clark and Gregory, 1986; Ueda, 1985). However, we have observed that parallelism must be controlled in a flexible way to achieve high performance.</Paragraph>
    <Paragraph position="8"> (The fixed concurrency of a logic programming language does not provide sufficient flexibility.) Our agent-based architecture is well suited to achieving such flexibility in parallelism.</Paragraph>
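    <Paragraph>
The contrast with fixed concurrency can be sketched as follows, under the assumption (ours, not the paper's) that the control agent picks the degree of parallelism per task instead of inheriting a fixed one from the language runtime; `choose_workers` and its thresholds are purely illustrative:

```python
# Hypothetical illustration of flexible parallelism control: the controller
# decides how many worker agents to spawn for each task, rather than using a
# fixed degree of concurrency baked into the runtime.
from concurrent.futures import ThreadPoolExecutor

def choose_workers(n_pieces, max_workers=8):
    # Small tasks stay sequential (spawning agents has overhead);
    # large tasks fan out, capped by the machine's capacity.
    return 1 if n_pieces < 4 else min(n_pieces, max_workers)

def run(pieces):
    n = choose_workers(len(pieces))
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(lambda p: p * 2, pieces))  # stand-in workload
```

With fixed concurrency, the `1 if n_pieces < 4` branch is impossible: every task pays the same parallel overhead regardless of its size.
    </Paragraph>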
    <Paragraph position="9"> The next section discusses PSTFS from a programmer's point of view. Section 3 describes the PSTFS architecture in detail. Section 4 reports the performance of PSTFS on our HPSG parsers.</Paragraph>
    <Paragraph position="10"> [Figure: pseudocode defining the control agents name-concatenator and name-concatenator-sub, in which the sub-agent answers a solve message by asking a CSA to solve a concatenation constraint; the figure text was garbled in extraction and is not further recoverable.]</Paragraph>
  </Section>
</Paper>