<?xml version="1.0" standalone="yes"?>
<Paper uid="A88-1010">
<Title>EVALUATION OF A PARALLEL CHART PARSER</Title>
<Section position="8" start_page="72" end_page="74" type="evalu">
<SectionTitle> 7. Results </SectionTitle>
<Paragraph position="0"> We consider first the results for the test grammar, #1, analyzing the test sentence. This grammar is so simple that we can readily visualize the operation of the parser and predict the general shape of the speed-up curve. At each token of the sentence, there are 10 productions which can expand X, so 10 seek tasks are added to the agenda. If 10 processors are available, all 10 tasks can be executed in parallel. Additional processors produce no further speed-up; having fewer processors requires some processors to perform several tasks, reducing the speed-up. This general behavior is borne out by the curve shown in Figure 1. Note that because the successful seek (for the production X -> j) leads to the creation of an inactive edge for X and extension of the active edge for S, and these operations must be performed serially, the maximal parallelism is much less than 10.</Paragraph>
<Paragraph position="1"> The next two figures compare the effectiveness of the two algorithms: the one with coarse-grained parallelism (only seeks as separate tasks) and the one with finer-grained parallelism (each seek and extend as a separate task). The finer-grained algorithm is able to exploit more parallelism in situations where an edge can be extended in several different ways. On the other hand, it has more scheduling overhead, since each extend operation has to be entered on and removed from the agenda. We therefore expect the finer-grained algorithm to do better on more complex sentences, for which many different extensions of an active edge are possible. We also expect it to do better on grammars with restrictions, since evaluating the restrictions substantially increases the time required to extend an edge and so reduces in proportion the fraction of time devoted to scheduling overhead. These expectations are confirmed by the results shown in Figures 2 and 3.</Paragraph>
<Paragraph position="2"> (Footnote 4) For larger numbers of processors (5-8), the speed-up with the Ultracomputer was consistently below that with the simulator. This was due, we believe, to memory contention in the Ultracomputer. This contention is a property of the current bus-based prototype and would be greatly reduced in a machine using the target, network-based architecture.</Paragraph>
<Paragraph position="3"> Figure 2, which shows the results using a short sentence and grammar #2 (without restrictions), shows that neither algorithm obtains substantial speed-up and that the fine-grained algorithm is in fact slightly worse. Figure 3, which shows the results using a long sentence and grammar #3 (with restrictions), shows that the fine-grained algorithm performs much better.</Paragraph>
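To make the reasoning about the speed-up curve concrete, the following is a minimal Python sketch (not the authors' implementation) of the prediction for test grammar #1: the 10 seek tasks at each token are shared among p processors, while the edge building that follows a successful seek and the per-task agenda bookkeeping remain serial costs. The task costs, the scheduling-overhead constant, and the sentence length are illustrative assumptions.

    import math

    SEEKS_PER_TOKEN = 10   # productions that can expand X (from the paper)
    T_SEEK = 1.0           # assumed unit cost of one seek task
    T_SERIAL = 2.0         # assumed cost of the serial edge-building phase
    T_SCHED = 0.1          # assumed cost of entering/removing a task on the agenda

    def predicted_speedup(p, tokens=10):
        """Estimated speed-up over the serial parser when p processors are used."""
        # Serial parser: no agenda or locking overhead at all.
        serial = tokens * (SEEKS_PER_TOKEN * T_SEEK + T_SERIAL)
        # Parallel parser: the 10 seeks run in ceil(10/p) waves, each task
        # also paying its scheduling cost; the edge-building phase that
        # follows a successful seek is performed serially.
        waves = math.ceil(SEEKS_PER_TOKEN / p)
        parallel = tokens * (waves * (T_SEEK + T_SCHED) + T_SERIAL)
        return serial / parallel

    for p in (1, 2, 4, 8, 10, 16):
        print(f"{p:2d} processors: predicted speed-up {predicted_speedup(p):.2f}")

Under these assumptions the curve rises until about 10 processors and then flattens, the value for 1 processor falls slightly below 1 because of the overhead, and the maximum stays well below 10 because of the serial edge-building phase, which matches the general shape reported for Figure 1.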
<Paragraph position="4"> The remaining three figures show speed-up results for the fine-grained algorithm for grammars 2, 3, and 4. For each figure we show the speed-up for three sentences: a very short sentence (2-3 words), an intermediate one, and a long sentence (14-15 words). In all cases the graphs plot the number of processors vs. the true speed-up, i.e., the speed-up relative to the serial version of the parser. The value for 1 processor is therefore below 1, reflecting the overhead in the parallel version for enforcing mutual exclusion in access to shared data and for scheduling extend tasks.</Paragraph>
<Paragraph position="5"> Grammars 2 and 3 are relatively small (38 productions) and have few constraints, in particular on adjunct placement. For short sentences these grammars therefore yield a chart with few edges and little opportunity for parallelism.</Paragraph>
<Paragraph position="6"> For longer sentences with several adjuncts, on the other hand, these grammars produce many parses and hence offer much greater opportunity for parallelism. Grammar 4 is larger (77 productions) and provides for a wide variety of sentence types (declarative, imperative, wh-question, yes-no question), but also has tighter constraints, including constraints on adjunct placement. The number of edges in the chart and the opportunity for parallelism are therefore fairly large for short sentences, but grow more slowly for longer sentences than with grammars 2 and 3.</Paragraph>
<Paragraph position="7">
Figure 4: speed-up for grammar #2 (small grammar without restrictions) using the fine-grained algorithm for three sentences: a 10-word sentence (curve 1), a 3-word sentence (curve 2), and a 14-word sentence (curve 3).
Figure 5: speed-up for grammar #3 (small grammar with restrictions) using the fine-grained algorithm for three sentences: a 14-word sentence (curve 1), a 5-word sentence (curve 2), and a 3-word sentence (curve 3).
Figure 6: speed-up for grammar #4 using the fine-grained algorithm for three sentences: a 15-word sentence (curve 1), a 2-word sentence (curve 2), and an 8-word sentence (curve 3).
</Paragraph>
<Paragraph position="8"> These differences in grammars are reflected in the results shown in Figures 4-6. For the small grammar without restrictions (grammar #2), the scheduling overhead for fine-grained parallelism largely defeats the benefits of parallelism, and the overall speed-up is small (Figure 4). For the same grammar with restrictions (grammar #3), the effect of the scheduling overhead is reduced, as we explained above. The speed-up is modest for the short sentences, but high (15) for the long sentence with 15 parses (Figure 5). For the question-answering grammar (grammar #4), the speed-up is fairly consistent for short and long sentences (Figure 6).</Paragraph>
</Section>
</Paper>