<?xml version="1.0" standalone="yes"?>
<Paper uid="P92-1006">
  <Title>Parsing*</Title>
  <Section position="3" start_page="0" end_page="40" type="metho">
    <SectionTitle>
2. Probabilistic Models in Picky
</SectionTitle>
    <Paragraph position="0"> The probabilistic models used in the implementation of Picky are independent of the algorithm. To facilitate the comparison between the performance of Picky and its predecessor, Pearl, the probabilistic model implemented for Picky is similar to Pearl's scoring model. (Pearl is a probabilistic Earley-style parser, P-Earl; Picky is a probabilistic CKY-like parser, P-CKY.)</Paragraph>
    <Paragraph position="1"> The descriptions of the probabilistic models used in Picky and of the Picky algorithm are similar in content to the corresponding sections of Magerman and Weir \[13\]. The experimental results and discussions which follow in sections 4-6 are original.</Paragraph>
    <Paragraph position="2"> Picky's scoring model is the context-free grammar with context-sensitive probability (CFG with CSP) model. This probabilistic model estimates the probability of each parse T given the words in the sentence S, P(T|S), by assuming that each non-terminal and its immediate children are dependent on the non-terminal's siblings and parent and on the part-of-speech trigram centered at the beginning of that rule:</Paragraph>
    <Paragraph position="4"> where C is the non-terminal node which immediately dominates A, a1 is the part-of-speech associated with the leftmost word of constituent A, and a0 and a2 are the parts-of-speech of the words to the left and to the right of a1, respectively. See Magerman and Marcus 1991 \[12\] for a more detailed description of the CFG with CSP model.</Paragraph>
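The CSP decomposition above can be sketched as a relative-frequency estimator over (rule, context) events from a parsed corpus. This is an illustrative sketch only; the function names and event representation are our own assumptions, not the paper's implementation:

```python
from collections import defaultdict

# Hypothetical sketch: each rule application is conditioned on the parent
# non-terminal C and the part-of-speech trigram (a0, a1, a2) centered at
# the rule's left-corner word.
rule_context_counts = defaultdict(int)   # counts of (rule, C, a0, a1, a2)
context_counts = defaultdict(int)        # counts of (C, a0, a1, a2)

def train(events):
    """events: iterable of (rule, parent, a0, a1, a2) from a parsed corpus."""
    for rule, parent, a0, a1, a2 in events:
        rule_context_counts[(rule, parent, a0, a1, a2)] += 1
        context_counts[(parent, a0, a1, a2)] += 1

def rule_prob(rule, parent, a0, a1, a2):
    """Relative-frequency estimate of P(rule | parent, a0, a1, a2)."""
    denom = context_counts[(parent, a0, a1, a2)]
    return rule_context_counts[(rule, parent, a0, a1, a2)] / denom if denom else 0.0

def tree_prob(rule_applications):
    """P(T|S) under the independence assumption: a product over the
    tree's rule applications, each scored in its local context."""
    p = 1.0
    for rule, parent, a0, a1, a2 in rule_applications:
        p *= rule_prob(rule, parent, a0, a1, a2)
    return p
```

Under this decomposition, two constituents of the same category that dominate different part-of-speech sequences can receive different scores, which is precisely why they cannot be collapsed in the chart (see section 4).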
  </Section>
  <Section position="4" start_page="40" end_page="42" type="metho">
    <SectionTitle>
3. The Parsing Algorithm
</SectionTitle>
    <Paragraph position="0"> A probabilistic language model, such as the aforementioned CFG with CSP model, provides a metric for evaluating the likelihood of a parse tree. However, while it may suggest a method for evaluating partial parse trees, a language model alone does not dictate the search strategy for determining the most likely analysis of an input.</Paragraph>
    <Paragraph position="1"> Since exhaustive search of the space of parse trees produced by a natural language grammar is generally not feasible, a parsing model can best take advantage of a probabilistic language model by incorporating it into a parser which probabilistically models the parsing process. Picky attempts to model the chart parsing process for context-free grammars using probabilistic prediction.</Paragraph>
    <Paragraph position="2"> Picky parses sentences in three phases: covered left-corner phase (I), covered bidirectional phase (II), and tree completion phase (III). Each phase uses a different method for proposing edges to be introduced to the parse chart. The first phase, covered left-corner, uses probabilistic prediction based on the left-corner word of the left-most daughter of a constituent to propose edges.</Paragraph>
    <Paragraph position="3"> The covered bidirectional phase also uses probabilistic prediction, but it allows prediction to occur from the left-corner word of any daughter of a constituent, and parses that constituent outward (bidirectionally) from that daughter. These phases are referred to as "covered" because, during these phases, the parsing mechanism proposes only edges that have non-zero probability according to the prediction model, i.e. that have been covered by the training process. The final phase, tree completion, is essentially an exhaustive search of all interpretations of the input, according to the grammar.</Paragraph>
    <Paragraph position="4"> However, the search proceeds in best-first order, according to the measures provided by the language model.</Paragraph>
    <Paragraph position="5"> This phase is used only when the probabilistic prediction model fails to propose the edges necessary to complete a parse of the sentence.</Paragraph>
    <Paragraph position="6"> The following sections will present and motivate the prediction techniques used by the algorithm, and will then describe how they are implemented in each phase.</Paragraph>
    <Section position="1" start_page="40" end_page="41" type="sub_section">
      <SectionTitle>
3.1. Probabilistic Prediction
</SectionTitle>
      <Paragraph position="0"> Probabilistic prediction is a general method for using probabilistic information extracted from a parsed corpus to estimate the likelihood that predicting an edge at a certain point in the chart will lead to a correct analysis of the sentence. The Picky algorithm is not dependent on the specific probabilistic prediction model used. The model used in the implementation, which is similar to the probabilistic language model, will be described. The prediction model used in the implementation of Picky estimates the probability that an edge proposed at a point in the chart will lead to a correct parse to be:</Paragraph>
      <Paragraph position="2"> where a1 is the part-of-speech of the left-corner word of B, a0 is the part-of-speech of the word to the left of a1, and a2 is the part-of-speech of the word to the right of a1.</Paragraph>
      <Paragraph position="3"> To illustrate how this model is used, consider the sentence The cow raced past the barn. (3) The word "cow" in the word sequence "the cow raced" predicts NP → det n, but not NP → det n PP, since a PP is unlikely to generate a verb, based on training material. Assuming the prediction model is well trained, it will propose the interpretation of "raced" as the beginning of a participial phrase modifying "the cow," as in The cow raced past the barn mooed. (4) However, the interpretation of "raced" as a past participle will receive a low probability estimate relative to the verb interpretation, since the prediction model only considers local context.</Paragraph>
      <Paragraph position="4"> The prediction model need not be identical to the language model used to evaluate complete analyses. However, it is helpful if this is the case, so that the probability estimates of incomplete edges will be consistent with the probability estimates of completed constituents.</Paragraph>
      <Paragraph position="5"> Throughout this discussion, we will describe the prediction process using words as the predictors of edges. In the implementation, due to sparse data concerns, only parts-of-speech are used to predict edges. Given more robust estimation techniques, a probabilistic prediction model conditioned on word sequences is likely to perform as well or better.</Paragraph>
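The "covered" prediction step can be sketched as a trained lookup table: a rule is proposed at a chart position only if the part-of-speech trigram centered there assigned it a non-zero count in training. The function names and the string encoding of rules below are hypothetical, not from the paper:

```python
from collections import defaultdict

# Hypothetical sketch of covered probabilistic prediction.
prediction_counts = defaultdict(int)  # counts of (rule, trigram)
trigram_counts = defaultdict(int)     # counts of trigram

def train_predictions(events):
    """events: iterable of (rule, (a0, a1, a2)) from a parsed corpus."""
    for rule, trigram in events:
        prediction_counts[(rule, trigram)] += 1
        trigram_counts[trigram] += 1

def predict(grammar_rules, trigram):
    """Return (rule, probability) pairs with non-zero trained probability,
    highest first. Only these 'covered' rules are proposed in phases I/II;
    zero-probability rules are deferred to the tree completion phase."""
    scored = []
    for rule in grammar_rules:
        n = prediction_counts[(rule, trigram)]
        if n:
            scored.append((rule, n / trigram_counts[trigram]))
    return sorted(scored, key=lambda rp: -rp[1])
```

In the "the cow raced" example, a trigram such as (det, n, v) would have trained counts for NP → det n but not for NP → det n PP, so only the former would be proposed.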
      <Paragraph position="6"> The process of probabilistic prediction is analogous to that of a human parser recognizing predictive lexical items or sequences in a sentence and using these hints to restrict the search for the correct analysis of the sentence. For instance, a sentence beginning with a wh-word and auxiliary inversion is very likely to be a question, and trying to interpret it as an assertion is wasteful. If a verb is generally ditransitive, one should look for two objects to that verb instead of one or none. Using probabilistic prediction, sentences whose interpretations are highly predictable based on the trained parsing model can be analyzed with little wasted effort, sometimes generating no more than ten spurious constituents for sentences which contain between 30 and 40 constituents! Also, in some of these cases every predicted rule results in a completed constituent, indicating that the model made no incorrect predictions and was led astray only by genuine ambiguities in parts of the sentence.</Paragraph>
    </Section>
    <Section position="2" start_page="41" end_page="41" type="sub_section">
      <SectionTitle>
3.2. Exhaustive Prediction
</SectionTitle>
      <Paragraph position="0"> When probabilistic prediction fails to generate the edges necessary to complete a parse of the sentence, exhaustive prediction uses the edges which have been generated in earlier phases to predict new edges which might combine with them to produce a complete parse. Exhaustive prediction is a combination of two existing types of prediction, "over-the-top" prediction \[11\] and top-down filtering.</Paragraph>
      <Paragraph position="1"> Over-the-top prediction is applied to complete edges. A completed edge A → α will predict all edges of the form B → βAγ. Top-down filtering is used to predict edges in order to complete incomplete edges. An edge of the form A → αB0B1B2β, where a B1 has been recognized, will predict edges of the form B0 → γ before B1 and edges of the form B2 → δ after B1.</Paragraph>
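The two components of exhaustive prediction can be sketched over a grammar represented as (lhs, rhs) pairs. This is a minimal illustration under our own representation, not the paper's data structures:

```python
# Hypothetical sketch of the two halves of exhaustive prediction.
# A grammar rule is a pair (lhs, rhs_tuple).

def over_the_top(completed_lhs, grammar):
    """A completed edge A -> alpha predicts every rule with A in its RHS.
    (The Picky implementation restricts this to rules whose RHS begins
    with A, to avoid bidirectional bookkeeping; see footnote to (5).)"""
    return [r for r in grammar if completed_lhs in r[1]]

def top_down_filter(rule, dot, grammar):
    """For an incomplete edge whose dot precedes symbol rhs[dot],
    predict the rules that could expand that needed symbol."""
    lhs, rhs = rule
    if dot >= len(rhs):
        return []  # edge is complete; nothing left to predict
    needed = rhs[dot]
    return [r for r in grammar if r[0] == needed]
```

For example, with S → NP VP, a completed NP predicts both the S rule over the top, while an S edge whose dot precedes VP predicts all VP rules top-down.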
    </Section>
    <Section position="3" start_page="41" end_page="42" type="sub_section">
      <SectionTitle>
3.3. Bidirectional Parsing
</SectionTitle>
      <Paragraph position="0"> The only difference between phases I and II is that phase II allows bidirectional parsing. Bidirectional parsing is a technique for initiating the parsing of a constituent from any point in that constituent. Chart parsing algorithms generally process constituents from left-to-right.</Paragraph>
      <Paragraph position="1"> For instance, given a grammar rule A → B1 B2 ... Bn (5) (In the implementation of Picky, over-the-top prediction for A → α will only predict edges of the form B → Aγ. This limitation on over-the-top prediction is due to the expensive bookkeeping involved in bidirectional parsing. See the section on bidirectional parsing for more details.)</Paragraph>
      <Paragraph position="2"> a parser generally would attempt to recognize a B1, then search for a B2 following it, and so on. Bidirectional parsing recognizes an A by looking for any Bi. Once a Bi has been parsed, a bidirectional parser looks for a Bi-1 to the left of the Bi, a Bi+1 to the right, and so on.</Paragraph>
      <Paragraph position="3"> Bidirectional parsing is generally an inefficient technique, since it allows duplicate edges to be introduced into the chart. As an example, consider a context-free rule NP → DET N, and assume that there is a determiner followed by a noun in the sentence being parsed. Using bidirectional parsing, this NP rule can be predicted both by the determiner and by the noun. The edge predicted by the determiner will look to the right for a noun, find one, and introduce a new edge consisting of a completed NP. The edge predicted by the noun will look to the left for a determiner, find one, and also introduce a new edge consisting of a completed NP. Both of these NPs represent identical parse trees, and are thus redundant. If the algorithm permits both edges to be inserted into the chart, then an edge XP → α NP β will be advanced by both NPs, creating two copies of every XP edge. These duplicate XP edges can themselves be used in other rules, and so on.</Paragraph>
      <Paragraph position="4"> To avoid this propagation of redundant edges, the parser must ensure that no duplicate edges are introduced into the chart. Picky does this simply by verifying, every time an edge is added, that the edge is not already in the chart. Although eliminating redundant edges prevents excessive inefficiency, bidirectional parsing may still perform more work than traditional left-to-right parsing. In the previous example, three edges are introduced into the chart to parse the NP → DET N edge. A left-to-right parser would only introduce two edges, one when the determiner is recognized, and another when the noun is recognized.</Paragraph>
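The duplicate-edge check described above amounts to set membership on a canonical edge representation. A minimal sketch, with edges represented as our own hypothetical (lhs, rhs, start, end) tuples:

```python
# Hypothetical sketch of the redundancy check in bidirectional parsing:
# before an edge enters the chart, verify that an identical edge is not
# already present.

class Chart:
    def __init__(self):
        self.edges = set()

    def add(self, edge):
        """Insert edge unless it duplicates an existing one.
        Returns True if the edge was new."""
        if edge in self.edges:
            return False  # same parse tree already derived from the other side
        self.edges.add(edge)
        return True
```

In the NP → DET N example, the completion predicted from the determiner and the completion predicted from the noun both yield the same (NP, (det, n), i, j) edge; the second insertion is rejected, stopping the duplicates before any XP edge can be doubled.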
      <Paragraph position="5"> The benefit of bidirectional parsing can be seen when probabilistic prediction is introduced into the parser.</Paragraph>
      <Paragraph position="6"> Frequently, the syntactic structure of a constituent is not determined by its left-corner word. For instance, in the sequence V NP PP, the prepositional phrase PP can modify either the noun phrase NP or the entire verb phrase V NP. These two interpretations require different VP rules to be predicted, but the decision about which rule to use depends on more than just the verb. The correct rule may best be predicted by knowing the preposition used in the PP. Using probabilistic prediction, the decision is made by pursuing the rule which has the highest probability according to the prediction model. This rule is then parsed bidirectionally. If this rule is in fact the correct rule to analyze the constituent, then no other  predictions will be made for that constituent, and there will be no more edges produced than in left-to-right parsing. Thus, the only case where bidirectional parsing is less efficient than left-to-right parsing is when the prediction model fails to capture the elements of context of the sentence which determine its correct interpretation.</Paragraph>
    </Section>
    <Section position="4" start_page="42" end_page="42" type="sub_section">
      <SectionTitle>
3.4. The Three Phases of Picky
</SectionTitle>
      <Paragraph position="0"> Covered Left-Corner The first phase uses probabilistic prediction based on the part-of-speech sequences from the input sentence to predict all grammar rules which have a non-zero probability of being dominated by that trigram (based on the training corpus), i.e.</Paragraph>
      <Paragraph position="2"> where a1 is the part-of-speech of the left-corner word of B. In this phase, the only exception to the probabilistic prediction is that any rule which can immediately dominate the preterminal category of any word in the sentence is also predicted, regardless of its probability.</Paragraph>
      <Paragraph position="3"> This type of prediction is referred to as exhaustive prediction. All of the predicted rules are processed using a standard best-first agenda processing algorithm, where the highest scoring edge in the chart is advanced.</Paragraph>
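The best-first agenda processing named above can be sketched with a priority queue in which the highest-scoring edge is always advanced next. A minimal sketch using Python's `heapq` (a min-heap, so scores are negated); the class and method names are our own:

```python
import heapq

# Hypothetical sketch of best-first agenda processing: the highest-scoring
# edge in the chart is always the next one advanced.

class Agenda:
    def __init__(self):
        self._heap = []
        self._tiebreak = 0  # insertion order breaks score ties

    def push(self, score, edge):
        heapq.heappush(self._heap, (-score, self._tiebreak, edge))
        self._tiebreak += 1

    def pop(self):
        """Remove and return the highest-scoring (score, edge) pair."""
        neg_score, _, edge = heapq.heappop(self._heap)
        return -neg_score, edge

    def __bool__(self):
        return bool(self._heap)
```

The tie-break counter keeps the heap comparison from ever reaching the edge objects themselves, which need not be orderable.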
      <Paragraph position="4"> Covered Bidirectional If an S spanning the entire word string is not recognized by the end of the first phase, the covered bidirectional phase continues the parsing process. Using the chart generated by the first phase, rules are predicted not only by the trigram centered at the left-corner word of the rule, but by the trigram centered at the left-corner word of any of the children of that rule, i.e.</Paragraph>
      <Paragraph position="6"> where b1 is the part-of-speech associated with the left-most word of constituent B. This phase introduces incomplete theories into the chart which need to be expanded to the left and to the right, as described in the bidirectional parsing section above.</Paragraph>
      <Paragraph position="7"> Tree Completion If the bidirectional processing fails to produce a successful parse, then it is assumed that there is some part of the input sentence which is not covered well by the training material. In the final phase, exhaustive prediction is performed on all complete theories which were introduced in the previous phases but which are not predicted by the trigrams beneath them (i.e. P(rule | trigram) = 0).</Paragraph>
      <Paragraph position="8"> In this phase, edges are only predicted by their left-corner word. As mentioned previously, bidirectional parsing can be inefficient when the prediction model is inaccurate. Since all edges to which the prediction model assigns non-zero probability have already been predicted, the model can no longer provide any information for future predictions. Thus, bidirectional parsing in this phase is very likely to be inefficient. Edges already in the chart will be parsed bidirectionally, since they were predicted by the model, but all new edges will be predicted by the left-corner word only.</Paragraph>
      <Paragraph position="9"> Since it is already known that the prediction model will assign a zero probability to these rules, these predictions are instead scored based on the number of words spanned by the subtree which predicted them. Thus, this phase processes longer theories by introducing rules which can advance them. Each new theory which is proposed by the parsing process is exhaustively predicted for, using the length-based scoring model.</Paragraph>
      <Paragraph position="10"> The final phase is used only when a sentence is so far outside the scope of the training material that none of the previous phases is able to process it. This phase of the algorithm exhibits the worst-case exponential behavior found in chart parsers which do not use node packing. Since the probabilistic model is no longer useful in this phase, the parser is forced to propose an enormous number of theories. The expectation (or hope) is that one of the theories which spans most of the sentence will be completed by this final process. Depending on the size of the grammar used, it may be infeasible to allow the parser to exhaust all possible predictions before deciding an input is ungrammatical. The question of when the parser should give up is an empirical issue which will not be explored here.</Paragraph>
      <Paragraph position="11"> Post-processing: Partial Parsing Once the final phase has exhausted all predictions made by the grammar, or, more likely, once the probability of all edges in the chart falls below a certain threshold, Picky determines the sentence to be ungrammatical. However, since the chart produced by Picky contains all recognized constituents, sorted by probability, the chart can be used to extract partial parses. As implemented, Picky prints out the most probable completed S constituent.</Paragraph>
    </Section>
  </Section>
  <Section position="5" start_page="42" end_page="43" type="metho">
    <SectionTitle>
4. Why a New Algorithm?
</SectionTitle>
    <Paragraph position="0"> Previous research efforts have produced a wide variety of parsing algorithms for probabilistic and non-probabilistic grammars. One might question the need for a new algorithm to deal with context-sensitive probabilistic models. However, these previous efforts have generally failed to address both efficiency and robustness effectively.</Paragraph>
    <Paragraph position="1"> For non-probabilistic grammar models, the CKY algorithm \[9\]\[17\] provides efficiency and robustness in polynomial time, O(Gn³). CKY can be modified to handle simple P-CFGs \[2\] without loss of efficiency. However, with the introduction of context-sensitive probability models, such as the history-based grammar \[1\] and the CFG with CSP models \[12\], CKY cannot be modified to accommodate these models without exhibiting exponential behavior in the grammar size G. The linear behavior of CKY with respect to grammar size is dependent upon being able to collapse the distinctions among constituents of the same type which span the same part of the sentence. However, when using a context-sensitive probabilistic model, these distinctions are necessary. For instance, in the CFG with CSP model, the part-of-speech sequence generated by a constituent affects the probability of constituents that dominate it. Thus, two constituents which generate different part-of-speech sequences must be considered individually and cannot be collapsed.</Paragraph>
    <Paragraph position="2"> Earley's algorithm \[6\] is even more attractive than CKY in terms of efficiency, but it suffers from the same exponential behavior when applied to context-sensitive probabilistic models. Still, Earley-style prediction improves the average case performance of an exponential chart-parsing algorithm by reducing the size of the search space, as was shown in \[12\]. However, Earley-style prediction has serious impacts on robust processing of ungrammatical sentences. Once a sentence has been determined to be ungrammatical, Earley-style prediction prevents any new edges from being added to the parse chart. This behavior seriously degrades the robustness of a natural language system using this type of parser.</Paragraph>
    <Paragraph position="3"> A few recent works on probabilistic parsing have proposed algorithms and devices for efficient, robust chart parsing. Bobrow \[3\] and Chitrao \[4\] introduce agenda-based probabilistic parsing algorithms, although neither describes its algorithm in detail. Both algorithms use a strictly best-first search. As both Chitrao and Magerman \[12\] observe, a best-first search penalizes longer and more complex constituents (i.e. constituents which are composed of more edges), resulting in thrashing and loss of efficiency. Chitrao proposes a heuristic penalty based on constituent length to deal with this problem. Magerman avoids thrashing by calculating the score of a parse tree using the geometric mean of the probabilities of the constituents contained in the tree.</Paragraph>
    <Paragraph position="4"> Moore \[14\] discusses techniques for improving the efficiency and robustness of chart parsers for unification grammars, but the ideas are applicable to probabilistic grammars as well. Some of the techniques proposed are well-known ideas, such as compiling ε-transitions (null gaps) out of the grammar and heuristically controlling the introduction of predictions.</Paragraph>
    <Paragraph position="5"> The Picky parser incorporates what we deem to be the most effective techniques of these previous works into one parsing algorithm. New techniques, such as probabilistic prediction and the multi-phase approach, are introduced where the literature does not provide adequate solutions. Picky combines the standard chart parsing data structures with existing bottom-up and top-down parsing operations, and includes a probabilistic version of top-down filtering and over-the-top prediction. Picky also incorporates a limited form of bi-directional parsing in a way which avoids its computationally expensive side-effects. It uses an agenda processing control mechanism with the scoring heuristics of Pearl.</Paragraph>
    <Paragraph position="6"> With the exception of probabilistic prediction, most of the ideas in this work individually are not original to the parsing technology literature. However, the combination of these ideas provides robustness without sacrificing efficiency, and efficiency without losing accuracy.</Paragraph>
  </Section>
</Paper>