<?xml version="1.0" standalone="yes"?> <Paper uid="P06-2014"> <Title>Soft Syntactic Constraints for Word Alignment through Discriminative Training</Title> <Section position="4" start_page="0" end_page="105" type="metho"> <SectionTitle> 2 Constrained Alignment </SectionTitle> <Paragraph position="0"> Let an alignment be the complete structure that connects two parallel sentences, and a link be one of the word-to-word connections that make up an alignment. All word alignment methods benefit from some set of constraints. These limit the alignment search space and encourage competition between potential links. The IBM models (Brown et al., 1993) benefit from a one-to-many constraint, where each target word has exactly one generator in the source. Methods like competitive linking (Melamed, 2000) and maximum matching (Taskar et al., 2005) use a one-to-one constraint, where words in either sentence can participate in at most one link. Throughout this paper we assume a one-to-one constraint in addition to any syntax constraints.</Paragraph> <Section position="1" start_page="105" end_page="105" type="sub_section"> <SectionTitle> 2.1 Cohesion Constraint </SectionTitle> <Paragraph position="0"> Suppose we are given a parse tree for one of the two sentences in our sentence pair. We will refer to the parsed language as English, and the unparsed language as Foreign. Given this information, a reasonable expectation is that English phrases will move together when projected onto Foreign. When this occurs, the alignment is said to maintain phrasal cohesion.</Paragraph> <Paragraph position="1"> Fox (2002) measured phrasal cohesion in gold standard alignments by counting crossings. Crossings occur when the projections of two disjoint phrases overlap. For example, Figure 1 shows a head-modifier crossing: the projection of the "the tax" subtree, impôt ... le, is interrupted by the projection of its head, "cause". Alignments with no crossings maintain phrasal cohesion. 
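As an illustration, the crossing count used to measure cohesion can be sketched in a few lines. This is our own minimal reading of the definition, not code from the paper: links are (English index, Foreign index) pairs, and a phrase's projection is the smallest Foreign interval covering its linked words.

```python
def projection(span, links):
    """Project an English span (i, k) onto the Foreign side: the
    smallest Foreign interval covering all words linked from the span.
    Returns None if no word in the span is aligned."""
    tgt = [f for (e, f) in links if span[0] <= e <= span[1]]
    return (min(tgt), max(tgt)) if tgt else None

def overlaps(p, q):
    """Two closed Foreign intervals overlap iff neither ends before the other starts."""
    return p is not None and q is not None and p[0] <= q[1] and q[0] <= p[1]

def count_crossings(disjoint_spans, links):
    """Count pairs of disjoint English phrases whose Foreign projections
    overlap. An alignment with zero crossings maintains phrasal cohesion."""
    n = 0
    for a in range(len(disjoint_spans)):
        for b in range(a + 1, len(disjoint_spans)):
            if overlaps(projection(disjoint_spans[a], links),
                        projection(disjoint_spans[b], links)):
                n += 1
    return n
```

A monotone alignment of two disjoint spans yields zero crossings; interleaving the projections of two disjoint spans yields one.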
Fox's experiments show that cohesion is generally maintained for French-English, and that dependency trees produce the highest degree of cohesion among the tested structures.</Paragraph> <Paragraph position="2"> Cherry and Lin (2003) use the phrasal cohesion of a dependency tree as a constraint on a beam search aligner. This constraint produces a significant reduction in alignment error rate. However, as Fox (2002) showed, even in a language pair as close as French-English, there are situations where phrasal cohesion should not be maintained. These include incorrect parses, systematic violations such as not → ne ... pas, paraphrases, and linguistic exceptions.</Paragraph> <Paragraph position="3"> We aim to create an alignment system that obeys cohesion constraints most of the time, but can violate them when necessary. Unfortunately, Cherry and Lin's beam search solution does not lend itself to a soft cohesion constraint. The imperfect beam search may not be able to find the optimal alignment under a soft constraint. Furthermore, it is not clear what penalty to assign to crossings, or how to learn such a penalty from an iterative training process. The remainder of this paper will develop a complete alignment search that is aware of cohesion violations, and use discriminative learning technology to assign a meaningful penalty to those violations.</Paragraph> </Section> </Section> <Section position="5" start_page="105" end_page="106" type="metho"> <SectionTitle> 3 Syntax-aware Alignment Search </SectionTitle> <Paragraph position="0"> We require an alignment search that can find the globally best alignment under its current objective function, and can account for phrasal cohesion in this objective. IBM Models 1 and 2, HMM (Vogel et al., 1996), and weighted maximum matching alignment all conduct complete searches, but they would not be amenable to monitoring the syntactic interactions of links. 
The tree-to-string models of (Yamada and Knight, 2001) naturally consider syntax, but special modeling considerations are needed to allow any deviations from the provided tree (Gildea, 2003). The Inversion Transduction Grammar or ITG formalism, described in (Wu, 1997), is well suited for our purposes. ITGs perform string-to-string alignment, but do so through a parsing algorithm that will allow us to inform the objective function of our dependency tree.</Paragraph> <Section position="1" start_page="105" end_page="106" type="sub_section"> <SectionTitle> 3.1 Inversion Transduction Grammar </SectionTitle> <Paragraph position="0"> An ITG aligns bitext through synchronous parsing. Both sentences are decomposed into constituent phrases simultaneously, producing a word alignment as a byproduct. Viewed generatively, an ITG writes to two streams at once. Terminal productions produce a token in each stream, or a token in one stream with the null symbol ∅ in the other.</Paragraph> <Paragraph position="1"> We will use standard ITG notation: A → e/f indicates that the token e is produced on the English stream, while f is produced on the Foreign stream.</Paragraph> <Paragraph position="2"> To allow for some degree of movement during translation, non-terminal productions are allowed to be either straight or inverted. Straight productions, with their non-terminals inside square brackets [...], produce their symbols in the same order on both streams. Inverted productions, indicated by angled brackets ⟨...⟩, have their non-terminals produced in the given order on the English stream, but this order is reversed in the Foreign stream.</Paragraph> <Paragraph position="3"> An ITG chart parser provides a polynomial-time algorithm to conduct a complete enumeration of all alignments that are possible according to its grammar. 
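To make the generative view concrete, the straight/inverted distinction can be sketched as a tiny interpreter for ITG derivations. The tuple encoding of trees ('[]' for straight, '<>' for inverted, 't' for terminals) is ours, purely for illustration:

```python
# Minimal sketch of an ITG derivation viewed generatively: each node
# writes to an English and a Foreign stream at once.
#   ('[]', left, right) : straight production, same order on both streams
#   ('<>', left, right) : inverted production, Foreign order reversed
#   ('t', e, f)         : terminal producing e/f; None stands for the null token

def generate(node):
    kind = node[0]
    if kind == 't':
        _, e, f = node
        return ([e] if e is not None else []), ([f] if f is not None else [])
    _, left, right = node
    left_e, left_f = generate(left)
    right_e, right_f = generate(right)
    if kind == '[]':                       # straight: concatenate in order
        return left_e + right_e, left_f + right_f
    return left_e + right_e, right_f + left_f  # inverted: swap Foreign order
```

For instance, an inverted production over the (hypothetical) terminals new/nouveaux and taxes/impôts yields "new taxes" on the English stream but "impôts nouveaux" on the Foreign stream.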
We will use a binary bracketing ITG, the simplest interesting grammar in this formalism: A → [AA] | ⟨AA⟩ | e/f This grammar enforces its own weak cohesion constraint: for every possible alignment, a corresponding binary constituency tree must exist for which the alignment maintains phrasal cohesion.</Paragraph> <Paragraph position="4"> Figure 2 shows a word alignment and the corresponding tree found by an ITG parser. Wu (1997) provides anecdotal evidence that only incorrect alignments are eliminated by ITG constraints. In our French-English data set, an ITG rules out only 0.3% of necessary links beyond those already eliminated by the one-to-one constraint (Cherry and Lin, 2006).</Paragraph> </Section> <Section position="2" start_page="106" end_page="106" type="sub_section"> <SectionTitle> 3.2 Dependency-augmented ITG </SectionTitle> <Paragraph position="0"> An ITG will search all alignments that conform to a possible binary constituency tree. We wish to confine that search to a specific n-ary dependency tree. Fortunately, Wu (1997) provides a method to have an ITG respect a known partial structure. One can seed the ITG parse chart so that spans that do not agree with the provided structure are assigned a value of −∞ before parsing begins.</Paragraph> <Paragraph position="1"> The result is that no constituent is ever constructed with any of these invalid spans.</Paragraph> <Paragraph position="2"> In the case of phrasal cohesion, the invalid spans correspond to spans of the English sentence that interrupt the phrases established by the provided dependency tree. To put this notion formally, we first define some terms: given a subtree T[i,k], where i is the left index of the leftmost leaf in T[i,k] and k is the right index of its rightmost leaf, we say any index j ∈ (i,k) is internal to T[i,k]. Similarly, any index x ∉ [i,k] is external to T[i,k]. 
An invalid span is any span for which our provided tree has a subtree T[i,k] such that one endpoint of the span is internal to T[i,k] while the other is external to it. Figure 3 illustrates this definition, while Figure 4 shows the invalid spans induced by a simple dependency tree.</Paragraph> <Paragraph position="5"> With these invalid spans in place, the ITG can no longer merge part of a dependency subtree with anything other than another part of the same subtree. Since all ITG movement can be explained by inversions, this constrained ITG cannot interrupt one dependency phrase with part of another. Therefore, the phrasal cohesion of the input dependency tree is maintained. Note that this will not search the exact same alignment space as a cohesion-constrained beam search; instead it uses the union of the cohesion constraint and the weaker ITG constraints (Cherry and Lin, 2006).</Paragraph> <Paragraph position="6"> Transforming this form of the cohesion constraint into a soft constraint is straightforward. Instead of overriding the parser so it cannot use invalid English spans, we will note the invalid spans and assign the parser a penalty should it use them. The value of this penalty will be determined through discriminative training, as described in Section 4. Since the penalty is available within the dynamic programming algorithm, the parser will be able to incorporate it to find a globally optimal alignment.</Paragraph> </Section> </Section> <Section position="6" start_page="106" end_page="109" type="metho"> <SectionTitle> 4 Discriminative Training </SectionTitle> <Paragraph position="0"> To discriminatively train our alignment systems, we adopt the Support Vector Machine (SVM) for Structured Output (Tsochantaridis et al., 2004). We have selected this system for its high degree of modularity, and because it has an API freely available1. 
We will summarize the learning mechanism briefly in this section, but readers should refer to (Tsochantaridis et al., 2004) for more details.</Paragraph> <Paragraph position="1"> SVM learning is most easily expressed as a constrained numerical optimization problem. All constraints mentioned in this section are constraints on this optimizer, and have nothing to do with the cohesion constraint from Section 2.</Paragraph> <Section position="1" start_page="107" end_page="108" type="sub_section"> <SectionTitle> 4.1 SVM for Structured Output </SectionTitle> <Paragraph position="0"> Traditional SVMs attempt to find a linear separator that creates the largest possible margin between two classes of vectors. Structured output SVMs attempt to separate the correct structure from all incorrect structures by the largest possible margin, for all training instances. This may sound like a much more difficult problem, but with a few assumptions in place, the task begins to look very similar to a traditional SVM.</Paragraph> <Paragraph position="1"> As in most discriminative training methods, we begin by assuming that a candidate structure y, built for an input instance x, can be adequately described using a feature vector Ψ(x,y). We also assume that our Ψ(x,y) decomposes in such a way that the features can guide a search to recover the structure y from x. That is: struct(x; w) = argmax_{y ∈ Y} ⟨w, Ψ(x,y)⟩ (1) is computable, where Y is the set of all possible structures, and w is a vector that assigns weights to each component of Ψ(x,y). w is the parameter vector we will learn using our SVM.</Paragraph> <Paragraph position="2"> Now the learning task begins to look straightforward: we are working with vectors, and the task of building a structure y has been recast as an argmax operator. 
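Equation (1) amounts to a linear score plus an argmax. A toy sketch, with sparse feature dicts standing in for Ψ, and with Y small enough to enumerate explicitly (the paper instead searches Y with matching or ITG parsing); the toy instance and feature names at the bottom are hypothetical:

```python
def score(w, psi):
    """Inner product <w, Psi(x,y)> over a sparse feature dict."""
    return sum(w.get(name, 0.0) * value for name, value in psi.items())

def struct(x, w, candidates, Psi):
    """Equation (1): argmax over candidate structures y of <w, Psi(x,y)>."""
    return max(candidates, key=lambda y: score(w, Psi(x, y)))

# Hypothetical toy instance: alignments over two two-word sentences,
# with one feature counting identical-word links and one counting links.
x = (["a", "b"], ["a", "b"])

def Psi(x, y):
    return {"same_word": float(sum(1 for (j, k) in y if x[0][j] == x[1][k])),
            "links": float(len(y))}

w = {"same_word": 1.0, "links": -0.1}
candidates = [frozenset(),
              frozenset({(0, 0), (1, 1)}),
              frozenset({(0, 1), (1, 0)})]
best = struct(x, w, candidates, Psi)   # the monotone alignment wins
```

The decomposability assumption is what lets the enumeration be replaced by an efficient search: the same per-link (or per-production) scores drive matching or chart parsing.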
Our learning goal is to find a w so that the correct structure is found: ∀i, ∀y ∈ Y ∖ {yi} : ⟨w, Ψi(yi)⟩ > ⟨w, Ψi(y)⟩ (2)</Paragraph> <Paragraph position="4"> where xi is the ith training example, yi is its correct structure, and Ψi(y) is short-hand for Ψ(xi,y). As several w will fulfill (2) in a linearly separable training set, the unique max-margin objective is defined to be the w that maximizes the minimum distance between yi and the incorrect structures in Y.</Paragraph> <Paragraph position="5"> This learning framework also incorporates a notion of structured loss. In a standard vector classification problem, there is 0-1 loss: a vector is either classified correctly or it is not. In the structured case, some incorrect structures can be better than others. For example, having the argmax select an alignment missing only one link is better than selecting one with no correct links and a dozen wrong ones. A loss function Δ(yi,y) quantifies just how incorrect a particular structure y is. Though Tsochantaridis et al. (2004) provide several ways to incorporate loss into the SVM objective, we will use margin re-scaling, as it corresponds to loss usage in another max-margin alignment approach (Taskar et al., 2005). In margin re-scaling, high loss structures must be separated from the correct structure by a larger margin than low loss structures.</Paragraph> <Paragraph position="6"> To allow some misclassifications during training, a soft-margin requirement replaces our max-margin objective. A slack variable ξi is introduced for each training example xi, to allow the learner to violate the margin at a penalty. The magnitude of this penalty is determined by a hand-tuned parameter C. After a few transformations (Tsochantaridis et al., 2004), the soft-margin learning objective can be formulated as a quadratic program: min_{w,ξ} (1/2)‖w‖² + (C/n) Σi ξi (3) subject to ∀i, ∀y ∈ Y ∖ {yi} : ⟨w, Ψi(yi)⟩ − ⟨w, Ψi(y)⟩ ≥ Δ(yi,y) − ξi, with ξi ≥ 0 (4)</Paragraph> <Paragraph position="8"> Note how the slack variables ξi allow some incorrect structures to be built. 
Also note that the loss Δ(yi,y) determines the size of the margin between structures.</Paragraph> <Paragraph position="9"> Unfortunately, (4) provides one constraint for every possible structure for every training example. Enumerating these constraints explicitly is infeasible, but in reality, only a subset of these constraints are necessary to achieve the same objective. Re-organizing (4) produces: ∀i : ξi ≥ max_{y ∈ Y ∖ {yi}} costi(y; w)</Paragraph> <Paragraph position="11"> where costi is defined as: costi(y; w) = Δ(yi,y) − ⟨w, Ψi(yi)⟩ + ⟨w, Ψi(y)⟩</Paragraph> <Paragraph position="13"> Provided that the max cost structure can be found in polynomial time, we have all the components needed for a constraint generation approach to this optimization problem.</Paragraph> <Paragraph position="14"> Constraint generation places an outer loop around an optimizer that minimizes (3) repeatedly for a growing set of constraints. It begins by minimizing (3) with an empty constraint set in place of (4). This provides values for w and ξ. The max cost structure ȳ = argmax_y costi(y; w)</Paragraph> <Paragraph position="16"> is found for i = 1 with the current w. If the resulting costi(ȳ; w) is greater than the current value of ξi, then this represents a violated constraint (footnote 2) in our complete objective, and a new constraint of the form ξi ≥ costi(ȳ; w) is added to the constraint set. The algorithm then iterates: the optimizer minimizes (3) again with the new constraint set, and solves the max cost problem for i = i + 1 with the new w, growing the constraint set if necessary. Note that the constraints on ξ change with w, as cost is a function of w. Once the end of the training set is reached, the learner loops back to the beginning. Learning ends when the entire training set can be processed without needing to add any constraints. 
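The constraint generation loop can be sketched as pure control flow, with the restricted QP solver and the max cost search abstracted as callbacks. The names solve_qp and max_cost are ours, the stub callbacks at the bottom only illustrate termination, and epsilon plays the role of the tolerance from footnote 2:

```python
def constraint_generation(n_examples, solve_qp, max_cost, epsilon=1e-3):
    """Cutting-plane sketch: repeatedly solve the restricted QP, adding a
    constraint whenever an example's max-cost structure violates its slack
    by more than epsilon. Stops after a full pass that adds nothing.
    solve_qp(constraints) -> (w, xi); max_cost(i, w) -> (y_bar, cost)."""
    constraints = []                      # working set of (i, structure) pairs
    w, xi = solve_qp(constraints)         # start with an empty constraint set
    while True:
        added = False
        for i in range(n_examples):
            y_bar, cost = max_cost(i, w)  # most violated structure for example i
            if cost > xi[i] + epsilon:    # violated constraint found
                constraints.append((i, y_bar))
                w, xi = solve_qp(constraints)
                added = True
        if not added:                     # clean pass over the training set
            return w, constraints

# Toy stubs (not a real QP): the "solver" returns w = |constraints| with
# matching slack, and violations vanish once w reaches 1.
def _toy_solve_qp(constraints):
    w = len(constraints)
    return w, [float(w)]

def _toy_max_cost(i, w):
    return "y_bar", 1.5 - w

w_final, working_set = constraint_generation(1, _toy_solve_qp, _toy_max_cost)
```

In a real system the callbacks would be the SVM optimizer over (3) and the loss-augmented argmax of Section 4.2.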
It can be shown that this will occur within a polynomial number of iterations (Tsochantaridis et al., 2004).</Paragraph> <Paragraph position="17"> With this framework in place, one need only fill in the details to create an SVM for a new structured output space: 1. A Ψ(x,y) function to transform instance-structure pairs into feature vectors 2. A search to find the best structure given a weight vector: argmax_y ⟨w, Ψ(x,y)⟩. This has no role in training, but it is necessary to use the learned weights.</Paragraph> <Paragraph position="18"> 3. A structured loss function Δ(y, ȳ) 4. A search to find the max cost structure: argmax_y costi(y; w)</Paragraph> </Section> <Section position="2" start_page="108" end_page="109" type="sub_section"> <SectionTitle> 4.2 SVMs for Alignment </SectionTitle> <Paragraph position="0"> Using the Structured SVM API, we have created two SVM word aligners: a baseline that uses weighted maximum matching for its argmax operator, and a dependency-augmented ITG that will satisfy our requirements for an aligner with a soft cohesion constraint. (Footnote 2: Generally, the test to see if ξi > costi(ȳ; w) is approximated as ξi > costi(ȳ; w) + ε for a small constant ε.) Our x becomes a bilingual sentence pair, while our y becomes an alignment, represented by a set of links.</Paragraph> <Paragraph position="1"> Given a bipartite graph with edge values, the weighted maximum matching algorithm (West, 2001) will find the matching with maximum summed edge values. To create a matching alignment solution, we reproduce the approach of (Taskar et al., 2005) within the framework described in Section 4.1: 1. We define a feature vector ψ for each potential link l in x, and Ψ in terms of y's component links: Ψ(x,y) = Σ_{l ∈ y} ψ(l). 2. Our structure search is the matching algorithm. The input bipartite graph has an edge for each l. Each edge is given the value v(l) = ⟨w, ψ(l)⟩.</Paragraph> <Paragraph position="2"> 3. 
We adopt the weighted Hamming loss described in (Taskar et al., 2005): Δ(y, ȳ) = co·|y ∖ ȳ| + cc·|ȳ ∖ y| where co is an omission penalty and cc is a commission penalty.</Paragraph> <Paragraph position="3"> 4. Our max cost search corresponds to their loss-augmented matching problem. The input graph is modified to prefer costly links: ∀l ∉ y : v(l) = ⟨w, ψ(l)⟩ + cc; ∀l ∈ y : v(l) = ⟨w, ψ(l)⟩ − co. Note that our max cost search could not have been implemented as loss-augmented matching had we selected one of the other loss objectives presented in (Tsochantaridis et al., 2004) in place of margin rescaling.</Paragraph> <Paragraph position="4"> We use the same feature representation ψ(l) as (Taskar et al., 2005), with some small exceptions. Let l = (Ej,Fk) be a potential link between the jth word of English sentence E and the kth word of Foreign sentence F. To measure correlation between Ej and Fk we use conditional link probability (Cherry and Lin, 2003) in place of the Dice coefficient: cor(Ej,Fk) = (links(Ej,Fk) − d) / cooc(Ej,Fk)</Paragraph> <Paragraph position="6"> where the link counts are determined by word-aligning 50K sentence pairs with another matching SVM that uses the φ² measure (Gale and Church, 1991) in place of Dice. The φ² measure requires only co-occurrence counts. d is an absolute discount parameter as in (Moore, 2005). Also, we omit the IBM Model 4 Prediction features, as we wish to know how well we can do without resorting to traditional word alignment techniques. Otherwise, the features remain the same, including distance features that measure</Paragraph> <Paragraph position="8"> frequencies; common-word features; a bias term set always to 1; and an HMM approximation cor(Ej+1,Fk+1).</Paragraph> <Paragraph position="9"> Because of the modularity of the structured output SVM, our SVM ITG re-uses a large amount of infrastructure from the matching solution. 
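Steps 3 and 4 of the matching recipe above are compact enough to sketch directly; in this minimal reading, alignments are sets of (j, k) links, and co and cc are the omission and commission penalties:

```python
def hamming_loss(y_gold, y_hyp, co=1.0, cc=1.0):
    """Weighted Hamming loss: co * |y \\ y_bar| + cc * |y_bar \\ y|,
    i.e. a penalty per missed gold link plus a penalty per extra link."""
    return co * len(y_gold - y_hyp) + cc * len(y_hyp - y_gold)

def loss_augmented_values(scores, y_gold, co=1.0, cc=1.0):
    """Shift each edge value so the matcher prefers costly links:
    +cc for links outside the gold alignment, -co for gold links.
    `scores` maps each link l to its model score <w, psi(l)>."""
    return {l: v + (cc if l not in y_gold else -co)
            for l, v in scores.items()}
```

Running a maximum matching over the shifted edge values yields exactly the max cost structure required by the constraint generation loop.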
We essentially plug an ITG parser in the place of the matching algorithm, and add features to take advantage of information made available by the parser. x remains a sentence pair, and y becomes an ITG parse tree that decomposes x and specifies an alignment. Our required components are as follows: 1. We define a feature vector ψT on instances of production rules, r. Ψ is a function of the decomposition specified by y: Ψ(x,y) = Σ_{r ∈ y} ψT(r).</Paragraph> <Paragraph position="10"> 2. The structure search is a weighted ITG parser that maximizes summed production scores.</Paragraph> <Paragraph position="11"> Each instance of a production rule r is assigned a score of ⟨w, ψT(r)⟩. 3. Loss is unchanged, defined in terms of the alignment induced by y.</Paragraph> <Paragraph position="12"> 4. A loss-augmented ITG is used to find the max cost. Productions of the form A → e/f that correspond to links have their scores augmented as in the matching system.</Paragraph> <Paragraph position="13"> The ψT vector has two new features in addition to those present in the matching system's ψ. These features can be active only for non-terminal productions, which have the form A → [AA] | ⟨AA⟩. One feature indicates an inverted production A → ⟨AA⟩, while the other indicates the use of an invalid span according to a provided English dependency tree, as described in Section 3.2. These are the only features that can be active for non-terminal productions.</Paragraph> <Paragraph position="14"> A terminal production r_l that corresponds to a link l is given that link's features from the matching system: ψT(r_l) = ψ(l). Terminal productions r_∅ corresponding to unaligned tokens are given blank feature vectors: ψT(r_∅) = 0.</Paragraph> <Paragraph position="15"> The SVM requires complete Ψ vectors for the correct training structures. Unfortunately, our training set contains gold standard alignments, not ITG parse trees. 
The gold standard is divided into sure and possible link sets S and P (Och and Ney, 2003). Links in S must be included in a correct alignment, while P links are optional. We create ITG trees from the gold standard using the following sorted priorities during tree construction: This creates trees that represent high scoring alignments, using a minimal number of invalid spans. Only the span and inversion counts of these trees will be used in training, so we need not achieve a perfect tree structure. We still evaluate all methods with the original alignment gold standard.</Paragraph> </Section> </Section> </Paper>
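For reference, the sure/possible gold standard mentioned above is conventionally scored with alignment error rate. A sketch of the standard Och and Ney (2003) formulation, assuming S ⊆ P and alignments represented as sets of (j, k) links:

```python
def aer(A, S, P):
    """Alignment error rate: AER = 1 - (|A & S| + |A & P|) / (|A| + |S|).
    A is the hypothesis alignment, S the sure links, P the possible links
    (with S a subset of P). Lower is better; 0.0 means A covers all of S
    and proposes nothing outside P."""
    return 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))
```

Under this measure, omitting an optional P link is free, which is why the sure/possible split matters when comparing aligners.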