<?xml version="1.0" standalone="yes"?>
<Paper uid="E06-1031">
  <Title>CDER: Efficient MT Evaluation Using Block Movements</Title>
  <Section position="3" start_page="241" end_page="241" type="metho">
    <SectionTitle>
2 MT Evaluation
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="241" end_page="241" type="sub_section">
      <SectionTitle>
2.1 Block Reordering and State of the Art
</SectionTitle>
      <Paragraph position="0"> In MT - as opposed to other natural language processing tasks like speech recognition - there is usually more than one correct outcome of a task. In many cases, alternative translations of a sentence differ from each other mostly by the ordering of blocks of words. Consequently, an evaluation measure for MT should be able to detect and allow for block reordering. Nevertheless, a higher &amp;quot;amount&amp;quot; of reordering between a candidate translation and a reference translation should still be reflected in a worse evaluation score. In other words, the more blocks there are to be reordered between reference and candidate sentence, the higher we want the measure to evaluate the distance between these sentences.</Paragraph>
      <Paragraph position="1"> State-of-the-art evaluation measures for MT penalize movement of blocks rather severely: n-gram based scores such as BLEU or NIST still yield a high unigram precision if blocks are reordered. For higher-order n-grams, though, the precision drops. As a consequence, this affects the overall score significantly. WER, which is based on Levenshtein distance, penalizes the reordering of blocks even more heavily. It measures the distance by substitution, deletion and insertion operations for each word in a relocated block.</Paragraph>
      <Paragraph position="2"> PER, on the other hand, ignores the ordering of the words in the sentences completely. This often leads to an overly optimistic assessment of translation quality.</Paragraph>
    </Section>
    <Section position="2" start_page="241" end_page="241" type="sub_section">
      <SectionTitle>
2.2 Long Jumps
</SectionTitle>
      <Paragraph position="0"> The approach we pursue in this paper is to extend the Levenshtein distance by an additional operation, namely block movement. The number of blocks in a sentence is equal to the number of gaps among the blocks plus one. Thus, the block movements can equivalently be expressed as long jump operations that jump over the gaps between two blocks. The costs of a long jump are constant. The blocks are read in the order of one of the sentences. These long jumps are combined with the &amp;quot;classical&amp;quot; Levenshtein edit operations, namely insertion, deletion, substitution, and the zero-cost operation identity. The resulting long jump distance dLJ gives the minimum number of operations which are necessary to transform the candidate sentence into the reference sentence. Like the Levenshtein distance, the long jump distance can be depicted using an alignment grid as shown in Figure 1: Here, each grid point corresponds to a pair of inter-word positions in candidate and reference sentence, respectively. dLJ is the minimum cost of a path between the lower left (first) and the upper right (last) alignment grid point which covers all reference and candidate words. Deletions and insertions correspond to horizontal and vertical edges, respectively. Substitutions and identity operations correspond to diagonal edges. Edges between arbitrary grid points from the same row correspond to long jump operations. It is easy to see that dLJ is symmetrical.</Paragraph>
      <Paragraph position="1"> In the example, the best path contains one deletion edge, one substitution edge, and three long jump edges. Therefore, the long jump distance between the sentences is five. In contrast, the best Levenshtein path contains one deletion edge, four identity and five consecutive substitution edges; the Levenshtein distance between the two sentences is six. The effect of reordering on the BLEU measure is even higher in this example: Whereas 8 of the 10 unigrams from the candidate sentence can be found in the reference sentence, this holds for only 4 bigrams, and 1 trigram. Not a single one of the 7 candidate four-grams occurs in the reference sentence.</Paragraph>
    </Section>
  </Section>
  <Section position="4" start_page="241" end_page="245" type="metho">
    <SectionTitle>
3 CDER: A New Evaluation Measure
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="241" end_page="242" type="sub_section">
      <SectionTitle>
3.1 Approach
</SectionTitle>
      <Paragraph position="0"> (Lopresti and Tomkins, 1997) showed that finding an optimal path in a long jump alignment grid is an NP-hard problem. Our experiments showed that the calculation of exact long jump distances becomes impractical for sentences longer than 20 words.</Paragraph>
      <Paragraph position="1">  substitution operations are depicted. Only long jump edges from the best path are drawn. A possible way to achieve polynomial run-time is to restrict the number of admissible block permutations. This has been implemented by (Leusch et al., 2003) in the inversion word error rate. Alternatively, a heuristic or approximative distance can be calculated, as in GTM by (Turian et al., 2003). An implementation of both approaches at the same time can be found in TER by (Snover et al., 2005). In this paper, we will present another approach which has a suitable run-time, while still maintaining completeness of the calculated measure. The idea of the proposed method is to drop some restrictions on the alignment path.</Paragraph>
      <Paragraph position="2"> The long jump distance as well as the Levenshtein distance require both reference and candidate translation to be covered completely and disjointly. When extending the metric by block movements, we drop this constraint for the candidate translation. That is, only the words in the reference sentence have to be covered exactly once, whereas those in the candidate sentence can be covered zero, one, or multiple times. Dropping the constraints makes an efficient computation of the distance possible. We drop the constraints for the candidate sentence and not for the reference sentence because we do not want any information contained in the reference to be omitted. Moreover, the reference translation will not contain unnecessary repetitions of blocks.</Paragraph>
      <Paragraph position="3"> The new measure - which will be called CDER in the following - can thus be seen as a measure oriented towards recall, while measures like BLEU are guided by precision. The CDER is based on the CDCD distance2 introduced in (Lopresti and Tomkins, 1997). The authors show there that the problem of finding the optimal solution can be solved in O(I2 * L) time, where I is the length of the candidate sentence and L the length of the reference sentence. Within this paper, we will refer to this distance as dCD . In the next subsection, we will show how it can be computed in O(I *L) time using a modification of the Levenshtein algorithm.</Paragraph>
      <Paragraph position="4"> We also studied the reverse direction of the described measure; that is, we dropped the coverage constraints for the reference sentence instead of the candidate sentence. Additionally, the maximum of both directions has been considered as distance measure. The results in Section 5.2 will show that the measure using the originally proposed direction has a significantly higher correlation with human evaluation than the other directions.</Paragraph>
    </Section>
    <Section position="2" start_page="242" end_page="243" type="sub_section">
      <SectionTitle>
3.2 Algorithm
</SectionTitle>
      <Paragraph position="0"> Our algorithm for calculating dCD is based on the dynamic programming algorithm for the Levenshtein distance (Levenshtein, 1966). The Levenshtein distance dLev(eI1, ~eL1parenrightbig between two strings eI1 and ~eL1 can be calculated in constant time if the Levenshtein distances of the substrings, dLev(eI[?]11 , ~eL1parenrightbig, dLev(eI1, ~eL[?]11 parenrightbig, and dLev(eI[?]11 , ~eL[?]11 parenrightbig, are known.</Paragraph>
      <Paragraph position="1"> Consequently, an auxiliary quantity</Paragraph>
      <Paragraph position="3"> is stored in an I xL table. This auxiliary quantity can then be calculated recursively from DLev(i [?] 1,l), DLev(i,l [?] 1), and DLev(i [?] 1,l [?] 1).</Paragraph>
      <Paragraph position="4"> Consequently, the Levenshtein distance can be calculated in time O(I *L).</Paragraph>
      <Paragraph position="5"> This algorithm can easily be extended for the calculation of dCD as follows: Again we define an auxiliary quantity D(i,l) as</Paragraph>
      <Paragraph position="7"> Insertions, deletions, and substitutions are handled the same way as in the Levenshtein algorithm. Now assume that an optimal dCD path has been found: Then, each long jump edge within 2C stands for cover and D for disjoint. We adopted this notion for our measures.</Paragraph>
      <Paragraph position="8">  Equation 1 this path will always start at a node with the lowest D value in its row3.</Paragraph>
      <Paragraph position="9"> Consequently, we use the following modification of the Levenshtein recursion:</Paragraph>
      <Paragraph position="11"> where d is the Kronecker delta. Figure 2 shows the possible predecessors of a grid point.</Paragraph>
      <Paragraph position="12"> The calculation of D(i,l) requires all values of D(iprime,l) to be known, even for iprime &gt; i. Thus, the calculation takes three steps for each l:  1. For each i, calculate the minimum of the first three terms.</Paragraph>
      <Paragraph position="13"> 2. Calculate min iprime D(iprime,l).</Paragraph>
      <Paragraph position="14"> 3. For each i, calculate the minimum according  to Equation 1.</Paragraph>
      <Paragraph position="15"> Each of these steps can be done in time O(I). Therefore, this algorithm calculates dCD in time O(I *L) and space O(I).</Paragraph>
    </Section>
    <Section position="3" start_page="243" end_page="243" type="sub_section">
      <SectionTitle>
3.3 Hypothesis Length and Penalties
</SectionTitle>
      <Paragraph position="0"> As the CDER does not penalize candidate translations which are too long, we studied the use of a length penalty or miscoverage penalty. This determines the difference in sentence lengths between candidate and reference. Two definitions of such a penalty have been studied for this work.</Paragraph>
      <Paragraph position="1"> 3Proof: Assume that the long jump edge goes from (iprime,l) to (i,l), and that there exists an iprimeprime such that D(iprimeprime,l) &lt; D(iprime,l). This means that the path from (0,0) to (iprimeprime,l) is less expensive than the path from (0,0) to (iprime,l). Thus, the path from (0,0) through (iprimeprime,l) to (i,l) is less expensive than the path through (iprime,l). This contradicts the assumption.</Paragraph>
      <Paragraph position="2"> Length Difference There is always an optimal dCD alignment path that does not contain any deletion edges, because each deletion can be replaced by a long jump, at the same costs. This is different for a dLJ path, because here each candidate word must be covered exactly once. Assume now that the candidate sentence consists of I words and the reference sentence consists of L words, with I &gt; L.</Paragraph>
      <Paragraph position="3"> Then, at most L candidate words can be covered by substitution or identity edges. Therefore, the remaining candidate words (at least I [?] L) must be covered by deletion edges. This means that at least I[?]L deletion edges will be found in any dLJ path, which leads to dLJ [?] dCD [?] I [?] L in this case.</Paragraph>
      <Paragraph position="4"> Consequently, the length difference between the two sentences gives us a useful miscoverage</Paragraph>
      <Paragraph position="6"> This penalty is independent of the dCD alignment path. Thus, an optimal dCD alignment path is optimal for dCD + lplen as well. Therefore the search algorithm in Section 3.2 will find the optimum for this sum.</Paragraph>
    </Section>
    <Section position="4" start_page="243" end_page="244" type="sub_section">
      <SectionTitle>
Absolute Miscoverage
</SectionTitle>
      <Paragraph position="0"> Let coverage(i) be the number of substitution, identity, and deletion edges that cover a candidate word ei in a dCD path. If we had a complete and disjoint alignment for the candidate word (i.e., a dLJ path), coverage(i) would be 1 for each i.</Paragraph>
      <Paragraph position="1"> In general this is not the case. We can use the absolute miscoverage as a penalty lpmisc for dCD:</Paragraph>
      <Paragraph position="3"> This miscoverage penalty is not independent of the alignment path. Consequently, the proposed search algorithm will not necessarily find an optimal solution for the sum of dCD and lpmisc.</Paragraph>
      <Paragraph position="4"> The idea behind the absolute miscoverage is that one can construct a valid - but not necessarily optimal - dLJ path from a given dCD path. This procedure is illustrated in Figure 3 and takes place in two steps: 1. For each block of over-covered candidate words, replace the aligned substitution and/or identity edges by insertion edges; move the long jump at the beginning of the block accordingly.</Paragraph>
      <Paragraph position="5"> 2. For each block of under-covered candidate words, add the corresponding number of  deletion edges; move the long jump at the beginning of the block accordingly.</Paragraph>
      <Paragraph position="6"> This also shows that there cannot be4 a polynomial time algorithm that calculates the minimum of dCD + lpmisc for arbitrary pairs of sentences, because this minimum is equal to dLJ. With these miscoverage penalties, inexpensive lower and upper bounds for dLJ can be calculated, because the following inequality holds:  (2) dCD + lplen [?] dLJ [?] dCD + lpmisc 4 Word-dependent Substitution Costs</Paragraph>
    </Section>
    <Section position="5" start_page="244" end_page="244" type="sub_section">
      <SectionTitle>
4.1 Idea
</SectionTitle>
      <Paragraph position="0"> All automatic error measures which are based on the edit distance (i.e. WER, PER, and CDER) apply fixed costs for the substitution of words. However, this is counter-intuitive, as replacing a word with another one which has a similar meaning will rarely change the meaning of a sentence significantly. On the other hand, replacing the same word with a completely different one probably will. Therefore, it seems advisable to make substitution costs dependent on the semantical and/or syntactical dissimilarity of the words.</Paragraph>
      <Paragraph position="1"> To avoid awkward case distinctions, we assume  that a substitution cost function cSUB for two words e, ~e meets the following requirements: 1. cSUB depends only on e and ~e.</Paragraph>
      <Paragraph position="2"> 2. cSUB is a metric; especially (a) The costs are zero if e = ~e, and larger than zero otherwise.</Paragraph>
      <Paragraph position="3"> (b) The triangular inequation holds: it is  always cheaper to replace e by ~e than to replace e by eprime and then eprime by ~e.  always equal or lower than those of deleting e and then inserting ~e. In short, cSUB [?] 2. Under these conditions the algorithms for WER and CDER can easily be modified to use word-dependent substitution costs. For example, the only necessary modification in the CDER algorithm in Equation 1 is to replace 1 [?] d(e, ~e) by cSUB(e, ~e).</Paragraph>
      <Paragraph position="4"> For the PER, it is no longer possible to use a linear time algorithm in the general case. Instead, a modification of the Hungarian algorithm (Knuth, 1993) can be used.</Paragraph>
      <Paragraph position="5"> The question is now how to define the word-dependent substitution costs. We have studied two different approaches.</Paragraph>
    </Section>
    <Section position="6" start_page="244" end_page="244" type="sub_section">
      <SectionTitle>
4.2 Character-based Levenshtein Distance
</SectionTitle>
      <Paragraph position="0"> A pragmatic approach is to compare the spelling of the words to be substituted with each other.</Paragraph>
      <Paragraph position="1"> The more similar the spelling is, the more similar we consider the words to be, and the lower we want the substitution costs between them. In English, this works well with similar tenses of the same verb, or with genitives or plurals of the same noun. Nevertheless, a similar spelling is no guarantee for a similar meaning, because prefixes such as &amp;quot;mis-&amp;quot;, &amp;quot;in-&amp;quot;, or &amp;quot;un-&amp;quot; can change the meaning of a word significantly.</Paragraph>
      <Paragraph position="2"> An obvious way of comparing the spelling is the Levenshtein distance. Here, words are compared on character level. To normalize this distance into a range from 0 (for identical words) to 1 (for completely different words), we divide the absolute distance by the length of the Levenshtein alignment path.</Paragraph>
    </Section>
    <Section position="7" start_page="244" end_page="245" type="sub_section">
      <SectionTitle>
4.3 Common Prefix Length
</SectionTitle>
      <Paragraph position="0"> Another character-based substitution cost function we studied is based on the common prefix length of both words. In English, different tenses of the same verb share the same prefix; which is usually the stem. The same holds for different cases, numbers and genders of most nouns and adjectives. However, it does not hold if verb prefixes are changed or removed. On the other hand, the common prefix length is sensitive to critical prefixes such as &amp;quot;mis-&amp;quot; for the same reason. Consequently, the common prefix length, normalized by the average length of both words, gives a reasonable measure for the similarity of two words. To transform the normalized common prefix length into costs, this fraction is then subtracted from 1.</Paragraph>
      <Paragraph position="1"> Table 1 gives an example of these two word-dependent substitution costs.</Paragraph>
    </Section>
    <Section position="8" start_page="245" end_page="245" type="sub_section">
      <SectionTitle>
4.4 Perspectives
</SectionTitle>
      <Paragraph position="0"> More sophisticated methods could be considered for word-dependent substitution costs as well.</Paragraph>
      <Paragraph position="1"> Examples of such methods are the introduction of information weights as in the NIST measure or the comparison of stems or synonyms, as in METEOR (Banerjee and Lavie, 2005).</Paragraph>
    </Section>
  </Section>
  <Section position="5" start_page="245" end_page="247" type="metho">
    <SectionTitle>
5 Experimental Results
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="245" end_page="245" type="sub_section">
      <SectionTitle>
5.1 Experimental Setting
</SectionTitle>
      <Paragraph position="0"> The different evaluation measures were assessed experimentally on data from the Chinese-English and the Arabic-English task of the NIST 2004 evaluation workshop (Przybocki, 2004). In this evaluation campaign, 4460 and 1735 candidate translations, respectively, generated by different research MT systems were evaluated by human judges with regard to fluency and adequacy.</Paragraph>
      <Paragraph position="1"> Four reference translations are provided for each candidate translation. Detailed corpus statistics are listed in Table 2.</Paragraph>
      <Paragraph position="2"> For the experiments in this study, the candidate translations from these tasks were evaluated using different automatic evaluation measures. Pearson's correlation coefficient r between automatic evaluation and the sum of fluency and adequacy was calculated. As it could be arguable whether Pearson's r is meaningful for categorical data like human MT evaluation, we have also calculated Kendall's correlation coefficient t. Because of the high number of samples (= sentences, 4460) versus the low number of categories (= outcomes of adequacy+fluency, 9), we calculated t separately for each source sentence. These experiments showed that Kendall's t reflects the same tendencies as Pearson's r regarding the ranking of the evaluation measures. But only the latter allows for an efficient calculation of confidence intervals. Consequently, figures of t are omitted in this paper.</Paragraph>
      <Paragraph position="3"> Due to the small number of samples for evaluation on system level (10 and 5, respectively), all correlation coefficients between automatic and human evaluation on system level are very close to 1. Therefore, they do not show any significant differences for the different evaluation  measures. Additional experiments on data from the NIST 2002 and 2003 workshops and from the IWSLT 2004 evaluation workshop confirm the findings from the NIST 2004 experiments; for the sake of clarity they are not included here. All correlation coefficients presented here were calculated for sentence level evaluation.</Paragraph>
      <Paragraph position="4"> For comparison with state-of-the-art evaluation measures, we have also calculated the correlation between human evaluation and WER and BLEU, which were both measures of choice in several international MT evaluation campaigns. Furthermore, we included TER (Snover et al., 2005) as a recent heuristic block movement measure in some of our experiments for comparison with our measure. As the BLEU score is unsuitable for sentence level evaluation in its original definition, BLEU-S smoothing as described by (Lin and Och, 2004) is performed. Additionally, we added sentence boundary symbols for BLEU, and a different reference length calculation scheme for TER, because these changes improved the correlation between human evaluation and the two automatic measures. Details on this have been described in (Leusch et al., 2005).</Paragraph>
    </Section>
    <Section position="2" start_page="245" end_page="246" type="sub_section">
      <SectionTitle>
5.2 CDER
</SectionTitle>
      <Paragraph position="0"> Table 3 presents the correlation of BLEU, WER, and CDER with human assessment. It can be seen that CDER shows better correlation than BLEU and WER on both corpora. On the Chinese-English task, the smoothed BLEU score has a higher sentence-level correlation than WER. However, this is not the case for the Arabic- null ation with BLEU, WER, and CDER (NIST 2004 evaluation; sentence level).</Paragraph>
      <Paragraph position="1">  English task. So none of these two measures is superior to the other one, but they are both outperformed by CDER.</Paragraph>
      <Paragraph position="2"> If the direction of CDER is reversed (i.e, the CD constraints are required for the candidate instead of the reference, such that the measure has precision instead of recall characteristics), the correlation with human evaluation is much lower. Additionally we studied the use of the maximum of the distances in both directions. This has a lower correlation than taking the original CDER, as Table 3 shows. Nevertheless, the maximum still performs slightly better than BLEU and WER.</Paragraph>
    </Section>
    <Section position="3" start_page="246" end_page="246" type="sub_section">
      <SectionTitle>
5.3 Hypothesis Length and Penalties
</SectionTitle>
      <Paragraph position="0"> The problem of how to avoid a preference of overly long candidate sentences by CDER remains unsolved, as can be found in Table 4: Each of the proposed penalties infers a significant decrease of correlation between the (extended) CDER and human evaluation. Future research will aim at finding a suitable length penalty. Especially if CDER is applied in system development, such a penalty will be needed, as preliminary optimization experiments have shown.</Paragraph>
    </Section>
    <Section position="4" start_page="246" end_page="246" type="sub_section">
      <SectionTitle>
5.4 Substitution Costs
</SectionTitle>
      <Paragraph position="0"> WER: the correlation with human judgment is increased by about 2% absolute on both language pairs. The Levenshtein-based substitution costs are better suited for WER than the scheme based on common prefix length. For CDER, there is hardly any difference between the two methods.</Paragraph>
      <Paragraph position="1"> Experiments on five more corpora did not give any significant evidence which of the two substitution costs correlates better with human evaluation. But as the prefix-based substitution costs improved correlation more consistently across all corpora, we employed this method in our next experiment.</Paragraph>
    </Section>
    <Section position="5" start_page="246" end_page="247" type="sub_section">
      <SectionTitle>
5.5 Combination of CDER and PER
</SectionTitle>
      <Paragraph position="0"> An interesting topic in MT evaluation research is the question whether a linear combination of two MT evaluation measures can improve the correlation between automatic and human evaluation. Particularly, we expected the combination of CDER and PER to have a significantly higher correlation with human evaluation than the measures alone. CDER (as opposed to PER) has the ability to reward correct local ordering, whereas PER (as opposed to CDER) penalizes overly long candidate sentences. The two measures were combined with linear interpolation. In order to determine the weights, we performed data analysis on seven different corpora. The result was consistent across all different data collections and language pairs: a linear combination of about 60% CDER and 40% PER has a significantly higher correlation with human evaluation than each of the measures alone. For the two corpora studied here, the results of the combination can be found in Table 6: On the Chinese-English task, there is an additional gain of more than 1% absolute in correlation over CDER alone. The combined error measure is the best method in both cases.</Paragraph>
      <Paragraph position="1"> The last line in Table 6 shows the 95%confidence interval for the correlation. We see that the new measure CDER, combined with PER, has a significantly higher correlation with human evaluation than the existing measures BLEU, TER,  and WER on both corpora.</Paragraph>
    </Section>
  </Section>
  <Section position="6" start_page="247" end_page="247" type="metho">
    <SectionTitle>
6 Conclusion and Outlook
</SectionTitle>
    <Paragraph position="0"> We presented CDER, a new automatic evaluation measure for MT, which is based on edit distance extended by block movements. CDER allows for reordering blocks of words at constant cost. Unlike previous block movement measures, CDER can be exactly calculated in quadratic time. Experimental evaluation on two different translation tasks shows a significantly improved correlation with human judgment in comparison with state-of-the-art measures such as BLEU.</Paragraph>
    <Paragraph position="1"> Additionally, we showed how word-dependent substitution costs can be applied to enhance the new error measure as well as existing approaches.</Paragraph>
    <Paragraph position="2"> The highest correlation with human assessment was achieved through linear interpolation of the new CDER with PER.</Paragraph>
    <Paragraph position="3"> Future work will aim at finding a suitable length penalty for CDER. In addition, more sophisticated definitions of the word-dependent substitution costs will be investigated. Furthermore, it will be interesting to see how this new error measure affects system development: We expect it to allow for a better sentence-wise error analysis.</Paragraph>
    <Paragraph position="4"> For system optimization, preliminary experiments have shown the need for a suitable length penalty.</Paragraph>
  </Section>
class="xml-element"></Paper>