<?xml version="1.0" standalone="yes"?> <Paper uid="W06-3111"> <Title>Partitioning Parallel Documents Using Binary Segmentation</Title> <Section position="3" start_page="78" end_page="80" type="metho"> <SectionTitle> 3 Binary Segmentation Method </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="78" end_page="78" type="sub_section"> <SectionTitle> 3.1 Approach </SectionTitle> <Paragraph position="0"> Here a document or sentence pair (f_1^J, e_1^I) is represented as a matrix. Every element in the matrix contains a lexicon probability p(f_j|e_i), which is trained on the original parallel corpora. Each position divides the matrix into four parts, as shown in Figure 1: the bottom left (C), the upper left (A), the bottom right (D) and the upper right (B). We use m to denote the alignment direction: m = 1 means that the alignment is monotone, i.e. the bottom left part is connected with the upper right part, and m = 0 means that the alignment is non-monotone, i.e. the upper left part is connected with the bottom right part, as shown in Figure 1.</Paragraph> </Section> <Section position="2" start_page="78" end_page="79" type="sub_section"> <SectionTitle> 3.2 Log-Linear Model </SectionTitle> <Paragraph position="0"> We use a log-linear interpolation to combine different models: the IBM-1, the inverse IBM-1, the anchor words model, as well as the IBM-4. K denotes the total number of models.</Paragraph> <Paragraph position="1"> We go through all positions in the bilingual sentences and find the best position for segmenting the sentence:</Paragraph> <Paragraph position="3"> where i ∈ [1, I-1] and j ∈ [1, J-1] are positions in the source and target sentences respectively.</Paragraph> <Paragraph position="4"> The feature functions are described in the following sections. In most cases, the sentence pairs are quite long, and even after one segmentation we may still have long sub-segments. 
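This position search can be sketched as follows. The sketch is a simplified illustration, not the paper's implementation: it scores splits with a single IBM-1 feature in one direction, using a hypothetical lexicon dictionary `lex[(f, e)]`; the full model also interpolates the inverse IBM-1, anchor words and IBM-4 features with scaling factors.

```python
import math

def ibm1_score(lex, f_seg, e_seg):
    """IBM-1 log-probability of target segment f_seg given source
    segment e_seg, using lexicon probabilities lex[(f, e)].
    Unseen word pairs get a small smoothing value."""
    total = 0.0
    for f in f_seg:
        s = sum(lex.get((f, e), 1e-10) for e in e_seg)
        total += math.log(s / len(e_seg))
    return total

def best_split(lex, f_words, e_words):
    """Try every position pair (j, i) and direction m; return the split
    whose two sub-matrices have the highest combined score.
    m = 1: monotone (bottom left with upper right),
    m = 0: non-monotone (upper left with bottom right)."""
    J, I = len(f_words), len(e_words)
    best = None
    for j in range(1, J):
        for i in range(1, I):
            mono = (ibm1_score(lex, f_words[:j], e_words[:i]) +
                    ibm1_score(lex, f_words[j:], e_words[i:]))
            non = (ibm1_score(lex, f_words[:j], e_words[i:]) +
                   ibm1_score(lex, f_words[j:], e_words[:i]))
            for m, score in ((1, mono), (0, non)):
                if best is None or score > best[0]:
                    best = (score, j, i, m)
    return best  # (score, j, i, m)
```

Applying this search recursively to each resulting sub-segment pair, until all segments are shorter than a defined value, yields the partition of the sentence pair.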
Therefore, we separate the sub-segment pairs recursively until the length of each new segment is less than a defined value.</Paragraph> </Section> <Section position="3" start_page="79" end_page="79" type="sub_section"> <SectionTitle> 3.3 Normalized IBM-1 </SectionTitle> <Paragraph position="0"> The function in Equation 2 can be normalized by the source sentence length with a weighting β, as described in (Xu et al., 2005). The monotone alignment is calculated as</Paragraph> <Paragraph position="2"> and the non-monotone alignment is formulated in the same way.</Paragraph> <Paragraph position="3"> We also use the inverse IBM-1 as a feature; by exchanging the roles of e_1^I and f_1^J, its monotone alignment is calculated as:</Paragraph> <Paragraph position="5"/> </Section> <Section position="4" start_page="79" end_page="79" type="sub_section"> <SectionTitle> 3.4 Anchor Words </SectionTitle> <Paragraph position="0"> In the task of extracting parallel sentences from the paragraph-aligned corpus, selecting some anchor words as preferred segmentation positions can effectively avoid the extraction of incomplete segment pairs. Therefore, we use an anchor words model to prefer segmentation at punctuation marks, where the source and target words are identical:</Paragraph> <Paragraph position="2"> A is a user-defined anchor word list; here we use A={.,&quot;?;}. If the corresponding model scaling factor λ3 is assigned a high value, the segmentation positions are mostly after anchor words.</Paragraph> </Section> <Section position="5" start_page="79" end_page="79" type="sub_section"> <SectionTitle> 3.5 IBM-4 Word Alignment </SectionTitle> <Paragraph position="0"> If we already have the IBM-4 Viterbi word alignments for the parallel sentences and need to retrain the system, for example to optimize the training parameters, we can include the Viterbi word alignments trained on the original corpora into the binary segmentation. 
In the monotone case, the model is represented as</Paragraph> <Paragraph position="2"> where N(f_1^j, e_1^i) denotes the number of alignment links inside the sub-matrix between positions (1,1) and (j,i). In the non-monotone case the model is formulated in the same way.</Paragraph> </Section> <Section position="6" start_page="79" end_page="80" type="sub_section"> <SectionTitle> 3.6 Word Alignment Concatenation </SectionTitle> <Paragraph position="0"> As described in Section 2, our translation is based on phrases; that means for an input sentence we extract all phrases matched in the training corpus and translate with these phrase pairs. Although the aim of segmentation is to split parallel text into translated segment pairs, the segmentation is still not perfect. During sentence segmentation we might separate a phrase into two segments, so that the whole phrase pair cannot be extracted.</Paragraph> <Paragraph position="1"> To avoid this, we concatenate the word alignments trained with the segmentations of one sentence pair. During segmentation, the position of each segmentation point in the sentence is memorized. After training the word alignment model with the segmented sentence pairs, the word alignments are concatenated again according to the positions of their segments in the sentences. 
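The concatenation step can be sketched as follows. This is a simplified illustration under assumed data structures, not the paper's code: each segment carries the memorized source and target start offsets of its segmentation point, and its word alignment is a set of (j, i) links local to the segment.

```python
def concatenate_alignments(segments):
    """Concatenate the word alignments of the segments of one sentence
    pair. segments: list of (src_offset, tgt_offset, links), where links
    is a set of (j, i) alignment links local to that segment and the
    offsets are the segment start positions memorized during segmentation.
    Returns the alignment of the original, unsegmented sentence pair."""
    full = set()
    for src_off, tgt_off, links in segments:
        for j, i in links:
            # shift each local link back to its position in the full sentence
            full.add((j + src_off, i + tgt_off))
    return full
```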
The original sentence pairs and the concatenated alignments are then used for the phrase extraction.</Paragraph> </Section> <Section position="7" start_page="80" end_page="80" type="sub_section"> <SectionTitle> 4.1 Bilingual Sentence Extraction Methods </SectionTitle> <Paragraph position="0"> In this section, we describe the different methods to extract bilingual sentence pairs from the document-aligned corpus.</Paragraph> <Paragraph position="1"> Given each document pair, we assume that the paragraphs are aligned one-to-one and monotonically if both the source and target language documents contain the same number of paragraphs; otherwise the paragraphs are aligned with the Champollion tool.</Paragraph> <Paragraph position="2"> Starting from the parallel paragraphs, we extract the sentences using three methods:</Paragraph> </Section> </Section> <Section position="4" start_page="80" end_page="80" type="metho"> <SectionTitle> 1. Binary segmentation </SectionTitle> <Paragraph position="0"> The segmentation method described in Section 3 is applied by treating the paragraph pairs as long sentence pairs. We can use the anchor words model described in Section 3.4 to prefer splitting at punctuation marks.</Paragraph> <Paragraph position="1"> The lexicon parameters p(f|e) in Equation 2 are estimated as follows: First, the sentences are aligned roughly using the dynamic programming algorithm. Training on these aligned sentences, we get the initial lexicon parameters.</Paragraph> <Paragraph position="2"> Then the binary segmentation algorithm is applied to extract the sentences again.</Paragraph> </Section> <Section position="5" start_page="80" end_page="80" type="metho"> <SectionTitle> 2. 
Champollion </SectionTitle> <Paragraph position="0"> After a paragraph is divided into sentences at punctuation marks, the Champollion tool (Ma, 2006) is used, which applies dynamic programming for the sentence alignment.</Paragraph> </Section> <Section position="6" start_page="80" end_page="81" type="metho"> <SectionTitle> 3. Combination </SectionTitle> <Paragraph position="0"> The bilingual corpora produced by the binary segmentation and Champollion methods are concatenated and are used in the training of the translation system.</Paragraph> <Section position="1" start_page="80" end_page="81" type="sub_section"> <SectionTitle> 4.2 Translation Tasks </SectionTitle> <Paragraph position="0"> We will present the translation results on two Chinese-English tasks.</Paragraph> <Paragraph position="1"> 1. On the large data track NIST task (NIST, 2005), we will show improvements using the refined binary segmentation method.</Paragraph> </Section> </Section> <Section position="7" start_page="81" end_page="82" type="metho"> <SectionTitle> 2. On the FBIS corpus, we will compare the different </SectionTitle> <Paragraph position="0"> sentence extraction methods described in Section 4.1 with respect to translation performance. We do not apply the extraction methods on the whole NIST corpora, because some corpora provided by the LDC (LDC, 2005) are sentence-aligned but not document-aligned.</Paragraph> <Section position="1" start_page="81" end_page="81" type="sub_section"> <SectionTitle> 4.3 Corpus Statistics </SectionTitle> <Paragraph position="0"> The training corpora used in the NIST task are a set of individual corpora including the FBIS corpus. These corpora are provided by the Linguistic Data Consortium (LDC, 2005); the domain is news articles.</Paragraph> <Paragraph position="1"> The translation experiments are carried out on the NIST 2002 evaluation set.</Paragraph> <Paragraph position="2"> As shown in Table 1, there are 8.6 million sentence pairs in the original corpora of the NIST task. 
The average sentence length is about 25. After segmentation, there are twice as many sentence pairs, i.e. 17.9 million, and the average sentence length is around 12. Due to a limitation of GIZA++, sentences consisting of more than one hundred words are filtered out. Segmenting long sentences circumvents this restriction and allows us to include more data. Here we were able to add 8% more Chinese and 8.2% more English running words to the training data. The training time is also reduced.</Paragraph> <Paragraph position="3"> Table 2 presents statistics of the FBIS data. After the paragraph alignment described in Section 4.1, we have nearly 81 thousand paragraphs, 8.6 million Chinese and 10.1 million English running words.</Paragraph> <Paragraph position="4"> One of the advantages of the binary segmentation is that we do not lose words during the bilingual sentence extraction. However, we produce sentence pairs with very different lengths. Using Champollion, we lose 10.8% of the Chinese and 3.1% of the English words.</Paragraph> </Section> <Section position="2" start_page="81" end_page="81" type="sub_section"> <SectionTitle> 4.4 Segmentation Parameters </SectionTitle> <Paragraph position="0"> We did not optimize the log-linear model scaling factors for the binary segmentation but used the following fixed values: λ1 = λ2 = 0.5 for the IBM-1 models in both directions; λ3 = 10^8, if the anchor words model is used; λ4 = 30, if the IBM-4 model is used. 
The maximum sentence length is 25.</Paragraph> </Section> <Section position="3" start_page="81" end_page="82" type="sub_section"> <SectionTitle> 4.5 Evaluation Criteria </SectionTitle> <Paragraph position="0"> We use four different criteria to evaluate the translation results automatically: * WER (word error rate): The WER is computed as the minimum number of substitution, insertion and deletion operations that have to be performed to convert the generated sentence into the reference sentence, divided by the reference sentence length.</Paragraph> <Paragraph position="1"> * PER (position-independent word error rate): A shortcoming of the WER is that it requires a perfect word order. The word order of an acceptable sentence can differ from that of the target sentence, so that the WER measure alone could be misleading. The PER compares the words in the two sentences, ignoring the word order.</Paragraph> <Paragraph position="2"> * BLEU score: This score measures the precision of unigrams, bigrams, trigrams and fourgrams, with a penalty for too short sentences (Papineni et al., 2002). * NIST score: This score is similar to BLEU, but it uses an arithmetic average of N-gram counts rather than a geometric average, and it weights more heavily those N-grams that are more informative (Doddington, 2002).</Paragraph> <Paragraph position="3"> The BLEU and NIST scores measure accuracy, i.e. larger scores are better. In our evaluation, the scores are measured as case-insensitive and with respect to multiple references.</Paragraph> </Section> <Section position="4" start_page="82" end_page="82" type="sub_section"> <SectionTitle> 4.6 Translation Results </SectionTitle> <Paragraph position="0"> For the segmentation of long sentences into short segments, we performed the experiments on the NIST task. 
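The two error-rate criteria from Section 4.5 can be sketched compactly. This is an illustrative implementation, not the evaluation code used in the paper; in particular, several PER formulations exist, and the bag-of-words variant below is one common choice.

```python
from collections import Counter

def wer(hyp, ref):
    """Word error rate: minimum number of substitutions, insertions and
    deletions turning hyp into ref, divided by the reference length.
    Computed with the standard edit-distance DP over a rolling row."""
    d = list(range(len(ref) + 1))  # distances for the empty hypothesis
    for j, h in enumerate(hyp, 1):
        prev, d[0] = d[0], j
        for i, r in enumerate(ref, 1):
            cur = min(d[i] + 1,         # delete h
                      d[i - 1] + 1,     # insert r
                      prev + (h != r))  # substitute or match
            prev, d[i] = d[i], cur
    return d[len(ref)] / len(ref)

def per(hyp, ref):
    """Position-independent error rate: compares the two sentences as
    bags of words, ignoring word order entirely."""
    matches = sum((Counter(hyp) & Counter(ref)).values())
    return (max(len(hyp), len(ref)) - matches) / len(ref)
```

For a hypothesis that is a pure reordering of the reference, PER is 0 while WER can be large, which is exactly the shortcoming of WER the section describes.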
In both the baseline and the segmentation systems, we obtain 4.7 million bilingual phrases during translation. The method of alignment concatenation increases the number of extracted bilingual phrase pairs from 4.7 million to 4.9 million, and the BLEU score is improved by 0.1%. By including the IBM-4 Viterbi word alignment, the NIST score is improved. The training of the baseline system requires 5.9 days; after the sentence segmentation it requires only 1.5 days. Moreover, the segmentation allows the inclusion of long sentences that are filtered out in the baseline system. Using the added data, the translation performance is enhanced by 0.3% in the BLEU score. Because of the long translation period, the translation parameters were only optimized on the baseline system with respect to the BLEU score; we could expect a further improvement if the parameters were also optimized on the segmentation system.</Paragraph> <Paragraph position="1"> Our major objective here is to introduce another approach to parallel sentence extraction: recursive binary segmentation of the bilingual texts. We use the paragraph-aligned corpus as a starting point. Table 4 presents the translation results on the training corpora generated by the different methods described in Section 4.1. The translation parameters are optimized with respect to the BLEU score.</Paragraph> <Paragraph position="2"> We observe that the binary segmentation methods are comparable to Champollion, and the segmentation with anchors outperforms the one without anchors. By combining the methods of Champollion and the binary segmentation with anchors, the BLEU score is improved by 0.4% absolute.</Paragraph> <Paragraph position="3"> We optimized the weight for the binary segmentation method; the weights of the two methods sum to one. As shown in Figure 2, using one of the methods alone does not produce the best result. 
The maximum BLEU score is attained when both methods are combined with equal weights.</Paragraph> </Section> </Section> </Paper>