<?xml version="1.0" standalone="yes"?>
<Paper uid="P91-1022">
  <Title>ALIGNING SENTENCES IN PARALLEL CORPORA</Title>
  <Section position="4" start_page="0" end_page="169" type="metho">
    <SectionTitle>
THE HANSARD CORPORA
</SectionTitle>
    <Paragraph position="0"> Brown et al. \[Brown et al., 1990\] describe the process by which the proceedings of the Canadian Parliament are recorded. In Canada, these proceedings are referred to as Hansards.</Paragraph>
    <Paragraph position="1"> Our Hansard corpora consist of the Hansards from 1973 through 1986. There are two files for each session of parliament: one English and one French. After converting the obscure text markup language of the raw data to TeX, we combined all of the English files into a single, large English corpus and all of the French files into a single, large French corpus. We then segmented the text of each corpus into tokens and combined the tokens into groups that we call sentences. Generally, these conform to the grade-school notion of a sentence: they begin with a capital letter, contain a verb, and end with some type of sentence-final punctuation.</Paragraph>
    <Paragraph position="2"> Occasionally, they fall short of this ideal and so each corpus contains a number of sentence fragments and other groupings of words that we nonetheless refer to as sentences. With this broad interpretation, the English corpus contains 85,016,286 tokens in 3,510,744 sentences, and the French corpus contains 97,857,452 tokens in 3,690,425 sentences. The average English sentence has 24.2 tokens, while the average French sentence is about 9.5% longer with 26.5 tokens.</Paragraph>
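    <Paragraph> The averages quoted above follow directly from the token and sentence counts; a minimal Python check (the variable names are ours, for illustration only):

      english_tokens, english_sentences = 85_016_286, 3_510_744
      french_tokens, french_sentences = 97_857_452, 3_690_425

      avg_en = english_tokens / english_sentences   # about 24.2 tokens
      avg_fr = french_tokens / french_sentences     # about 26.5 tokens
      # French sentences run about 9.5% longer on average.
      print(f"{avg_en:.1f} {avg_fr:.1f} {avg_fr / avg_en - 1:.1%}")
    </Paragraph>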
    <Paragraph position="3"> The left-hand side of Figure 1 shows the raw data for a portion of the English corpus, and the right-hand side shows the same portion after we converted it to TeX and divided it up into sentences. The sentence numbers do not advance regularly because we have edited the sample in order to display a variety of phenomena. In addition to a verbatim record of the proceedings and its translation, the Hansards include session numbers, names of speakers, time stamps, question numbers, and indications of the original language in which each speech was delivered. We retain this auxiliary information in the form of comments sprinkled throughout the text. Each comment has the form \SCM{} ... \ECM{} as shown on the right-hand side of Figure 1. In addition to these comments, which encode information explicitly present in the data, we inserted Paragraph comments as suggested by the space command, of which we see an example in the eighth line on the left-hand side of Figure 1.</Paragraph>
    <Paragraph position="4"> We mark the beginning of a parliamentary session with a Document comment as shown in Sentence 1 on the right-hand side of Figure 1. Usually, when a member addresses the parliament, his name is recorded and we encode it in an Author comment. We see an example of this in Sentence 4. If the president speaks, he is referred to in the English corpus as Mr. Speaker and in the French corpus as M. le Président. If several members speak at once, a shockingly regular occurrence, they are referred to as Some Hon. Members in the English and as Des Voix in the French. Times are recorded either as exact times on a 24-hour basis, as in Sentence 81, or as inexact times, of which there are two forms: Time = Later and Time = Recess. These are rendered in French as Time = Plus Tard and Time = Recess. Other types of comments that appear are shown in Table 1.</Paragraph>
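    <Paragraph> To make the comment format concrete, the following Python sketch shows one way such comments could be pulled out of a processed line; the regular expression and the sample line are our illustration, not the code used to build the corpora:

      import re

      # Comments have the form \SCM{} ... \ECM{}; capture the text between the markers.
      COMMENT = re.compile(r"\\SCM\{\}\s*(.*?)\s*\\ECM\{\}")

      line = r"\SCM{} Author = Mr. Donald MacInnis (Cape Breton-East Richmond) \ECM{}"
      match = COMMENT.search(line)
      if match:
          body = match.group(1)
          if "=" in body:
              key, value = (part.strip() for part in body.split("=", 1))
              print(key, "->", value)       # Author -> Mr. Donald MacInnis (Cape Breton-East Richmond)
          else:
              print("flag comment:", body)  # e.g. the bare Paragraph comment
    </Paragraph>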
  </Section>
  <Section position="5" start_page="169" end_page="171" type="metho">
    <SectionTitle>
ALIGNING ANCHOR POINTS
</SectionTitle>
    <Paragraph position="0"> After examining the Hansard corpora, we realized that the comments laced throughout would serve as useful anchor points in any alignment process. We divide the comments into major and minor anchors as follows. The comments Author = Mr. Speaker, Author = M. le Président, Author = Some Hon. Members, and Author = Des Voix are called minor anchors. All other comments are called major anchors, with the exception of the Paragraph comment, which is not treated as an anchor at all. The minor anchors are much more common than any particular major anchor, making an alignment based on them less robust against deletions than one based on the major anchors. Therefore, we have carried out the alignment of anchor points in two passes, first aligning the major anchors and then the minor anchors.</Paragraph>
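    <Paragraph> In code, this two-way split can be sketched as follows; the comment strings are the ones named above, and the function itself is our illustration:

      MINOR_ANCHORS = {
          "Author = Mr. Speaker",
          "Author = M. le Président",
          "Author = Some Hon. Members",
          "Author = Des Voix",
      }

      def anchor_type(comment: str) -> str | None:
          """Classify a comment as a minor anchor, a major anchor, or no anchor."""
          if comment == "Paragraph":
              return None                   # Paragraph comments are not anchors
          if comment in MINOR_ANCHORS:
              return "minor"
          return "major"                    # e.g. Document, Time, Question comments

      assert anchor_type("Time = Later") == "major"
      assert anchor_type("Author = Des Voix") == "minor"
      assert anchor_type("Paragraph") is None
    </Paragraph>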
    <Paragraph position="1"> Usually, the major anchors appear in both languages. Sometimes, however, through inattention on the part of the translator or other misadventure, the name of a speaker may be garbled or a comment may be omitted. In the first alignment pass, we assign to alignments a cost that favors exact matches and penalizes omissions or garbled matches.</Paragraph>
    <Paragraph position="2"> [Figure 1: The left-hand side shows the raw Hansard data, with markup such as *bo (bold) and *ro (roman) and speaker turns for Mr. Donald MacInnis, Mr. Speaker, Some hon. Members, Mr. Mazankowski, and Mr. Cossitt; the right-hand side shows the same material after conversion, with numbered sentences and \SCM{} ... \ECM{} comments of the types Document, Paragraph, Author, Source, Question, and Time.]</Paragraph>
    <Paragraph position="3"> Thus, for example, we assign a cost of 0 to the pair Time = Later and Time = Plus Tard, but a cost of 10 to the pair Time = Later and Author = Mr. Bateman. We set the cost of a deletion at 5. For two names, we set the cost by counting the number of insertions, deletions, and substitutions necessary to transform one name, letter by letter, into the other. This value is then reduced to the range 0 to 10. Given the costs described above, it is a standard problem in dynamic programming to find the alignment of the major anchors in the two corpora with the least total cost \[Bellman, 1957\]. In theory, the time and space required to find this alignment grow as the product of the lengths of the two sequences to be aligned. In practice, however, by using thresholds and the partial traceback technique described by Brown, Spohrer, Hochschild, and Baker \[Brown et al., 1982\], the time required can be made linear in the length of the sequences, and the space can be made constant. Even so, the computational demand is severe since, in places, the two corpora are out of alignment by as many as 90,000 sentences owing to mislabelled or missing files. This first pass renders the data as a sequence of sections between aligned major anchors. In the second pass, we accept or reject each section in turn according to the population of minor anchors that it contains. Specifically, we accept a section provided that, within the section, both corpora contain the same number of minor anchors in the same order. Otherwise, we reject the section. Altogether, we reject about 10% of the data in each corpus. The minor anchors serve to divide the remaining sections into subsections that range in size from one sentence to several thousand sentences and average about ten sentences.</Paragraph>
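    <Paragraph> The first-pass cost and the dynamic-programming recurrence can be sketched as follows. The letter-by-letter distance matches the description above; the exact scaling into the range 0 to 10 and the table of cross-language equivalents are our assumptions:

      # Cross-language pairs that should match at cost 0 (illustrative, not exhaustive).
      EQUIVALENT = {("Time = Later", "Time = Plus Tard")}

      def edit_distance(a: str, b: str) -> int:
          """Insertions, deletions, and substitutions, letter by letter."""
          d = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              prev, d[0] = d[0], i
              for j, cb in enumerate(b, 1):
                  prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
          return d[len(b)]

      def match_cost(x: str, y: str) -> float:
          if x == y or (x, y) in EQUIVALENT or (y, x) in EQUIVALENT:
              return 0.0
          if x.startswith("Author =") and y.startswith("Author ="):
              # Reduce the raw edit distance between names to the range 0 to 10.
              return min(10.0, 10.0 * edit_distance(x, y) / max(len(x), len(y)))
          return 10.0   # comments of different kinds

      DELETION = 5.0

      def align_cost(xs: list[str], ys: list[str]) -> float:
          """Least total cost of aligning two anchor sequences [Bellman, 1957]."""
          c = [[0.0] * (len(ys) + 1) for _ in range(len(xs) + 1)]
          for i in range(len(xs) + 1):
              for j in range(len(ys) + 1):
                  if i == j == 0:
                      continue
                  best = []
                  if i > 0:
                      best.append(c[i - 1][j] + DELETION)
                  if j > 0:
                      best.append(c[i][j - 1] + DELETION)
                  if i > 0 and j > 0:
                      best.append(c[i - 1][j - 1] + match_cost(xs[i - 1], ys[j - 1]))
                  c[i][j] = min(best)
          return c[-1][-1]

    This quadratic table is the textbook form; the thresholds and partial traceback of \[Brown et al., 1982\] are what reduce the time to linear and the space to constant.</Paragraph>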
  </Section>
  <Section position="6" start_page="171" end_page="174" type="metho">
    <SectionTitle>
ALIGNING SENTENCES AND
PARAGRAPH BOUNDARIES
</SectionTitle>
    <Paragraph position="0"> We turn now to the question of aligning the individual sentences in a subsection between minor anchors. Since the number of sentences in the French corpus differs from the number in the English corpus, it is clear that they cannot be in one-to-one correspondence throughout. Visual inspection of the two corpora quickly reveals that although roughly 90% of the English sentences correspond to single French sentences, there are many instances where a single sentence in one corpus is represented by two consecutive sentences in the other. Rarer, but still present, are examples of sentences that appear in one corpus but leave no trace in the other. If one is moderately well acquainted with both English and French, it is a simple matter to decide how the sentences should be aligned. Unfortunately, the sizes of our corpora make it impractical for us to obtain a complete set of alignments by hand. Rather, we must employ some automatic scheme.</Paragraph>
    <Paragraph position="1"> It is not surprising, and further inspection verifies, that the numbers of tokens in sentences that are translations of one another are correlated. We looked, therefore, at the possibility of obtaining alignments solely on the basis of sentence lengths in tokens. From this point of view, each corpus is simply a sequence of sentence lengths punctuated by occasional paragraph markers. Figure 2 shows the initial portion of such a pair of corpora. We have circled groups of sentence lengths to show the correct alignment. We call each of the groupings a bead. In this example, we have an ef-bead followed by an eff-bead followed by an e-bead followed by a ¶e¶f-bead. An alignment, then, is simply a sequence of beads that accounts for the observed sequences of sentence lengths and paragraph markers. We imagine the sentences in a subsection to have been generated by a pair of random processes, the first producing a sequence of beads and the second choosing the lengths of the sentences in each bead.</Paragraph>
    <Paragraph position="2"> Figure 3 shows the two-state Markov model that we use for generating beads. We assume that a single sentence in one language lines up with zero, one, or two sentences in the other and that paragraph markers may be deleted.</Paragraph>
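    <Paragraph> As a simplified sketch of the bead production step, one can sample bead types from a fixed distribution. The true model is the two-state Markov model of Figure 3; the eight-bead inventory below follows from the assumptions just stated under our reading, and the probabilities are placeholders chosen only to be consistent with the 91% one-to-one figure reported later, not the values of Table 3:

      import random

      # Eight bead types: e, f, ef, eef, eff, and the paragraph beads.
      BEAD_PROBS = {
          "e": 0.007, "f": 0.007, "ef": 0.910, "eef": 0.027, "eff": 0.027,
          "¶e": 0.005, "¶f": 0.005, "¶e¶f": 0.012,
      }
      assert abs(sum(BEAD_PROBS.values()) - 1.0) < 1e-9

      def sample_bead() -> str:
          beads = list(BEAD_PROBS)
          return random.choices(beads, weights=[BEAD_PROBS[b] for b in beads])[0]
    </Paragraph>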
    <Paragraph position="3"> Thus, we allow any of the eight beads shown in Table 2. Given a bead, we determine the lengths of the sentences it contains as follows. We assume the probability of an English sentence of length \ell_e given an e-bead to be the same as the probability of an English sentence of length \ell_e in the text as a whole. We denote this probability by \Pr(\ell_e). Similarly, we assume the probability of a French sentence of length \ell_f given an f-bead to be \Pr(\ell_f). For an ef-bead, we assume that the English sentence has length \ell_e with probability \Pr(\ell_e) and that the log of the ratio of the length of the French sentence to the length of the English sentence is normally distributed with mean \mu and variance \sigma^2. Thus, if r = \log(\ell_f/\ell_e), we assume that

      \Pr(\ell_f \mid \ell_e) = \alpha \exp\!\left[ -\frac{(r - \mu)^2}{2\sigma^2} \right] \quad (1)

with \alpha chosen so that the sum of \Pr(\ell_f \mid \ell_e) over positive values of \ell_f is equal to unity. For an eef-bead, we assume that each of the English sentences is drawn independently from the distribution \Pr(\ell_e) and that the log of the ratio of the length of the French sentence to the sum of the lengths of the English sentences is normally distributed with the same mean and variance as for an ef-bead. Finally, for an eff-bead, we assume that the length of the English sentence is drawn from the distribution \Pr(\ell_e) and that the log of the ratio of the sum of the lengths of the French sentences to the length of the English sentence is normally distributed as before. Then, given the sum of the lengths of the French sentences, we assume that the probability of a particular pair of lengths, \ell_{f_1} and \ell_{f_2}, is proportional to \Pr(\ell_{f_1})\Pr(\ell_{f_2}).</Paragraph>
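    <Paragraph> Equation (1) can be rendered numerically as follows; the parameter values are placeholders chosen so that exp(\mu + \sigma^2/2) is about 1.098, as reported below, and the truncation of the normalizing sum at max_len is ours:

      import math

      def french_length_given_english(l_e: int, mu: float = 0.072,
                                      sigma2: float = 0.043,
                                      max_len: int = 500) -> dict[int, float]:
          """Pr(l_f | l_e) for an ef-bead: r = log(l_f / l_e) is normal(mu, sigma2),
          normalized so that the probabilities over positive l_f sum to one."""
          weights = {l_f: math.exp(-(math.log(l_f / l_e) - mu) ** 2 / (2 * sigma2))
                     for l_f in range(1, max_len + 1)}
          alpha = 1.0 / sum(weights.values())
          return {l_f: alpha * w for l_f, w in weights.items()}

      dist = french_length_given_english(24)
      print(max(dist, key=dist.get))   # most probable French length for l_e = 24
    </Paragraph>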
    <Paragraph position="5"> Together, these two random processes form a hidden Markov model \[Baum, 1972\] for the generation of aligned pairs of corpora. We determined the distributions, \Pr(\ell_e) and \Pr(\ell_f), from the relative frequencies of various sentence lengths in our data. Figure 4 shows for each language a histogram of these distributions for sentences with fewer than 81 tokens. Except for lengths 2 and 4, which include a large number of formulaic sentences in both the French and the English, the distributions are very smooth.</Paragraph>
    <Paragraph position="6"> For short sentences, the relative frequency is a reliable estimate of the corresponding probability since for both French and English we have more than 100 sentences of each length less than 81. We estimated the probabilities of greater lengths by fitting the observed frequencies of longer sentences to the tail of a Poisson distribution.</Paragraph>
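    <Paragraph> Under our reading, that estimation scheme can be sketched as follows; fitting the Poisson mean by matching the average length of the long sentences is our assumption, since the fitting procedure is not spelled out here:

      import math
      from collections import Counter

      def length_distribution(lengths: list[int], cutoff: int = 81,
                              max_len: int = 500) -> dict[int, float]:
          counts = Counter(lengths)
          n = len(lengths)
          # Relative frequencies are reliable below the cutoff.
          probs = {l: counts[l] / n for l in range(1, cutoff)}
          # Beyond the cutoff, use a Poisson tail scaled to the remaining mass.
          long_lengths = [l for l in lengths if l >= cutoff]
          if long_lengths:
              tail_mass = len(long_lengths) / n
              lam = sum(long_lengths) / len(long_lengths)
              # Poisson probabilities computed in log space to avoid overflow.
              tail = {l: math.exp(l * math.log(lam) - lam - math.lgamma(l + 1))
                      for l in range(cutoff, max_len + 1)}
              z = sum(tail.values())
              for l, p in tail.items():
                  probs[l] = tail_mass * p / z
          return probs
    </Paragraph>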
    <Paragraph position="7"> We determined all of the other parameters by applying the EM algorithm to a large sample of text \[Baum, 1972, Dempster et al., 1977\]. The resulting values are shown in Table 3.</Paragraph>
    <Paragraph position="8"> From these parameters, we can see that 91% of the English sentences and 98% of the English paragraph markers line up one-to-one with their French counterparts. A random variable z, the log of which is normally distributed with mean \mu and variance \sigma^2, has mean value \exp(\mu + \sigma^2/2). We can also see, therefore, that the total length of the French text in an ef-, eef-, or eff-bead should be about 9.8% greater on average than the total length of the corresponding English text. Since most sentences belong to ef-beads, this is close to the value of 9.5% given in Section 2 for the amount by which the length of the average French sentence exceeds that of the average English sentence.</Paragraph>
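    <Paragraph> The quoted mean value follows in one step from the moment generating function of the normal distribution; in LaTeX:

      % If r = \log z is normal with mean \mu and variance \sigma^2, then
      % E[e^{tr}] = e^{\mu t + \sigma^2 t^2/2}; taking t = 1 gives
      \mathbb{E}[z] = \mathbb{E}\!\left[ e^{r} \right]
                    = \exp\!\left( \mu + \tfrac{\sigma^2}{2} \right) \approx 1.098 .
    </Paragraph>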
    <Paragraph position="9"> We can compute from the parameters in Table 3 that the entropy of the bead production process is 1.26 bits per sentence. Using the parameters \mu and \sigma^2, we can combine the observed distribution of English sentence lengths shown in Figure 4 with the conditional distribution of French sentence lengths given English sentence lengths in Equation (1) to obtain the joint distribution of French and English sentence lengths in ef-, eef-, and eff-beads. From this joint distribution, we can compute that the mutual information between French and English sentence lengths in these beads is 1.85 bits per sentence. We see, therefore, that even in the absence of the anchor points produced by the first two passes, the correlation in sentence lengths is strong enough to allow alignment with an error rate that is asymptotically less than 100%. Heartening though such a result may be to the theoretician, this is a sufficiently coarse bound on the error rate to warrant further study. Accordingly, we wrote a program to simulate the alignment process that we had in mind. Using \Pr(\ell_e), \Pr(\ell_f), and the parameters from Table 3, we generated an artificial pair of aligned corpora. We then determined the most probable alignment for the data. We recorded the fraction of ef-beads in the most probable alignment that did not correspond to ef-beads in the true alignment as the error rate for the process. We repeated this process many thousands of times and found that we could expect an error rate of about 0.9% given the frequency of anchor points from the first two passes.</Paragraph>
    <Paragraph position="10"> By varying the parameters of the hidden Markov model, we explored the effect of anchor points and paragraph markers on the accuracy of alignment. We found that with paragraph markers but no anchor points, we could expect an error rate of 2.0%; with anchor points but no paragraph markers, we could expect an error rate of 2.3%; and with neither anchor points nor paragraph markers, we could expect an error rate of 3.2%. Thus, while anchor points and paragraph markers are important, alignment is still feasible without them. This is promising since it suggests that one may be able to apply the same technique to data where frequent anchor points are not available.</Paragraph>
  </Section>
</Paper>