<?xml version="1.0" standalone="yes"?>
<Paper uid="P98-2191">
  <Title>Maximum Entropy Model Learning of the Translation Rules</Title>
  <Section position="3" start_page="0" end_page="0" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> Statistical natural language modeling can be viewed as estimating a probability distribution X × Y → [0, 1] from training data ⟨x_1, y_1⟩, ..., ⟨x_T, y_T⟩ ∈ X × Y observed in corpora. For this problem, Baum (1972) proposed the EM algorithm, which became the basis of the Forward-Backward algorithm for the hidden Markov model (HMM) and the Inside-Outside algorithm (Lafferty, 1993) for the probabilistic context-free grammar (PCFG). However, these methods suffer from high optimization costs due to their large number of parameters. Estimating a natural language model by the maximum entropy (ME) method (Pietra et al., 1995; Berger et al., 1996) has therefore attracted attention recently.</Paragraph>
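As a minimal illustration of the ME estimation mentioned above, the following sketch fits a conditional model p(y|x) ∝ exp(Σ_i w_i f_i(x, y)) by gradient ascent on the conditional log-likelihood. The toy translation-pair data and feature functions are purely illustrative assumptions, not taken from the paper.

```python
import math

def train_maxent(data, features, lr=0.5, iters=200):
    """Fit weights for p(y|x) proportional to exp(sum_i w_i * f_i(x, y))
    by gradient ascent on the conditional log-likelihood (a simple
    alternative to iterative scaling; illustrative only)."""
    ys = sorted({y for _, y in data})
    w = [0.0] * len(features)
    for _ in range(iters):
        grad = [0.0] * len(features)
        for x, y in data:
            # unnormalized model scores for every candidate output yy
            scores = [math.exp(sum(w[i] * f(x, yy) for i, f in enumerate(features)))
                      for yy in ys]
            z = sum(scores)
            for i, f in enumerate(features):
                # gradient = observed feature value - model expectation
                grad[i] += f(x, y) - sum(s / z * f(x, yy)
                                         for s, yy in zip(scores, ys))
        w = [wi + lr * g / len(data) for wi, g in zip(w, grad)]
    return w, ys

def predict(w, ys, features, x):
    """Return the highest-scoring output y for input x."""
    scores = {yy: sum(w[i] * f(x, yy) for i, f in enumerate(features))
              for yy in ys}
    return max(scores, key=scores.get)

# hypothetical word-translation data: source word -> target word
data = [("cat", "chat"), ("cat", "chat"), ("dog", "chien"), ("dog", "chien")]
features = [
    lambda x, y: 1.0 if (x, y) == ("cat", "chat") else 0.0,
    lambda x, y: 1.0 if (x, y) == ("dog", "chien") else 0.0,
]
w, ys = train_maxent(data, features)
print(predict(w, ys, features, "cat"))  # -> chat
```

Each feature function fires on a particular source/target pairing, so after training, the learned weights make the model prefer the translation observed in the data; the paper's actual feature design for translation rules is more elaborate.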
    <Paragraph position="1"> On the other hand, dictionaries for multilingual natural language processing tasks such as machine translation have usually been built by hand. Since this work requires a great deal of labor and it is difficult to keep dictionary descriptions consistent, research on automatically building machine translation dictionaries (translation rules) from corpora has recently become active (Kay and Röscheisen, 1993; Kaji and Aizono, 1996).</Paragraph>
    <Paragraph position="2"> In this paper, we observe that estimating a language model by the ME method is well suited to learning translation rules, and we propose several methods to resolve the problems that arise in adapting the ME method to this task.</Paragraph>
  </Section>
</Paper>