<?xml version="1.0" standalone="yes"?>
<Paper uid="P98-1035">
  <Title>Exploiting Syntactic Structure for Language Modeling</Title>
  <Section position="2" start_page="0" end_page="0" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> The main goal of the present work is to develop a language model that uses syntactic structure to model long-distance dependencies. A similar attempt was made by the dependency modeling group during the summer 1996 DoD Workshop. The model we present is closely related to the one investigated in (Chelba et al., 1997), but differs in a few important aspects:
* our model operates in a left-to-right manner, allowing the decoding of word lattices, as opposed to the model referred to above, which could process only whole sentences, restricting its applicability to n-best list re-scoring; the syntactic structure is developed as a model component;
* our model is a factored version of the one in (Chelba et al., 1997), thus enabling the calculation of the joint probability of words and parse structure; this was not possible in the previous case due to the huge computational complexity of that model.</Paragraph>
    <Paragraph position="1"> Our model develops syntactic structure incrementally while traversing the sentence from left to right. This is the main difference between our approach and other approaches to statistical natural language parsing. Our parsing strategy is similar to the incremental syntax proposals made relatively recently in the linguistic community (Philips, 1996). We present the probabilistic model, its parameterization, and a few experiments meant to evaluate its potential for speech recognition.</Paragraph>
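    <Paragraph position="2"> The left-to-right chain-rule factorization described above can be sketched as follows. This is a minimal illustration, not the paper's actual parameterization: the toy distributions, the `joint_log_prob` helper, and the exact interleaving of words and parser actions are assumptions made for the sketch.

```python
import math

# Toy sketch of a joint model over words and parse actions, accumulated
# left to right by the chain rule. All probabilities below are illustrative
# placeholders, not the paper's model.
def joint_log_prob(steps, word_model, action_model):
    """steps: list of (word, actions_after_word) pairs.
    Returns log P(words, parse) accumulated left to right."""
    logp = 0.0
    history = []  # words and actions seen so far (the conditioning context)
    for word, actions in steps:
        logp += math.log(word_model(word, history))  # P(w_k | history)
        history.append(word)
        for a in actions:  # parser moves taken after seeing w_k
            logp += math.log(action_model(a, history))  # P(a | history)
            history.append(a)
    return logp

# Uniform toy distributions, just to make the sketch runnable.
word_p = lambda w, h: 0.25
action_p = lambda a, h: 0.5

steps = [("the", []), ("dog", ["reduce"]), ("barked", ["reduce", "reduce"])]
total = joint_log_prob(steps, word_p, action_p)
```

Because every step conditions only on the left context, partial hypotheses can be scored as soon as each word arrives, which is what makes word-lattice decoding possible.</Paragraph>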
  </Section>
</Paper>