<?xml version="1.0" standalone="yes"?>
<Paper uid="P05-1043">
<Title>Learning Stochastic OT Grammars: A Bayesian approach using Data Augmentation and Gibbs Sampling</Title>
<Section position="9" start_page="352" end_page="352" type="ackno">
<SectionTitle>9 Future work</SectionTitle>
<Paragraph position="0">This work can be extended in two directions. First, it would be interesting to consider other types of OT grammars, in connection with the linguistics literature. For example, the variances of the normal distribution are fixed in the current paper, but they may also be treated as unknown parameters (Nagy and Reynolds, 1997). Moreover, constraints may be parameterized as mixture distributions, representing other approaches to using OT to model linguistic variation (Anttila, 1997).</Paragraph>
<Paragraph position="1">The second direction is to introduce informative priors motivated by linguistic theories. We found through experimentation that identifiability often becomes an issue for more sophisticated grammars: some constraints may have multiple modes in their posterior marginals, and it is difficult to extract modes in high dimensions (note that posterior marginals alone do not provide enough information about the modes of the joint distribution). The use of priors is therefore needed to make more reliable inferences.</Paragraph>
<Paragraph position="2">In addition, priors have a linguistic appeal, since current research on the &quot;initial bias&quot; in language acquisition (e.g. Faithfulness Low (Hayes, 2004)) can be formulated as priors from a Bayesian perspective. Implementing these extensions will merely involve modifying p(G|Y,D), which we leave for future work.</Paragraph>
</Section>
</Paper>
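The variance extension mentioned in the section above can be illustrated with a short sketch. This is not the paper's algorithm: it assumes a conjugate inverse-gamma prior on each constraint's variance (my choice for illustration) and shows the resulting Gibbs update for one constraint, given the augmented ranking values that the paper's data-augmentation scheme would supply. The function name and the prior hyperparameters `a`, `b` are hypothetical.

```python
import random

# Hypothetical sketch. In Stochastic OT, each constraint i has ranking
# values drawn as r ~ Normal(mu_i, sigma_i^2). The paper fixes sigma_i;
# the extension discussed treats sigma_i^2 as unknown. Under a conjugate
# inverse-gamma prior IG(a, b), the Gibbs update for sigma_i^2 given the
# augmented ranking values is another inverse-gamma draw.

def sample_variance_gibbs(ranking_values, mu, a=2.0, b=1.0, rng=random):
    """One Gibbs update for a constraint's variance, IG(a, b) prior assumed."""
    n = len(ranking_values)
    sse = sum((r - mu) ** 2 for r in ranking_values)
    a_post = a + n / 2.0
    b_post = b + sse / 2.0
    # An InverseGamma(a_post, b_post) draw: scale the reciprocal of a
    # Gamma(a_post, scale=1) draw by b_post.
    return b_post / rng.gammavariate(a_post, 1.0)
```

A full sampler would alternate this step with the paper's existing updates for the ranking values and constraint means, which is the sense in which only p(G|Y,D) needs modifying.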