<?xml version="1.0" standalone="yes"?>
<Paper uid="W05-0502">
  <Title>Simulating Language Change in the Presence of Non-Idealized Syntax</Title>
  <Section position="7" start_page="15" end_page="16" type="concl">
    <SectionTitle>
6 Discussion and conclusion
</SectionTitle>
    <Paragraph position="0"> With reasonable parameter settings, populations in this simulation are able to both gain and lose V2, an improvement over other simulations, including earlier versions of this one, that tend to always converge to SVO+V2+pro-drop. Furthermore, such changes can happen spontaneously, without an externally imposed catastrophe. The simulation does not give reasonable results unless learners can tell which component of a sentence is the topic. Preliminary results suggest that the PARAMETER-CRUCIAL learning algorithm gives more realistic results than the LEARN-ALWAYS algorithm, supporting the hypothesis that much of language acquisition is based on cue sentences that are in some sense unambiguous indicators of the grammar that generates them. Timing properties of the simulation suggest that it takes many generations for a population to effectively forget its original state, suggesting that further research should focus on the simulation's transient behavior rather than on its stationary distribution.</Paragraph>
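The contrast between the two learners can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's implementation: grammars are reduced to two binary parameters, and each input sentence is represented by the set of parameter settings that could have generated it. The LEARN-ALWAYS learner updates on every sentence, guessing when the input is ambiguous; the PARAMETER-CRUCIAL learner updates a parameter only when the sentence is a cue, i.e. when all analyses agree on that parameter's value.

```python
import random

# Illustrative parameters only; the actual simulation's grammar space is richer.
PARAMS = ("V2", "pro_drop")

def learn_always(weights, sentence_analyses, rate=0.1):
    """Update every parameter on every sentence, guessing when ambiguous."""
    for p in PARAMS:
        values = {a[p] for a in sentence_analyses}
        if len(values) == 1:
            target = values.pop()           # all analyses agree
        else:
            target = random.choice(sorted(values))  # ambiguous: pick arbitrarily
        weights[p] += rate * (target - weights[p])
    return weights

def parameter_crucial(weights, sentence_analyses, rate=0.1):
    """Update a parameter only on cue sentences: inputs whose every
    analysis agrees on that parameter's value."""
    for p in PARAMS:
        values = {a[p] for a in sentence_analyses}
        if len(values) == 1:                # unambiguous indicator
            target = values.pop()
            weights[p] += rate * (target - weights[p])
    return weights
```

On a sentence compatible with both pro-drop settings but only with V2 = 1, the PARAMETER-CRUCIAL learner moves its V2 weight and leaves pro-drop untouched, whereas the LEARN-ALWAYS learner also nudges pro-drop toward an arbitrary value.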
    <Paragraph position="1">  In future research, this simulation will be extended to include other possible grammars, particularly approximations of Middle English and Icelandic. That should be an appropriate level of detail for studying the loss of V2. For studying the rise of V2, the simulation should also include V1 grammars as in the Celtic languages, where the finite verb raises but the topic remains in place. According to Kroch (personal communication), V2 is thought to arise from V1 languages rather than directly from SOV or SVO languages, so the learning algorithm should be tuned so that V1 languages are more likely to become V2 than non-V1 languages.</Paragraph>
    <Paragraph position="2"> The learning algorithms described here do not include any bias in favor of unmarked grammatical features, a property that is thought to be necessary for the acquisition of subset languages. One could easily add such a bias by starting newborns with non-uniform prior information, such as Beta(1,20). It is generally accepted that V2 is marked based on derivational economy.2 Pro-drop is more complicated, as there is no consensus on which setting is marked.3 The correct biases are not obvious, and determining them requires further research. Further extensions will include more complex population structure and literacy, with the goal of eventually comparing the results of the simulation to data from the Pennsylvania Parsed Corpus of Middle English.</Paragraph>
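The Beta-prior bias mentioned above can be sketched as follows. This is an assumed formulation, not the paper's code: a newborn's initial belief that a parameter takes its marked value is drawn from Beta(1, 20), whose mass concentrates near 0 (mean 1/21), so the unmarked setting is strongly favored until cue sentences accumulate via the standard Beta-Bernoulli update.

```python
import random

def newborn_parameter(alpha=1.0, beta=20.0):
    """Draw an initial belief P(parameter = marked) from Beta(alpha, beta).
    With Beta(1, 20) the draw is almost always close to 0."""
    return random.betavariate(alpha, beta)

def update(alpha, beta, observed_marked):
    """Conjugate Beta-Bernoulli update after one cue sentence."""
    if observed_marked:
        return alpha + 1, beta
    return alpha, beta + 1
```

Under this scheme a learner can still acquire a marked setting such as V2, but only after enough unambiguous evidence outweighs the prior's pull toward the unmarked value.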
  </Section>
</Paper>