
<?xml version="1.0" standalone="yes"?>
<Paper uid="C92-3127">
  <Title>A HYBRID SYSTEM FOR QUANTIFIER SCOPING</Title>
  <Section position="4" start_page="0" end_page="0" type="concl">
    <SectionTitle>
3. Conclusions
</SectionTitle>
    <Paragraph position="0"> As noted, semantic and pragmatic factors have deliberately been unaddressed. But a few words are in order on their eventual incorporation. There are of a number of issues that always arise where semantic processing is concerned: compositionality, knowledge representation, etc.</Paragraph>
    <Paragraph position="1"> But what I want to address is an issue peculiar to ACRES DE COIANG-92, NANIES, 23-28 AO~rI 1992 8 6 3 PRO(:. oV COLING-92, NANTES. AUG. 23-28, 1992 the current system: namely, should semantic (read: semantic/pragmatic) factors be incorporated by hybridization or integration? That is, should leX.sy n be replaced by flex-syn-sem-prag, i.e. a nctaon mat consiaers all relevant tactors before making any scope judgment? Or should flex-syn be integrated with semantic specialists? There are problems with either alternative.</Paragraph>
    <Paragraph position="2"> The problem with full hybridization is that the database would have to be remade from scratch, since the value flex.s~nosemy~prag(blah) is not a function of flex syn(btah) lnat is, flex s-n sere * - . . .~y prag(blah) is not the result of combmmg flexsyn(blah) with other judgments based on blah: that would be a mixed IS/hybrid model, the second alternative. As noted in 2.2, new syntactic or lexical factors cannot be added to PSM in a controlled way. The same is true for any new factors. My goal in this paper has been to show that syntactic and lexical factors are well-behaved enough that non-modularity restricted to these factors is a burden which however is bearable, and worth bearing. But if all factors including infinite complex meanings are hybridized, the problems become intractable. It would be perhaps impossible to determhae even a large portion of the function flex-syn-sem-prag. And even if it were troy excrnciatmg out not impossible, the effort would have to be largely duplicated whenever the data was extended. It's not for nothing that modularity is a hallmark of good design. (Note also, incidentally, that scoping would have to entirely follow translation, unlike Figure I.) As a working hypothesis I have adopted the second alternative. Yet the argument of section I, extended to semantic factors, suggests that if the system is to capture the complex and subtle variations in human scope judgments, these factors should be not integrated but hybridized.</Paragraph>
    <Paragraph position="3"> To back away from this because it makes the engineering too hard may be understandable, but we should not forget the joke about the guy looking for lost keys where he knows they aren't because the light is better there. Modularity may be imperative for approaching complex problems, but there is no a priori reason why the mind must be modular. Indeed Fodor (1983) has speculated that much of it may not be, and hence he is pessimistic about cognitive science.</Paragraph>
    <Paragraph position="4"> Obviously this is a deep issue, and I do not claim to have resolved it (for more, see Chien 1992). Nor am I saying either that in computational linguistics we should model human minds or that we should just design practical systems. I am suggesting that these goals ultimately may be incompatible - not because minds are too imprecise (e.g. Glymour 1987), but because they are too precise.</Paragraph>
  </Section>
class="xml-element"></Paper>