<?xml version="1.0" standalone="yes"?> <Paper uid="P88-1033"> <Title>A DEFINITE CLAUSE VERSION OF CATEGORIAL GRAMMAR</Title> <Section position="7" start_page="274" end_page="274" type="concl"> <SectionTitle> 5 Structural Rules </SectionTitle> <Paragraph position="0"> We now briefly examine the interaction of structural rules with parsing.</Paragraph> <Paragraph position="1"> In intuitionistic sequent systems, structural rules define ways of subtracting, adding, and reordering hypotheses in sequents during proofs. We have the following three structural rules: Thinning, Contraction, and Interchange.</Paragraph> <Section position="1" start_page="274" end_page="274" type="sub_section"> <SectionTitle> Hypotheses </SectionTitle> <Paragraph position="0"> All of the structural rules above are implicit in proof rules (I)-(V), and they are all needed to obtain intuitionistic soundness and completeness as in [7]. By contrast, Lambek's propositional calculus has none of the structural rules; for instance, Interchange is not admitted, since the hypotheses deriving the type of a given string must also account for the positions of the words to which they have been assigned as types, and must obey the strict string-adjacency requirement between functions and arguments of classical CG. Thus, Lambek's calculus must assume ordered lists of hypotheses, so as to account for word-order constraints. Under our approach, word-order constraints are obtained declaratively, via sharing of string positions, and there is no strict adjacency requirement. In proof-theoretic terms, this translates directly into viewing programs as unordered sets of hypotheses.</Paragraph> </Section> <Section position="2" start_page="274" end_page="274" type="sub_section"> <SectionTitle> 5.2 Trading Contraction against Decidability </SectionTitle> <Paragraph position="0"> The logic defined by rules (I)-(V) is in general undecidable, but it becomes decidable as soon as Contraction is disallowed. 
In fact, if a given hypothesis can be used at most once, then clearly the number of internal nodes in a proof tree for a sequent P ⇒ G is at most equal to the total number of occurrences of →, ∧ and ∃ in P ⇒ G, since these are the logical constants for which proof rules with corresponding inference figures have been defined.</Paragraph> <Paragraph position="1"> Hence, no proof tree can contain infinite branches, and decidability follows.</Paragraph> <Paragraph position="2"> Now, it seems a plausible conjecture that the programs directly defined by input strings as in Section 4.2 never need Contraction. In fact, each time we use a hypothesis in the proof, either we consume a corresponding word in the input string, or we consume a &quot;virtual&quot; constituent corresponding to a step of hypothesis introduction determined by rule (V) for implications. (Constructions like parasitic gaps can be accounted for by associating specific lexical items with clauses which determine the simultaneous introduction of gaps of the same type.) If this conjecture can be formally confirmed, then we could automate our formalism via a metainterpreter based on rules (I)-(V), but implemented in such a way that clauses are removed from programs as soon as they are used.</Paragraph> <Paragraph position="3"> Being based on a decidable fragment of logic, such a metainterpreter would not be affected by the kind of infinite loops that normally characterize DCG parsing.</Paragraph> </Section> <Section position="3" start_page="274" end_page="274" type="sub_section"> <SectionTitle> 5.3 Thinning and Vacuous Abstraction </SectionTitle> <Paragraph position="0"> Thinning can cause problems of overgeneration, as hypotheses introduced via rule (V) may end up never being used, since other hypotheses can be used instead. 
For instance, the type assignment can be used to account for the well-formedness of both which [ I shall put a book on _ ] and which [ I shall put _ on the table ], but will also accept the ungrammatical which [ I shall put a book on the table ]. In fact, as we do not have to use all the hypotheses, in this last case the virtual noun phrase corresponding to the extraction site is added to the program but is never used. Notice that our conjecture in Section 4.4.2 was that Contraction is not needed to prove the theorems corresponding to the types of grammatical strings; by contrast, Thinning gives us more theorems than we want. On the other hand, eliminating Thinning would compromise the proof-theoretic properties of (I)-(V) with respect to intuitionistic logic, and the corresponding Kripke model semantics of our programming language.</Paragraph> <Paragraph position="1"> There is, however, a formally well-defined way to account for the ungrammaticality of the example above without changing the logical properties of our inference system. We can encode proofs as terms of the Lambda Calculus and then filter out certain kinds of proof terms. In particular, a hypothesis introduction, determined by rule (V), corresponds to a step of λ-abstraction, while a hypothesis elimination, determined by one of rules (I)-(II), corresponds to a step of functional application and λ-contraction. Hypotheses which are introduced but never eliminated result in corresponding cases of vacuous abstraction. Thus, the three examples above have the three following Lambda encodings of the proof of the sentence for which an extraction site is hypothesized, where the last ungrammatical example corresponds to a case of vacuous abstraction. Constraints for filtering proof terms characterized by vacuous abstraction can be defined in a straightforward manner, particularly if we are working with a metainterpreter implemented on top of a language based on Lambda terms, such as Lambda-Prolog [8, 9]. 
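As an illustration of what such a filter might look like, here is a minimal sketch in Python rather than the λProlog setting envisaged above; the tuple encoding of terms and the function names are our own assumptions, not from the paper. A proof term passes the filter only if every λ-abstraction binds a variable that actually occurs free in its body:

```python
# Lambda terms encoded as tuples (an illustrative encoding, not the paper's):
#   ("var", name) | ("app", fun, arg) | ("lam", name, body)

def free_vars(t):
    """Set of variable names occurring free in term t."""
    tag = t[0]
    if tag == "var":
        return {t[1]}
    if tag == "app":
        return free_vars(t[1]) | free_vars(t[2])
    if tag == "lam":
        return free_vars(t[2]) - {t[1]}
    raise ValueError(f"unknown term tag: {tag}")

def has_vacuous_abstraction(t):
    """True if some lambda in t binds a variable unused in its body,
    i.e. the proof introduced a hypothesis it never eliminated."""
    tag = t[0]
    if tag == "var":
        return False
    if tag == "app":
        return has_vacuous_abstraction(t[1]) or has_vacuous_abstraction(t[2])
    # lam case: vacuous here, or vacuous somewhere deeper in the body
    _, x, body = t
    return x not in free_vars(body) or has_vacuous_abstraction(body)

# λx. f x : the hypothesized extraction site x is used -- accepted
grammatical = ("lam", "x", ("app", ("var", "f"), ("var", "x")))
# λx. a : the hypothesis x is introduced but never used -- filtered out
vacuous = ("lam", "x", ("var", "a"))
```

A metainterpreter could apply `has_vacuous_abstraction` to the proof term of each completed derivation and reject those that abstract vacuously, mirroring the constraint described above without touching the inference rules themselves.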
Besides the desire to maintain certain well-defined proof-theoretic and semantic properties of our inference system, there are other reasons for using this strategy instead of disallowing Thinning. Indeed, our target here seems to be specifically the elimination of vacuous Lambda abstraction. Absence of vacuous abstraction has been proposed by Steedman [17] as a universal property of human languages. Morrill and Carpenter [11] show that other well-formedness constraints formulated in different grammatical theories such as GPSG, LFG and GB reduce to this same property. Moreover, Thinning gives us a straightforward way to account for situations of lexical ambiguity, where the program defined by a certain input string can in fact contain hypotheses which are not needed to derive the type of the string.</Paragraph> </Section> </Section> </Paper>