<?xml version="1.0" standalone="yes"?>
<Paper uid="W04-2322">
  <Title>A Rule Based Approach to Discourse Parsing</Title>
  <Section position="8" start_page="13" end_page="13" type="concl">
    <SectionTitle>
6 Conclusions and Directions for Future Work
</SectionTitle>
    <Paragraph position="0"> The U-LDM discussed in this paper represents a significant advance in the theoretical understanding of the nature of discourse structure. The explicit rules for discourse segmentation based on the syntactic reflexes of semantic structures allow analysts for the first time to relate the semantics underlying the syntactic structure of sentences to the discourse segments needed to account for continuity. Understanding the semantic justification for the choice of segments is important for adapting the rules to other languages, which may have different syntactic reflexes of semantic information. In addition, the rules for discourse attachment make clear for the first time the principles of discourse continuity for &quot;coherent&quot; discourse. In the future, we plan to deepen our understanding of the rules for discourse attachment and, in particular, to begin applying machine learning techniques to increase our understanding of the complex interrelationships that obtain among them.</Paragraph>
    <Paragraph position="1"> While full implementation of the principles of discourse organization outlined here is beyond the state of the art in some respects (i.e.</Paragraph>
    <Paragraph position="2"> determining that a sentence is generic in English is non-trivial in many instances although machine learning techniques might be useful in this regard), we believe that the PALSUMM System demonstrates the practicality of symbolic discourse parsing using the U-LDM Model. The infrastructure for this system has been successfully applied to the task of summarizing documents without a complex semantic component, extensive world knowledge and inference or a subjectively annotated corpus. We believe that the U-LDM parsing methods discussed here can be used for all other complex NLP tasks in which symbolic parsing is appropriate, especially RST trees, our basic algorithm is essentially simpler because RST trees are dependency trees over a large set of different link types, whereas LDM trees are constituent trees over effectively two basic node types: subordinations and non-subordinations.</Paragraph>
    <Paragraph position="3"> those involving high value document collections where precision is critical. In addition, the structures generated through symbolic parsing by the system will be invaluable for training statistical and probabilistic systems.</Paragraph>
  </Section>
</Paper>