<?xml version="1.0" standalone="yes"?>
<Paper uid="E91-1021">
<Title>USING PLAUSIBLE INFERENCE RULES IN DESCRIPTION PLANNING</Title>
<Section position="3" start_page="0" end_page="0" type="intro">
<SectionTitle> INTRODUCTION </SectionTitle>
<Paragraph position="0"> Examples, analogies and class identification are used in many explanations and descriptions. Yet current text generation techniques all fail to tackle the problem of when an example, analogy or class is appropriate, what example, analogy or class is best, and exactly what the user may infer from a given example, analogy or class. McKeown, for example, in her identification schema (given in figure 1) includes the 'rhetorical predicates' identification (as an instance of some class), analogy, particular-illustration and attributive (McKeown, 1985). From each of these, different information could be inferred by the user. In a human explanation they might be used to efficiently convey a great deal of information about the object, or to reinforce some information about an object so it may be better recalled. Yet in McKeown's schema-based approach the only mechanism for selecting between these different explanation options is the initial pool of knowledge available to be conveyed, and focus rules, which just enforce some local coherence on the discourse. (* This work was carried out while the author was at the Department of Artificial Intelligence, University of Edinburgh, funded by a postdoctoral fellowship from the Science and Engineering Research Council. Thanks to Ehud Reiter, Paul Brna and to the anonymous reviewers for helpful comments.)</Paragraph>
<Paragraph position="1"> A particular example or analogy could perhaps be selected using the functions interfacing the rhetorical predicates to the domain knowledge base, but this is not discussed in the theory.</Paragraph>
<Paragraph position="2"> More recently, Moore has included examples, analogies etc. in her text planner (Moore, 1990). She includes planning operators to describe-by-superclass, describe-by-abstraction, describe-by-example, describe-by-analogy and describe-by-parts-and-use. Two of these are illustrated in figure 2. But again there are no principled ways of selecting which strategy to use (beyond, for example, possibly selecting an analogy if the analogous concept is known), and the effect of each strategy is the same - that the relevant concept is 'known'. In reality, of course, the detailed effects of the different strategies on the hearer's knowledge will be very different, and will depend on their prior knowledge. Failing to take this into account results in possibly incoherent dialogues which don't address the speaker's real communicative goals.</Paragraph>
<Paragraph position="3"> The rest of this paper will present an approach to the problem of selecting between different statement types in a description, based on a set of inference rules for guessing what the hearer could infer given a particular statement. These guesses are used to guide the choice of examples, analogies, class identification and attributes given particular goals, and influence how the user model is updated after these kinds of statements are used.</Paragraph>
<Paragraph position="4"> The paper first describes the overall framework for explanation generation. This is followed by a brief discussion of the inference rules and knowledge representation used, and a number of examples where the system is used to generate leading descriptions of bicycles.
The approach is intended to be complementary to existing approaches which emphasise the coherence of the text, and could reasonably be combined with these.</Paragraph>
</Section>
</Paper>
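
As a rough illustration of the idea outlined in the final paragraphs, the sketch below shows how a small set of plausible inference rules might score candidate statement types (class identification, example, attributive) against a communicative goal and then update a user model with the guessed inferences. This is a minimal, hypothetical sketch: the proposition strings, rule set, statement kinds and scoring are illustrative assumptions, not the representation or system described in the paper.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Statement:
        kind: str      # e.g. "class-identification", "example", "attributive"
        content: str   # proposition asserted by the statement

    def infer_from_class(statement, user_model):
        """If told 'X isa C' and the hearer knows 'C has P', guess they infer 'X has P'."""
        inferred = set()
        if statement.kind == "class-identification" and " isa " in statement.content:
            obj, cls = statement.content.split(" isa ")
            for prop in user_model:
                if prop.startswith(cls + " has "):
                    inferred.add(obj + " has " + prop[len(cls + " has "):])
        return inferred

    def infer_from_example(statement, user_model):
        """If given an example E of X and the hearer knows 'E has P', guess 'X has P'."""
        inferred = set()
        if statement.kind == "example" and " is an example of " in statement.content:
            example, obj = statement.content.split(" is an example of ")
            for prop in user_model:
                if prop.startswith(example + " has "):
                    inferred.add(obj + " has " + prop[len(example + " has "):])
        return inferred

    RULES = [infer_from_class, infer_from_example]

    def guessed_inferences(statement, user_model):
        """What the hearer is plausibly taken to infer from the statement."""
        inferred = {statement.content}
        for rule in RULES:
            inferred |= rule(statement, user_model)
        return inferred

    def choose_statement(candidates, user_model, goal_props):
        """Pick the candidate whose guessed inferences cover most of the communicative goals."""
        return max(candidates,
                   key=lambda s: len(goal_props & guessed_inferences(s, user_model)))

    if __name__ == "__main__":
        user_model = {"vehicle has wheels", "racer has drop handlebars"}
        goals = {"bicycle has wheels"}
        candidates = [
            Statement("class-identification", "bicycle isa vehicle"),
            Statement("example", "racer is an example of bicycle"),
            Statement("attributive", "bicycle has pedals"),
        ]
        best = choose_statement(candidates, user_model, goals)
        user_model |= guessed_inferences(best, user_model)  # update the user model with the guesses
        print(best, sorted(user_model))

Under these assumptions, the class identification is preferred because its predicted inferences cover the goal, and the user model is updated with what the hearer is guessed to have inferred rather than only with what was literally said.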