<?xml version="1.0" standalone="yes"?> <Paper uid="W06-3409"> <Title>Pragmatic Discourse Representation Theory</Title> <Section position="3" start_page="0" end_page="60" type="metho"> <SectionTitle> 2 A More Pragmatic DRT </SectionTitle> <Paragraph position="0"> This section presents a more pragmatic DRT focusing on the relationship between speaker generation and the linguistic content, and between the linguistic content and hearer recognition. Figure 1 represents the link between our representation of the speaker's cognitive state, the speaker's linguistic content and the hearer's cognitive state or DRS (Discourse Representation Structure). To our knowledge, this relationship has not been explored in the literature and deserves investigation.</Paragraph> <Paragraph position="1"> Generally speaking, an utterance is generated when there is some discrepancy between the speaker's beliefs and the speaker's beliefs about the hearer's beliefs. This discrepancy leads to an utterance, i.e. linguistic content. The linguistic content is the window the hearer has onto the speaker's state of mind; it is what influences hearer recognition.</Paragraph> <Paragraph position="2"> By analyzing the linguistic content provided by the speaker, the hearer can form a hypothesis about the speaker's state of mind.</Paragraph> <Section position="1" start_page="58" end_page="59" type="sub_section"> <SectionTitle> 2.1 New DR-Structures </SectionTitle> <Paragraph position="0"> The DRT representation introduced here extends the standard DRT language and structure, resulting in a pragmatically oriented framework for representing this link. Separate DRSs are created to represent each agent, and the DRSs are updated with each new utterance. Each DRS representing an agent's cognitive state includes the two personal reference markers 'i' and 'you'. When 'i' is used in a DRS, it refers to the agent's self within that DRS; i.e. 
if the agent is the speaker, then 'i' refers to the speaker in the entire DRS. To refer to the other agent, 'you' is used; continuing the example, 'you' in this case refers to the hearer. To account for agents' cognitive states and their meta-beliefs, a sub-DRS called the belief DRS is created within each agent's cognitive state; it includes the speaker's beliefs about the hearer's beliefs. Additionally, a new DRS representing weaker beliefs, called acceptance, is introduced. The same level of embedding offered to belief DRSs is available in acceptance DRSs.</Paragraph> <Paragraph position="1"> The acceptance DRS includes what the speaker accepts as well as what the speaker takes the hearer to accept. Provided the speaker has sufficient information, the acceptance DRS can also contain an embedded DRS representing what the hearer takes the speaker to accept.</Paragraph> <Paragraph position="2"> In addition to the expanded belief DRS, each agent's cognitive state contains an intention DRS. Intention in the sense used here refers to the agent's goals in making an utterance, which are represented by the corresponding dialogue act marked in the intention DRS. The hearer's intention DRS represents the recognized utterance and contains elements of utterance-making generally associated with pragmatics, such as the function of an utterance, i.e. its dialogue act. This pragmatic enrichment strengthens the link between an agent's intentions and the linguistic form uttered. What is proposed is that the intention DRS be designed to include the linguistic content provided within utterances.</Paragraph> <Paragraph position="3"> To further enhance the link between agents' cognitive states and the linguistic content of their utterances, the intention DRS contains the rich pragmatic information offered by explicitly marking the presupposition (given information) and the assertion (new information) of the current utterance. 
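The nesting of belief, acceptance, and intention DRSs described above can be sketched as a simple data structure. The following is a hypothetical Python illustration; the class names, fields, and drs labels are our own devices, not part of the paper's formalism:

```python
# Hypothetical sketch of the nested DRS structures; names are illustrative.

class DRS:
    """A Discourse Representation Structure: a universe of discourse
    referents plus a set of conditions, with optional embedded DRSs."""
    def __init__(self, label):
        self.label = label
        self.universe = set()   # discourse referents, e.g. {'i', 'you', 'x'}
        self.conditions = []    # labelled conditions, e.g. [('b1', 'rabbit(x)')]
        self.embedded = {}      # e.g. what this agent takes the other to believe

class CognitiveState:
    """Main DRS for one agent, with belief, acceptance, and intention sub-DRSs."""
    def __init__(self, agent):
        self.agent = agent
        self.main = DRS('drs1')
        self.main.universe.update({'i', 'you'})        # 'i' = this agent's self
        self.acceptance = DRS('drs2')                  # weaker beliefs
        self.acceptance.embedded['you'] = DRS('drs3')  # what 'you' is taken to accept
        self.belief = DRS('drs4')
        self.belief.embedded['you'] = DRS('drs5')      # beliefs about 'you's beliefs
        self.intention = DRS('drs6')                   # current utterance's content

speaker = CognitiveState('A')
speaker.belief.conditions.append(('b1', 'buy(tom, mary, puppy)'))
```

A third level of embedding (the agent's beliefs about the other agent's beliefs about the agent's own beliefs) would be added to the embedded DRSs only when needed, mirroring the on-demand embedding described above.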
The intention DRS is a separate DRS from the belief DRS.</Paragraph> <Paragraph position="4"> The beliefs of an agent provide the motivation for making an utterance, and the intention DRS represents the speaker's intended message. The recognition of an utterance gives the hearer an insight into the agent's beliefs. Depending upon the particular dialogue represented, the intention DRS may contain the speaker's intentions, the hearer's intentions, or both.</Paragraph> <Paragraph position="5"> The intention DRS functions as the immediate context, the one containing the utterance being generated or recognized. The belief and acceptance DRSs function as background context, containing information pertaining to the dialogue as a whole and not just the current utterance. This contextual division of labour is useful in that the information represented in the intention DRS directly feeds into the speaker's utterance, and is then inferred by the hearer through the linguistic content. The hearer's intention DRS includes the inferred speaker intentions in uttering the current utterance. This affords the flexibility to model information that the hearer has inferred but has not yet decided to accept or believe and which is, therefore, not yet included in either the belief or acceptance DRS. For instance, while the hearer in example (1) has recognized S1's utterance, he has not yet accepted it. This motivates separating the representation of beliefs from intentions. 
(1) S1: Bob's trophy wife is cheating on him.</Paragraph> <Paragraph position="6"> H1: When did Bob get married?</Paragraph> </Section> <Section position="2" start_page="59" end_page="60" type="sub_section"> <SectionTitle> 2.2 Extending DRT Language </SectionTitle> <Paragraph position="0"> In addition to the three DRSs introduced above, in order to make the link between speaker generation, linguistic content, and hearer recognition more explicit, labels, 'labeln', n an integer, are introduced.</Paragraph> <Paragraph position="1"> The labels mark the distinction between presupposition and assertion, and the distinction between weak and strong beliefs. Furthermore, the labels can be used to refer to a particular predicate by another complex predicate. The labels increase the expressive power from an essentially first-order formalism to a higher-order formalism. Presuppositions are marked by a presupposition label 'pn'. Similarly, DRSs inside the main speaker or hearer DRS are labeled 'drsn'. Assertions are marked by 'an' to strengthen the connections between the linguistic form (in the separation between presupposition and assertion) and the representation of beliefs. Believed information labeled 'bn' inside a belief DRS or accepted information labeled 'cn' inside an acceptance DRS can be either presupposed or asserted inside the intention DRS. Thus, the labels in the intention DRS can only be 'p' or 'a'.</Paragraph> <Paragraph position="2"> Conditions referring to attitudes (acceptance, beliefs, and intentions) have been added to the extended semantics of DRT. 
Figure 2 shows three embedded DRSs, the acceptance DRS (drs2), the belief DRS (drs4), and the intention DRS (drs6), representing: (2) A: Tom is buying Mary a puppy.</Paragraph> <Paragraph position="3"> B: That's sweet.</Paragraph> <Paragraph position="4"> DRSs are referred to by the attitude describing them.</Paragraph> <Paragraph position="5"> For example, attitude(i,'BEL', drs4) refers to the DRS containing the speaker's beliefs, using the label for the belief DRS, drs4. Other conditions are allowed to employ 'i' as an argument. Similarly, attitude(i,'accept', drs2) refers to the DRS containing the speaker's acceptances, using the label for the acceptance DRS, drs2, and attitude(i,'INT', drs6) refers to the DRS containing the speaker's intention in uttering example (2), using the label for the intention DRS, drs6. The speaker's acceptance DRS contains an embedded DRS, drs3, for what the speaker takes the hearer to accept. In this case, it is empty, as no weakly believed propositions have been introduced yet. Similarly, the belief DRS contains space for the speaker's beliefs about the hearer's beliefs, drs5. The intention DRS, drs6, contains the linguistic content of the utterance that the speaker is about to make, as well as the relevant dialogue acts.</Paragraph> <Paragraph position="6"> In Figure 2, there are essentially three levels of embedding in a main DRS. If we look at the belief DRS, the first embedded DRS is the agent's own belief DRS. Level two is the agent's beliefs about the other agent's beliefs. Level three is inserted when necessary and represents the agent's beliefs about the other agent's beliefs about the agent's own beliefs. DRSs at the same level of embedding have similar status. For example, the agent's acceptance and belief DRSs have equal status. However, the only discourse referents they have in common are the ones in the main DRS's universe. 
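The labelled conditions and attitude predicates for example (2) might be encoded as follows. This is a hedged Python sketch; the dictionaries, the concrete conditions, and the drs_for helper are our own illustrative assumptions, not the paper's implementation:

```python
# Illustrative encoding of the labelled conditions for example (2).
# 'p'/'a' labels mark presupposition vs assertion in the intention DRS;
# 'b' labels mark believed and 'c' labels accepted information elsewhere.
intention_drs6 = {
    'p1': 'puppy(z)',           # presupposed (given) information
    'a1': 'buy(tom, mary, z)',  # asserted (new) information
}
belief_drs4 = {'b1': 'buy(tom, mary, z)'}
acceptance_drs2 = {}            # empty: nothing weakly believed yet

# Attitude conditions link the agent 'i' to each labelled sub-DRS.
attitudes = [('i', 'accept', 'drs2'),
             ('i', 'BEL', 'drs4'),
             ('i', 'INT', 'drs6')]

DRS_TABLE = {'drs2': acceptance_drs2,
             'drs4': belief_drs4,
             'drs6': intention_drs6}

def drs_for(agent, attitude):
    """Resolve an attitude condition such as attitude(i,'BEL',drs4)
    to the sub-DRS its label points at."""
    for a, att, label in attitudes:
        if a == agent and att == attitude:
            return DRS_TABLE[label]
    return None
```

The point of the labels here is that the same proposition can appear under a 'b' label in the belief DRS and under an 'a' label in the intention DRS, making the belief-to-assertion link explicit.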
Each equal-level embedding has its own set of discourse referents, as well as its own conditions.</Paragraph> <Paragraph position="7"> Discourse referents of the same and higher levels of embedding are accessible to lower levels of embedding and are therefore not represented in the lower-level embedding's universe. This does not entail that, when a lower-level embedding makes use of a discourse referent introduced in a higher-level embedding, the two agents share the same internal or external anchors. For example, when talking about a rabbit, the speaker's representation of the rabbit will be b1:rabbit(x), whereas the speaker's representation of the hearer's beliefs will be b2:rabbit(x). This replaces Kamp and Reyle's (1993) use of different discourse referents, where a new discourse referent is introduced every time the same object or individual is referred to in a new sentence (e.g. rabbit(x), then rabbit(y)). The aim is to avoid having to overtly apply the x=y rule every time the same rabbit is referred to. The principles behind the equation predicate are still in place; i.e. every time the rabbit is referred to, it is bound to the rabbit already in the context. However, we bind it to the previous properties of the rabbit already in context by attaching it to the same discourse referent, rabbit(x).</Paragraph> <Paragraph position="8"> Both Kamp and Reyle's representation and ours face revision when it transpires that the agents in dialogue have different referents in mind. For example, both the speaker and hearer might be talking about 'rabbit'; however, they might each have a different rabbit in mind and assume the other participant is thinking of the same one. The speaker might have a grey rabbit in mind, whereas the hearer has a white rabbit in mind. In this case, Kamp and Reyle's revision would consist of deleting the x=y predicate, and any previous equation predicate that may have been introduced each time the rabbit was referred to. 
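The referent-reuse scheme just described, together with the revision step applied when the agents turn out to have different referents in mind, can be sketched as follows. This Python illustration uses our own function and variable names, under the assumption that conditions are stored as labelled (predicate, referent) pairs:

```python
# Sketch of reusing one discourse referent across mentions, and of the
# revision step when the other agent has a different referent in mind.
# All names here are illustrative.

speaker_beliefs = {'b1': ('rabbit', 'x')}  # speaker's own representation
hearer_model = {'b2': ('rabbit', 'x')}     # speaker's model of the hearer

def mention(drs, label, predicate, context):
    """Bind a new mention of `predicate` to the referent already in
    context, instead of introducing a fresh referent plus an x=y condition."""
    for _lbl, (pred, ref) in context.items():
        if pred == predicate:
            drs[label] = (predicate, ref)  # reuse the same referent
            return ref
    fresh = 'x' + str(len(context) + 1)    # no antecedent: new referent
    drs[label] = (predicate, fresh)
    return fresh

def revise(drs, label, new_ref):
    """Give the other agent's representation its own referent,
    e.g. b2:rabbit(x) becomes b2:rabbit(y)."""
    pred, _old = drs[label]
    drs[label] = (pred, new_ref)
```

On this scheme, revision touches only the one condition in the model of the other agent, rather than deleting a chain of accumulated equation predicates.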
In our representation, the revision takes place by changing the other agent's discourse referent: b2:rabbit(x) becomes b2:rabbit(y).</Paragraph> <Paragraph position="9"> Furthermore, the pragmatic extensions to standard DRT described above have been implemented to approximate a computational model of communication and to test whether the extended DRT works logically. The implementation relates the linguistic content of utterances to the beliefs and intentions of the agents. It operates on a specific dialogue, which can be modified, within a restricted domain. It seems reasonable to conclude, on the basis of the implementation, that the conceptual and formal proposals made here provide a basis for further development.</Paragraph> </Section> </Section> </Paper>