File Information
File: 05-lr/acl_arc_1_sum/cleansed_text/xml_by_section/metho/00/w00-1012_metho.xml
Size: 32,823 bytes
Last Modified: 2025-10-06 14:07:24
<?xml version="1.0" standalone="yes"?> <Paper uid="W00-1012"> <Title>Dialogue Management in the Agreement Negotiation Process: A Model that Involves Natural Reasoning</Title> <Section position="2" start_page="102" end_page="102" type="metho"> <SectionTitle> 1 Model of Conversation Agent </SectionTitle> <Paragraph position="0"> In our model a conversation agent, A, is a program that consists of 6 (interacting) modules: A = (PL, PS, DM, INT, GEN, LP), where PL - planner, PS - problem solver, DM dialogue manager, INT - interpreter, GEN generator, LP - linguistic processor. PL directs the work of both DM and PS, where DM controls communication process and PS solves domain-related tasks. The task of INT is to make semantic analysis of partner's utterances and that of GEN is to generate semantic representations of agent's own contributions. LP carries out linguistic analysis and generation.</Paragraph> <Paragraph position="1"> Conversation agent uses in its work goal base GB and knowledge base KB. In our model, KB consists of 4 components: KB = (KBw, KBL, KBD, KBs), where KBw contains world knowledge, KBL linguistic knowledge, KBD - knowledge about dialogue and KBs - knowledge about interacting subjects. For instance, KBD contains definitions of communicative acts, turns and transactions (declarative knowledge), and algorithms that are applied to reach communicative goals communicative strategies and tactics (procedural knowledge); KBs contains knowledge about evaluative dispositions of participants towards the world (e.g. what do they consider as pleasant or unpleasant, useful or harmful), and, on the other hand, algorithms that are used to generate plans for acting on the world.</Paragraph> <Paragraph position="2"> A necessary precondition of a communicative interaction is existence of shared (mutual) knowledge of interacting agents. This concerns goal bases as well as all types of knowledge bases; the intersections of the corresponding bases of interacting agents A and B cannot be empty: GB g n GB a ~:~, KBAw n KBBw ~, KBAL n KBBL ~O, KI3AD n KBBD ~, KB ABS KBBs :.7~:~, KBBAs (&quot;h KBAs -~:~.</Paragraph> <Paragraph position="3"> In this paper we will consider a specific type of dialogue where the communicative goal of agent A is to get agent B to agree to carry out an action D - so-called agreement negotiation dialogue. We will concentrate here on dialogue management in such kind of interaction, i.e. on the functioning of the module DM.</Paragraph> </Section> <Section position="3" start_page="102" end_page="107" type="metho"> <SectionTitle> 2 Dialogue Management 2.1 Reasoning Model </SectionTitle> <Paragraph position="0"> A dialogue participant chooses his/her responses to the parter's communicative acts as a result of certain reasoning process. After A has made B a proposal to do D, B can respond with agreement or rejection, depending on the result of his/her reasoning.</Paragraph> <Paragraph position="1"> Because we consider the model of natural human reasoning as one of the important components in attaining naturalness of dialogue as a whole, we will discuss our model of reasoning in some detail. From the point of view of practical NLP the approach we will present below may seem too abstract. But without solid theoretical basis it will appear impossible to guarantee naturalness of dialogues carried out by computers with human users. 
We think that the model we describe here can be taken as a basis for the corresponding discussion.</Paragraph> <Paragraph position="2"> Our model is not based on any scientific theory of how human reasoning proceeds; our aim is to model a &quot;naive theory of reasoning&quot; which humans follow in everyday life when trying to understand, predict and influence other persons' decisions and behavior; see Koit and Õim (2000). The reasoning model consists of two functionally linked parts: 1) a model of the human motivational sphere; 2) reasoning schemes.</Paragraph> <Paragraph position="3"> In the motivational sphere, three basic factors that regulate the reasoning of a subject concerning D are differentiated. First, a subject may wish to do D if the pleasant aspects of D for him/her outweigh the unpleasant ones; second, a subject may find it reasonable to do D if D is needed to reach some higher goal and the useful aspects of D outweigh the harmful ones; and third, a subject can be in a situation where he/she must (is obliged to) do D - if not doing D will lead to some kind of punishment. We call these factors wish-, needed- and must-factors, respectively.</Paragraph> <Paragraph position="4"> For instance, in reasoning about some action D (e.g. one proposed by another agent), an agent as an individual subject typically starts by checking his/her wish-factor, i.e. whether D's pleasant aspects outweigh the unpleasant ones. If this holds, then the subject checks his/her resources, and if these exist, proceeds to the other positive and negative aspects of D: its usefulness and harmfulness, and, if D is prohibited, then also the possible punishment(s). If the positive aspects in sum outweigh the negative ones, the resulting decision will be to do D, otherwise - not to do D.</Paragraph> <Paragraph position="5"> There can exist other typical situations. If the agent is an &quot;official&quot; person, or a group of subjects formed to fulfil certain tasks and/or to pursue certain pre-established goal(s), then typically the starting point of reasoning is the needed- and/or must-factor.</Paragraph> <Paragraph position="6"> This means that there exist certain general principles that determine how the reasoning process proceeds. These principles depend, in part, on the type of the reasoning agent. Before starting to construct a concrete reasoning model, the types of agents involved should be established. In our implementation the agent is supposed to be a &quot;simple&quot; human being and the actions under consideration are from everyday life. In this case, as examples of such principles used in our model, we can present the following ones. For more details, see Õim (1996).</Paragraph> <Paragraph position="7"> P1. People prefer pleasant (more pleasant) states to unpleasant (less pleasant) ones.</Paragraph> <Paragraph position="8"> P2. People don't take an action of which they don't assume that its consequence will be a pleasant (useful) situation, or the avoidance of an unpleasant (harmful) situation.</Paragraph> <Paragraph position="9"> The following principles illustrate more concrete (operational) rules.</Paragraph> <Paragraph position="10"> P3. In assessing an action D the values of the internal (wish- and needed-) factors are checked before the external (must-) factors.</Paragraph> <Paragraph position="11"> P4. If D is found pleasant enough (i.e.
D's pleasant aspects outweigh the unpleasant ones), then the needed- and must-factors will first be checked from the point of view of their negative aspects (&quot;to what harmful consequences or punishments would D lead?&quot;).</Paragraph> <Paragraph position="12"> The rule P4 explains, for example, why in Figure 1 step 1 is immediately followed by step 2.</Paragraph> <Paragraph position="13"> The weights of the different aspects of D (pleasantness, unpleasantness, usefulness, harmfulness, punishment for doing a prohibited action or for not doing an obligatory action) must be summed up in some way. Thus, in a computational model the weights must have numerical values. In reality people do not operate with numbers but, rather, with some fuzzy sets. On the other hand, the existence of certain scales also in human everyday reasoning is apparent.</Paragraph> <Paragraph position="14"> For instance, for the characterisation of the pleasant and unpleasant aspects of some action there are specific words: enticing, delightful, enjoyable, attractive, acceptable, unattractive, displeasing, repulsive, etc. Each of these adjectives can be expressed quantitatively. This presupposes empirical studies, though.</Paragraph> <Paragraph position="15"> We have represented the model of the motivational sphere by the following vector of weights: w^A = (w(resources^A_D1), w(pleas^A_D1), w(unpleas^A_D1), w(use^A_D1), w(harm^A_D1), w(obligatory^A_D1), w(prohibited^A_D1), w(punish^A_D1), w(punish^A_not-D1), ..., w(resources^A_Dn), w(pleas^A_Dn), w(unpleas^A_Dn), w(use^A_Dn), w(harm^A_Dn), w(obligatory^A_Dn), w(prohibited^A_Dn), w(punish^A_Dn), w(punish^A_not-Dn)).</Paragraph> <Paragraph position="16"> Here D1, ..., Dn represent human actions; w(resources^A_Di) = 1 if A has the resources necessary to do Di (otherwise 0); w(obligatory^A_Di) = 1 if Di is obligatory for A (otherwise 0); w(prohibited^A_Di) = 1 if Di is prohibited for A (otherwise 0). The values of the other weights are non-negative natural numbers.</Paragraph> <Paragraph position="17"> The second part of the reasoning model consists of reasoning schemes that supposedly regulate human action-oriented reasoning. A reasoning scheme represents the steps that the agent goes through in his reasoning process; these consist in computing and comparing the weights of the different aspects of D, and the result is the decision to do or not to do D.</Paragraph> <Paragraph position="18"> Figure 1 presents the reasoning scheme that departs from the wish of a subject to do D.</Paragraph> <Paragraph position="19"> The scheme also illustrates one of the general principles referred to above. It explains the order in which the steps are taken by the reasoning agent: if a subject is in a state where he/she wishes to do D, then he/she checks first the harmful/useful aspects of D, and after this proceeds to the aspects connected with possible punishments.</Paragraph> <Paragraph position="20"> [Figure 1. Reasoning scheme that departs from the wish of a subject to do D. Only fragments of the step list are recoverable here: a resource check (&quot;Are there enough resources? If yes then to do D else not to do D&quot;), a prohibition check (&quot;Is D prohibited? If not then to do D&quot;), and the final decisions &quot;to do D / not to do D&quot;.]</Paragraph> <Paragraph position="25"> The prerequisite for triggering this reasoning procedure is w(pleas) > w(unpleas), which is based on the following assumption: if a person wishes to do something, then he/she assumes that the pleasant aspects of D (including its consequences) outweigh its unpleasant aspects. The same kinds of reasoning schemes are constructed for the needed- and must-factors.
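Since only fragments of the Figure 1 scheme survive in this version of the text, the sketch below is a Python paraphrase of the wish-triggered procedure as it is described in the prose: trigger w(pleas) > w(unpleas), then a resource check, then the useful/harmful aspects, then prohibition and punishment, then a summary comparison. The function name, the dictionary keys and the exact arithmetic of the summary step are our own illustrative choices, not the authors' implementation.

```python
def wish_reasoning(w: dict) -> bool:
    """Decide whether to do D, starting from the wish-factor (cf. Figure 1).

    `w` maps aspect names to weights, e.g. 'pleas' for w(pleas),
    'punish_d' for w(punish_D) and 'punish_not_d' for w(punish_not-D).
    """
    # Trigger: the subject wishes to do D only if its pleasant aspects outweigh the unpleasant ones.
    if w['pleas'] <= w['unpleas']:
        return False
    # Without the necessary resources the wish alone does not lead to doing D.
    if not w['resources']:
        return False
    # Following principle P4, the negative sides of the needed- and must-factors
    # are inspected next: harm, and punishment if D is prohibited.
    negative = w['unpleas'] + w['harm']
    positive = w['pleas'] + w['use']
    if w['prohibited']:
        negative += w['punish_d']        # punishment for doing a prohibited D
    if w['obligatory']:
        positive += w['punish_not_d']    # punishment avoided by doing an obligatory D
    # Final step: do D if the positive aspects in sum outweigh the negative ones.
    return positive > negative


# Example: a subject who finds D clearly pleasant and only mildly risky decides to do it.
w_example = {'resources': 1, 'pleas': 9, 'unpleas': 4, 'use': 5, 'harm': 2,
             'obligatory': 0, 'prohibited': 1, 'punish_d': 3, 'punish_not_d': 0}
print(wish_reasoning(w_example))   # True
```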
The reasoning model is connected with the general model of the conversation agent in the following way. First, the planner PL makes use of the reasoning schemes and, second, KB_S contains the vector w^A (A's subjective evaluations of all possible actions) as well as the vectors w^AB (A's beliefs concerning B's evaluations, where B denotes the agents A may communicate with). The vector w^AB does not represent truthful knowledge; it is used as a partner model.</Paragraph> <Paragraph position="26"> When comparing our model with the BDI model, beliefs are represented by knowledge of the conversation agent with reliability less than 1; desires are generated by the vector of weights w^A; and intentions correspond to goals in GB. In addition to desires, from the weights vector we can also derive some parameters of the motivational sphere that are not explicitly covered by the basic BDI model: needs, obligations and prohibitions. Some wishes or needs can be stronger than others: if w(pleas^A_Di) - w(unpleas^A_Di) > w(pleas^A_Dj) - w(unpleas^A_Dj), then the wish to do Di is stronger than the wish to do Dj. In the same way, some obligations (prohibitions) can be stronger than others, depending on the weight of the corresponding punishment. It should be mentioned that adding obligations to the standard BDI model is not new. Traum and Allen (1994) show how discourse obligations can be used to account in a natural manner for the connection between a question and its answer in dialogue, and how obligations can be used along with other parts of the discourse context to extend the coverage of a dialogue system.</Paragraph> <Section position="1" start_page="104" end_page="106" type="sub_section"> <SectionTitle> 2.2 Communicative Strategies and Tactics </SectionTitle> <Paragraph position="0"> The knowledge about dialogue, KB_D, which is used by the dialogue manager, consists of two functional parts: knowledge of the regularities of dialogue, and rules for constructing and combining speech acts.</Paragraph> <Paragraph position="1"> The top-level concept of dialogue rules in our model is the communicative strategy. This concept is reserved for such basic communication types as information exchange, directive dialogue, phatic communication, etc. On the more concrete level, the conversation agent can realise a communicative strategy by means of several communicative tactics; this concept more closely corresponds to the concept of communicative strategy as used in some other approaches, see e.g. Jokinen (1996). In the case of directive communication (which is the strategy we are interested in) the agent A can use the tactics of enticing, persuading and threatening. In the case of enticing, A stresses the pleasant aspects of D for B, and in the case of persuading - its useful aspects; in the case of ordering, A addresses the obligations of B, and in the case of threatening, A explicitly refers to the possible punishment for not doing D.</Paragraph> <Paragraph position="2"> Which one of these tactics A chooses depends on several factors.
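One such factor, spelled out in Section 3 below, is A's model of B's reasoning: a tactic is only worth choosing if the reasoning scheme tied to its factor, run on the partner model w^AB, yields the decision 'to do D'. The fragment below sketches this selection under strong simplifying assumptions: all function names are illustrative, and the needed- and must-based triggers are our own guesses rather than the schemes actually used in the paper (the wish-based scheme is sketched more fully in Section 2.1 above).

```python
def balance(w: dict) -> int:
    """Simplified summary comparison of the positive and negative aspects of D."""
    positive = w['pleas'] + w['use'] + (w['punish_not_d'] if w['obligatory'] else 0)
    negative = w['unpleas'] + w['harm'] + (w['punish_d'] if w['prohibited'] else 0)
    return positive - negative

# Stand-ins for the three reasoning schemes; only the triggering conditions differ.
def wish_decides_to_do(w):   return w['pleas'] > w['unpleas'] and balance(w) > 0
def needed_decides_to_do(w): return w['use'] > w['harm'] and balance(w) > 0
def must_decides_to_do(w):   return bool(w['obligatory']) and balance(w) > 0

def choose_tactic(partner_model_for_D: dict):
    """Try the factors in the order wish -> needed -> must (as in Section 3) and
    return the tactic tied to the first scheme that yields 'do D', or None if A
    should abandon its communicative goal."""
    for decides, tactic in [(wish_decides_to_do, 'enticement'),
                            (needed_decides_to_do, 'persuasion'),
                            (must_decides_to_do, 'threatening')]:
        if decides(partner_model_for_D):
            return tactic
    return None
```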
There is one relevant aspect of human-human communication which is relatively well studied in the pragmatics of human communication and which we have included in our model as the concept of communicative space.</Paragraph> <Paragraph position="3"> Communicative space is defined by a number of coordinates that characterise the relationships of the participants in a communicative encounter.</Paragraph> <Paragraph position="4"> Communication can be collaborative or confrontational, personal or impersonal; it can be characterised by the social distance between the participants, by the modality (friendly, ironic, hostile, etc.) and by the intensity (peaceful, vehement, etc.). Just as in the case of the motivations of human behaviour, people have an intuitive, &quot;naive theory&quot; of these coordinates. This constitutes a part of the social conceptualisation of communication, and it also should not be ignored in serious attempts to model natural communication in NLP systems.</Paragraph> <Paragraph position="5"> In our model the choice of a communicative tactic depends on the &quot;point&quot; of the communicative space at which the participants place themselves. The values of the coordinates are again given in the form of numerical values. The communicative strategy can be presented as an algorithm (Figure 2).</Paragraph> <Paragraph position="6"> Figure 3 presents the tactic of enticement.</Paragraph> <Paragraph position="7"> In our model there are three different communicative tactics that A can use within the frame of the directive communicative strategy: those of enticement, persuasion and threatening. Each communicative tactic constitutes a procedure for compiling a turn in the ongoing dialogue.</Paragraph> <Paragraph position="8"> 1) Choose the communicative tactic.</Paragraph> <Paragraph position="9"> 2) Implement the tactic to generate an expression (inform the partner of the communicative goal).</Paragraph> <Paragraph position="10"> 3) Did the partner agree to do D? If yes then finish (the communicative goal has been reached).</Paragraph> <Paragraph position="11"> 4) Give up? If yes then finish (the communicative goal has not been reached).</Paragraph> <Paragraph position="12"> 5) Change the communicative tactic? If yes then choose the new tactic.</Paragraph> <Paragraph position="13"> 6) Implement the tactic to generate an expression. Go to step 3. (Figure 2. The communicative strategy of the initiator of the communication.)</Paragraph> <Paragraph position="14"> 1) If w^B(resources) = 0 then present a counterargument in order to point at the presence of possible resources or at the possibility to gain them.</Paragraph> <Paragraph position="15"> 2) If w^B(harm) > w^AB(harm) then present a counterargument in order to downgrade the value of the harm.</Paragraph> <Paragraph position="16"> 3) If w^B(obligatory) = 1 & w^B(punish_D)
< w^B(punish_not-D) then present a counterargument in order to decrease the weight of the punishment.</Paragraph> <Paragraph position="17"> 4) If w^B(prohibited) = 1 & w^B(punish_D) > w^B(punish_not-D) then present a counterargument in order to downgrade the weight of the punishment.</Paragraph> <Paragraph position="18"> 5) If w^B(unpleas) > w^AB(unpleas) then present a counterargument in order to downgrade the value of the unpleasant aspects of D.</Paragraph> <Paragraph position="19"> 6) Present a counterargument in order to stress the pleasant aspects of D. (Figure 3. The tactic of enticement.) The tactic of enticement consists in increasing B's wish to do D; the tactic of persuasion consists in increasing B's belief in the usefulness of D for him/her; and the tactic of threatening consists in increasing B's understanding that he/she must do D.</Paragraph> <Paragraph position="20"> Communicative tactics are directly related to the reasoning process of the partner. If A is applying the tactic of enticement, he/she should be able to imagine the reasoning process in B that is triggered by the input parameter wish. If B refuses to do D, then A should be able to guess at which point the reasoning of B went into the &quot;negative branch&quot;, in order to adequately construct his/her reactive turn.</Paragraph> <Paragraph position="21"> Analogously, the tactic of persuasion is related to the reasoning process triggered by the needed-parameter, and the threatening tactic is related to the reasoning process triggered by the must-parameter. For more details see, for example, Koit (1996), Koit and Õim (1998), Koit and Õim (1999).</Paragraph> <Paragraph position="22"> Thus, in order to model various communicative tactics, one must know how to model the process of reasoning.</Paragraph> </Section> <Section position="2" start_page="106" end_page="106" type="sub_section"> <SectionTitle> 2.3 Speech Acts </SectionTitle> <Paragraph position="0"> The minimal communicative unit in our model is the speech act (SA). In the implementation we make use of a limited number of SAs, the representational formalism of which is frames.</Paragraph> <Paragraph position="1"> Figure 4 presents the frame of the SA Proposal in the context of co-operative interaction. Other SAs are represented in the same form. Each SA contains a static (declarative) and a dynamic (procedural) part. The static part consists of preconditions, goal, content (immediate act) and consequences. The dynamic part is made up of two kinds of procedures: 1) those that the author of the SA applies in the generation of a communicative turn that contains the given SA; 2) those that the addressee applies in the process of response generation.</Paragraph> <Paragraph position="2"> As one can see, such a two-part representation also contains rules for combining SAs in a turn and, on the other hand, guarantees the coherence of turn-taking: once the initiating SAs (such as Question or Proposal) have been tagged in KB_D, the following chain of SAs follows from the interpretation-generation procedures as applied by the participants.</Paragraph> <Paragraph position="3"> PROPOSAL (author A, recipient B; A proposes B to do an action D) I.
Static part. SETTING: (1) A has a goal G; (2) A believes that B in the same way has the goal G; (3) A believes that in order to reach G an instrumental goal Gi should be reached; (4) A believes that B in the same way believes that in order to reach G an instrumental goal Gi should be reached; (5) A believes that to attain the goal Gi, B has to do D; (6) A believes that B has resources for doing D; (7) A believes that B will decide to do D. GOAL: B decides to do D. CONTENT: A informs B that he/she wishes B to do D. CONSEQUENCES: (1) B knows the SETTING, GOAL and CONTENT; (2) A knows that B knows the SETTING, GOAL and CONTENT. II. Dynamic part (A's possibilities to build his/her turn that contains Proposal as the dominant SA).</Paragraph> <Paragraph position="4"> A has Goal G; A believes that B also has Goal G; A believes that in order to reach G, Gi should be reached; A has decided to formulate this as a Proposal to B to do D.</Paragraph> <Paragraph position="5"> A's procedures (for constructing the turn) consist in checking whether the preconditions of the Proposal hold and in making decisions about the information to be added to the turn: - in case of (2): is G actualised in B? If not, then actualise it by adding the SA Inform; - in case of (4): does B believe that in order to reach G, Gi should be reached first? If not, then add the SA Explanation (Argument); - in case of (6): if A is not sure that B has resources for D, then add Question; - in case of (7): if A is not sure that B will agree to do D (for this A should model B's reasoning), then add Argument.</Paragraph> <Paragraph position="1"> B's procedures for response generation (B's possibilities to react to the Proposal) are started after B has recognised the SA Proposal: - in case of (2), (4), (5): if B does not have Goal G and/or he/she does not have the corresponding beliefs and A has not provided the needed additional information, then add Question (ask for additional information); - in case of (6): if B does not have resources for D, then Reject + Argument; - in case of (7): if the decision of B to do D (as the result of the application of the reasoning scheme(s)) is negative, then Reject + Argument. (Figure 4. The frame of the SA Proposal.) Such a representation does not guarantee the coherence of dialogic encounters (transactions) on a more general level. For instance, it does not cover such phenomena as topic change or inadequate responses caused by misunderstandings, nor, more importantly, various kinds of initiative overtaking. For instance, after rejecting the Proposal made by A, B can, in addition to explaining the rejection by an Argument, initiate various &quot;compensatory&quot; communicative activities. Such things are normal in human co-operative interaction and they are regulated by general pragmatic principles that require from participants, in addition to being co-operative and informative, also being considerate and helpful. In our case this means that KB_D should also include general-level dialogue scenarios (in the form of a graph) and formalisations of the mentioned pragmatic principles; for an example of the latter, see Jokinen (1996).</Paragraph> </Section> </Section> <Section position="4" start_page="107" end_page="108" type="metho"> <SectionTitle> 3 Process of Dialogue </SectionTitle> <Paragraph position="0"> Let us describe the case where both A and B are intelligent agents, i.e. computer programs.</Paragraph> <Paragraph position="1"> 1.
A constructs a) the frame exemplar of D, putting into it all the relevant information A has about D; b) the model of partner B, putting into it all the relevant information it has about B's evaluations concerning the contents of the slots in D's frame.</Paragraph> <Paragraph position="1"> 2. A chooses the point in the communicative space from which it intends to start the interaction. 3. A starts to apply the communicative strategy. A models B's reasoning process, using B's model. First A applies the reasoning scheme based on the wish of B. If it results in 'to do D', then A actualises the tactic of enticing and generates its first turn, which contains a frame exemplar of Proposal. If the modelled reasoning results in 'not to do D', then A tries the reasoning which starts from the needed-factor and then the one triggered by the must-factor, and according to the result actualises the tactic of persuading or threatening, and generates the first utterance. If the application of all the reasoning schemes results in 'not to do D', then A abandons its goal.</Paragraph> <Paragraph position="3"> 4. B interprets A's turn and recognises the Proposal in it. B constructs its own exemplar representation of D (this may not coincide with that of A). B starts reasoning, in the course of which it may need additional information from A. On the basis of the frame of Proposal, B formulates the result of the reasoning as its response turn: yes/no + (maybe) Argument.</Paragraph> <Paragraph position="4"> 5. A interprets B's answer and determines which point in the dialogue scenario this corresponds to. If B's answer was positive (a decision to do D), then according to the communicative strategy the encounter has come to its successful end. If B's answer is negative, then according to the dialogue scenario A must formulate a (counter-)Argument. The communicative strategy also allows A to choose a new point in the communicative space and/or a new tactic.</Paragraph> <Paragraph position="5"> To formulate the counterargument, A uses information from the exemplar of D (it may be updated on the basis of B's negative answer) and its model of B (which it had to change because of B's negative answer). A models B's reasoning anew, i.e. the process is repeated cyclically.</Paragraph> </Section> <Section position="5" start_page="108" end_page="108" type="metho"> <SectionTitle> 4 Dialogue examples </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="108" end_page="108" type="sub_section"> <SectionTitle> 4.1 Example 1 </SectionTitle> <Paragraph position="0"> The example represents a dialogue where the computer plays A's role and is implementing the tactic of enticement. The user implements a simple tactic: pointing out the low pleasantness and the unpleasantness of the action.</Paragraph> <Paragraph position="1"> Let us suppose that the action D is &quot;travel to Venice and conclude a contract there&quot;. The computer has chosen the tactic of enticement and has generated the following user model:</Paragraph> <Paragraph position="3"> The reasoning procedure WISH on this model yields a positive decision. The following dialogue was generated (translated from Estonian): C(omputer): Do you want to travel to Venice? Our firm needs to conclude a contract there.</Paragraph> <Paragraph position="4"> The computer informed the user about its communicative goal.</Paragraph> <Paragraph position="5"> U(ser): It is not interesting for me.</Paragraph> <Paragraph position="6"> The user pointed out the low pleasantness of the action.
The computer must correct w^AB(pleas) in the user model. The new value of w^AB(pleas) will be 8.</Paragraph> <Paragraph position="7"> C: You can meet interesting people.</Paragraph> <Paragraph position="8"> The computer chose its reply from the class of expressions for increasing pleasantness. After this turn the value of w^AB(pleas) will be 9.</Paragraph> <Paragraph position="9"> U: What's so pleasant about this trip? The user pointed out the low pleasantness once more. The corrected value of w^AB(pleas) will be 8.</Paragraph> <Paragraph position="10"> C: There are many nice men in Italy.</Paragraph> <Paragraph position="11"> After this turn w^AB(pleas) = 9.</Paragraph> <Paragraph position="12"> U: This trip is too intense for me.</Paragraph> <Paragraph position="13"> The user indicated the unpleasantness of D. The corrected value of w^AB(unpleas) will be 5.</Paragraph> <Paragraph position="14"> C: The company will pay all your expenses.</Paragraph> <Paragraph position="15"> The computer chose its reply from the class of expressions for decreasing unpleasantness and supposes that after its reply the value of w^AB(unpleas) will be 4.</Paragraph> <Paragraph position="16"> U: I don't like to travel.</Paragraph> <Paragraph position="17"> The user indicated the unpleasantness of the action once more. Thus, the new value of w^AB(unpleas) will be 5.</Paragraph> <Paragraph position="18"> C: You can take a plane.</Paragraph> <Paragraph position="19"> The computer decreased the unpleasantness of D once more. The new value of w^AB(unpleas) is 4. U: You are right - I shall travel.</Paragraph> <Paragraph position="20"> The user agreed to do D; the communicative goal of the computer has been achieved.</Paragraph> <Paragraph position="21"> C: I am glad.</Paragraph> </Section> </Section> <Section position="6" start_page="108" end_page="109" type="metho"> <SectionTitle> 4.2 Example 2 </SectionTitle> <Paragraph position="0"> The example represents a dialogue where the computer is implementing the tactic of threatening. The user implements the same tactic as before: pointing out the low pleasantness and the unpleasantness of the action.</Paragraph> <Paragraph position="1"> The action D is &quot;travel to Venice and conclude a contract there&quot; as before. The computer has chosen the tactic of threatening and has generated the following user model:</Paragraph> <Paragraph position="3"> w^AB(obligatory) = 1, w^AB(prohibited) = 0, w^AB(punish_D) = 0, w^AB(punish_not-D) = 8.</Paragraph> <Paragraph position="4"> The reasoning procedure MUST on this model yields a positive decision; thus threatening is possible. The following dialogue was generated: C: You must travel to Venice. Our firm needs to conclude a contract there.</Paragraph> <Paragraph position="5"> U: It is not interesting for me.</Paragraph> <Paragraph position="6"> The user pointed out the low pleasantness of the action. The computer must correct w^AB(pleas) in the user model. The new value of w^AB(pleas) will be 0.</Paragraph> <Paragraph position="7"> C: There are many people interested in getting your job.</Paragraph> <Paragraph position="8"> Using the tactic of threatening, the computer chose its reply from the class of expressions for increasing the punishment for not doing D. After this turn the value of w^AB(punish_not-D) will be 9. U: What's so pleasant about this trip?</Paragraph> <Paragraph position="9"> The user pointed out the low pleasantness once more.</Paragraph> <Paragraph position="10"> C: Refusing will be harmful for your reputation.
After this turn w^AB(punish_not-D) = 10.</Paragraph> <Paragraph position="11"> U: This trip is too intense for me.</Paragraph> <Paragraph position="12"> The user indicated the unpleasantness of D. The corrected value of w^AB(unpleas) will be 14. C: If you have money you can travel by plane. The computer chose its reply from the class of expressions for decreasing unpleasantness. The value of w^AB(unpleas) will be 13.</Paragraph> <Paragraph position="13"> U: I don't like to travel. The user indicated the unpleasantness of the action once more.</Paragraph> <Paragraph position="14"> The reasoning procedure MUST on the user model will now give a negative decision; thus threatening is impossible. The computer cannot choose a new tactic because the reasoning procedures WISH and NEEDED will also give a negative decision on the user model. The computer must give up.</Paragraph> <Paragraph position="15"> C: I am sorry.</Paragraph> </Section> <Section position="7" start_page="109" end_page="109" type="metho"> <SectionTitle> Conclusion </SectionTitle> <Paragraph position="0"> At present there exists an implemented program which can play the role of both A and B in a simple communication situation where the goal of A is that B would decide to do D. At the moment the computer operates with semantic representations of the linguistic input/output only; the surface linguistic part of the interaction is provided in the form of a list of possible utterances. The work on the linguistic processor is in progress.</Paragraph> <Paragraph position="1"> We have deliberately concentrated on modelling the processes of reasoning of conversation agents, as these processes form the heart of the &quot;cognitive&quot; part of human communication, and on modelling the use of communicative strategies and tactics, which constitute the &quot;social&quot; part of communication.</Paragraph> <Paragraph position="2"> Although the concepts and models we have reported in the paper may seem too abstract from the point of view of practical NLP, we are convinced that without serious study and modelling of the cognitive and social aspects of human communication it will prove impossible to guarantee the naturalness of dialogues carried out by a computer system with a human user.</Paragraph> <Paragraph position="3"> As we have so far mostly dealt with agreement negotiation dialogues, we have planned, as one of the practical applications, the use of the system as a participant in communication training sessions. Here the system can, for instance, establish certain restrictions on argument types, on the order of the use of arguments and counterarguments, etc.</Paragraph> <Paragraph position="4"> Second, using our experience in modelling the cognitive and social aspects of dialogue, we have started to work on modelling information-seeking dialogues along the same lines. This type of dialogue will clearly be an area where, in the next few years, systems will be required that are practically reliable but at the same time follow the rules of natural human communication.</Paragraph> </Section> </Paper>