<?xml version="1.0" standalone="yes"?> <Paper uid="C96-1056"> <Title>GRICE INCORPORATED Cooperativity in Spoken Dialogue</Title> <Section position="4" start_page="0" end_page="328" type="metho"> <SectionTitle> 2 Developing and Testing Principles of Cooperative Human-Machine Dialogue </SectionTitle> <Paragraph position="0"> The dialogue model for otu&quot; flight reservation system was developed by the Wizard of Oz (WOZ) experimental prototyping method in which a person simulates the system to be designed (Fraser and Gilbert 1991). Development was iterated until the dialogue model satisfied the design constraints on, i.a:, average user utterance length. The dialogues were recorded, transcribed, analysed and used as a basis for inaprovements oil the dialogue model. We perlormed seven WOZ iterations yielding a transcribed corpus of 125 task-oriented human-machine dialogues corresponding to approximately seven hours of spoken dialogue. The 94 dialogues that were recorded during the last two WOZ iterations were performed by external subjects whereas only system designers and colleagues had participated in the earlier iterations. A total of 24 different subjects were involved in the seven iterations. Dialogues were based on written descriptions of reservation tasks (scenarios).</Paragraph> <Paragraph position="1"> A major concern during WOZ was to detect problems of user-system interaction. We eventually used the following two approaches to systematically discover such problems: ( ) prior to each WOZ iteration we matched the scenarios to be used against the current dialogue model. The model was represented as a graph structure with system phrases in the nodes and expected contents of user answers along the edges. If a deviation from the graph occurred during the matching process, this would indicate a potential dia- null logue design problem which should be removed, il' possible. (ii) The recorded dialogues were plotted onto the graph representing the dialogue model. As in (i), graph deviations indicated potential dialogue design problems. Deviations were marked and their causes analysed whereupon the dialogue model was revised, if necessary.</Paragraph> <Paragraph position="2"> At the end of the WOZ design phase, we began a lllOl'e theoretical, forward-looking exercise. All the problenis of inleractioii uncovered dr!ring WOZ wore analysed and represented as violations of principles of cooperative dialogue. Each problem was considered a case in which the system, in addressing the user, iiad violated a principle of cooperative dialogue. The principles of cooperative dialogue were made explicit, based on the problems analysis. The WOZ corpus analysis led to the identification of 14 principles el: cooperative hun3an-machine dialogue (Section 4) based on analysis o1' 120 examples o1: user-systeill interaction problems. \[1: the principles were observed in the design of the system's dialogue behaviour, we assunled, this would serve to reduce the occurrence of user diah)gue behaviour that lhe sysiem had not been designed to handle.</Paragraph> </Section> <Section position="5" start_page="328" end_page="328" type="metho"> <SectionTitle> 3 Maxims and Principles of Cooperative </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="328" end_page="328" type="sub_section"> <SectionTitle> Dialogue </SectionTitle> <Paragraph position="0"> We had developed our principles of cooperative hunlan-nmchine dialogue indel)endently o1&quot; Gricean co-, operativity theory (Bernsen et al., 1996a). 
<Section position="5" start_page="328" end_page="328" type="metho"> <SectionTitle> 3 Maxims and Principles of Cooperative Dialogue </SectionTitle> <Paragraph position="0"> We had developed our principles of cooperative human-machine dialogue independently of Gricean cooperativity theory (Bernsen et al., 1996a). Prior to the user test (Section 5), we compared the principles with Grice's Cooperative Principle and maxims. In this process the principles achieved their current form as shown in Table 1. Their original expression is presented in Section 4. Grice's Cooperative Principle (CP) is a general principle which says that, to act cooperatively in conversation, one should make one's "conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which one is engaged" (Grice 1975). Grice proposes that the CP can be explicated in terms of four groups of simple maxims which are not claimed to be jointly exhaustive. The maxims are marked with an asterisk in Table 1.</Paragraph> <Paragraph position="1"> Grice focuses on dialogues in which the interlocutors want to achieve a shared goal (Grandy 1989, Sarangi and Slembrouck 1992). In such dialogues, he claims, adherence to the maxims is rational because it ensures that the interlocutors pursue the shared goal most efficiently. Task-oriented dialogue, such as that of our SLDS, is a paradigm case of shared-goal dialogue. Grice, however, did not develop the maxims with the purpose of preventing communication failure in shared-goal dialogue. Rather, his interest lies in the inferences which an interlocutor is able to make when the speaker deliberately violates one of the maxims. He calls such deliberate speaker's messages 'conversational implicatures'. Grice's maxims, although having been conceived for a different purpose, nevertheless serve the same objective as do our principles, namely that of achieving the dialogue goal as directly and smoothly as possible, e.g. by preventing questions of clarification. It is exactly when a human or, for that matter, an SLDS non-deliberately violates a maxim that dialogue clarification problems are likely to occur. Thus, the main difference between Grice's work and ours is that the maxims were developed to account for cooperativity in human-human dialogue, whereas our principles were developed to account for cooperativity in human-machine dialogue. Given the commonality of purpose, it becomes of interest to compare principles and maxims. We want to show that the principles include the maxims as a subset and thus provide a corpus-based confirmation of their validity for spoken human-machine dialogue. Moreover, the principles manifest aspects of cooperative task-oriented dialogue which were not addressed by Grice.</Paragraph> </Section> <Section position="6" start_page="328" end_page="331" type="metho"> <SectionTitle> 4 Comparison between Maxims and Principles </SectionTitle> <Paragraph position="0"> In this section we analyse the relationship between Grice's maxims and our principles of dialogue cooperativity. A first aim is to demonstrate that a subset of the principles are roughly equivalent to the maxims. We then argue that the remaining principles express additional aspects of cooperativity. The distinction between principle and aspect (Table 1) is theoretically important because an aspect represents the property of dialogue addressed by a particular maxim or principle.</Paragraph>
<Paragraph position="1"> One result of analysing the relationship between principles and maxims is the distinction, shown in Table 1, between generic and specific principles. Grice's maxims are all generic. A generic principle may subsume one or more specific principles which specialise the generic principle to certain classes of phenomena. Although important to SLDS design, specific principles may be less significant to a general account of dialogue cooperativity.</Paragraph> <Section position="2" start_page="328" end_page="330" type="sub_section"> <SectionTitle> 4.1 Principles which are Reducible to Maxims </SectionTitle> <Paragraph position="0"> Grice's maxims of truth and evidence (GP3, GP4) have no counterparts among our principles but may simply be included among the principles. The reason is that one does not design an SLDS in the domain of air ticket reservation which provides false or unfounded information to customers. In other words, the maxims of truth and evidence are so important to the design of SLDSs that they are unlikely to emerge during dialogue design problem-solving. During system implementation, one constantly worries about truth and evidence. It cannot be allowed, for instance, that the system confirms information which has not been checked with the database and which might be false or impossible. Grice (1975) observed the fundamental nature of the maxims of truth and evidence in general and GP3 in particular (cf. Searle 1992).</Paragraph>

Table 1. Principles of cooperative dialogue. Generic principles (GP) are expressed at the same level of generality as are the Gricean maxims (marked with an *). Each specific principle (SP) is subsumed by a generic principle. The left-hand column characterises the aspect of dialogue addressed by each principle.

Informativeness
  GP1* Make your contribution as informative as is required (for the current purposes of the exchange).
  SP1  Be fully explicit in communicating to users the commitments they have made.
  SP2  Provide feedback on each piece of information provided by the user.
  GP2* Do not make your contribution more informative than is required.

Truth and evidence
  GP3* Do not say what you believe to be false.
  GP4* Do not say that for which you lack adequate evidence.

Relevance
  GP5* Be relevant, i.e. be appropriate to the immediate needs at each stage of the transaction.

Manner
  GP6* Avoid obscurity of expression.
  GP7* Avoid ambiguity.
  SP3  Provide same formulation of the same question (or address) to users everywhere in the system's dialogue turns.
  GP8* Be brief (avoid unnecessary prolixity).
  GP9* Be orderly.

Partner asymmetry
  GP10 Inform the dialogue partners of important non-normal characteristics which they should take into account in order to behave cooperatively in dialogue.
  SP4  Provide clear and comprehensible communication of what the system can and cannot do.
  SP5  Provide clear and sufficient instructions to users on how to interact with the system.

Background knowledge
  GP11 Take partners' relevant background knowledge into account.
  SP6  Take into account possible (and possibly erroneous) user inferences by analogy from related task domains.
  SP7  Separate whenever possible between the needs of novice and expert users (user-adaptive dialogue).
  GP12 Take into account legitimate partner expectations as to your own background knowledge.
  SP8  Provide sufficient task domain knowledge and inference.

Repair and clarification
  GP13 Initiate repair or clarification meta-communication in case of communication failure.
  SP9  Provide ability to initiate repair if system understanding has failed.
  SP10 Initiate clarification meta-communication in case of inconsistent user input.
  SP11 Initiate clarification meta-communication in case of ambiguous user input.

<Paragraph position="1"> The following principles have counterparts among the maxims:</Paragraph> <Paragraph position="2"> 1. Avoid 'semantical noise' in addressing users. (1) is a generalised version of GP6 (non-obscurity) and GP7 (non-ambiguity). Its infelicitous expression was due to the fact that we wanted to cover observed ambiguity and related phenomena in one principle but failed to find an appropriate technical term for the purpose. (1) may, without any consequence other than improved clarity, be replaced by GP6 and GP7.</Paragraph> <Paragraph position="3"> 2. Avoid superfluous or redundant interactions with users (relative to their contextual needs). (2) is virtually equivalent to GP2 (do not overdo informativeness) and GP5 (relevance). Grice observed the overlap between GP2 and GP5 (Grice 1975). (2) may, without any consequence other than improved clarity, be replaced by GP2 and GP5.</Paragraph> <Paragraph position="4"> 3. It should be possible for users to fully exploit the system's task domain knowledge when they need it. (3) can be considered an application of GP1 (informativeness) and GP9 (orderliness), as follows. If the system adheres to GP1 and GP9, there is a maximum likelihood that users obtain the task domain knowledge they need from the system when they need it. The system should say enough and address the task-relevant dialogue topics in an order which is as close as possible to the order expected by users. If the user expects some topic to come up early in the dialogue, that topic's non-occurrence at its expected "place" may cause a clarification sub-dialogue which the system cannot understand. In WOZ Iteration 3, for instance, the system did not ask users about their interest in discount fare. Having expected the topic to come up for some time, users therefore began to inquire about discount when approaching the end of the reservation dialogue. (3) may be replaced by GP1 and GP9 without significant loss.</Paragraph> <Paragraph position="5"> 4. Reduce system talk as much as possible during individual dialogue turns. (4) is near-equivalent to GP8 (brevity).</Paragraph> <Paragraph position="6"> Summarising, the generic principles (1)-(4) may be replaced by maxims GP1, GP2 and GP5-GP9. These maxims are capable of performing the same task in guiding dialogue design. In fact, as argued, the maxims are able to do the better job because they, i.e. GP6 and GP7, and GP1 and GP9, respectively, spell out the intended contents of two of the principles.
This provides corpus-based confirmation of maxims GP1, GP2 and GP5-GP9, i.e. of their stating basic principles of cooperative, task-oriented human-machine dialogue. However, for dialogue design purposes, the maxims must be augmented by task-specific or domain-specific principles, such as the following.</Paragraph> <Paragraph position="7"> 5 (SP3). Provide same formulation of the same question (or address) to users everywhere in the system's dialogue turns. (5) represents an additional precaution against the occurrence of ambiguity in machine speech. It can be seen as a special-purpose application of GP7 (non-ambiguity).</Paragraph> <Paragraph position="8"> 6 (SP1). Be fully explicit in communicating to users the commitments they have made. 7 (SP2). Provide feedback on each piece of information provided by the user. These principles are closely related. The novel cooperativity aspect they introduce is that they require the cooperative speaker to produce a specific dialogue contribution which explicitly expresses an interpretation of the interlocutor's previous dialogue contribution(s), provided that the interlocutor has made a dialogue contribution of a certain type, such as a commitment to book a flight. We propose that these principles be subsumed by GP1 (informativeness).</Paragraph> </Section> <Section position="3" start_page="330" end_page="331" type="sub_section"> <SectionTitle> 4.2 Principles Lacking Equivalents among the Maxims </SectionTitle> <Paragraph position="0"> The principles discussed in this section appear irreducible to maxims and thus serve to augment the scope of a theory of cooperativity.</Paragraph> <SectionTitle> 4.2.1 Dialogue Partner Asymmetry </SectionTitle> <Paragraph position="1"> Dialogue partner asymmetry occurs, roughly, when one or more of the dialogue partners is not in a normal condition or situation. For instance, a dialogue partner may have a hearing deficiency or be located in a particularly noisy environment. In such cases, dialogue cooperativity depends on the taking into account of that participant's special characteristics. For obvious reasons, dialogue partner asymmetry is important in SLDS dialogue design. The machine is not a normal dialogue partner and users have to be aware of this if communication failure is to be avoided. The following two principles address dialogue partner asymmetry: 8 (SP4). Provide clear and comprehensible communication of what the system can and cannot do. 9 (SP5). Provide clear and sufficient instructions to users on how to interact with the system.</Paragraph> <Paragraph position="2"> Being limited in its task capabilities and intended for walk-up-and-use application, our SLDS needs to protect itself from unmanageable dialogue contributions by providing users with an up-front mental model of what it can and cannot do. If this mental model is too complex, users will not acquire it; and if the model is too simplistic, its remaining details must be provided elsewhere during dialogue. (8) adds an important element to the analysis of dialogue cooperativity by aiming at improving user cooperativity. It shows that, at least in human-machine dialogue, cooperativity is a formally more complex phenomenon than anticipated by Grice. In addition to principles stating how a speaker should behave, principles are needed according to which the speaker should consider transferring part of the responsibility for cooperation to the interlocutor.</Paragraph>
<Paragraph position="3"> (9) has a role similar to that of (8). The principles examined in this section introduce a new aspect of dialogue cooperativity, namely partner asymmetry and the speaker's consequent obligation to inform the partner(s) of non-normal speaker characteristics. Due to the latter, the principles cannot be subsumed by any other principle or maxim. We propose that (8) and (9) are both specific principles subsumed by a new generic principle: GP10. Inform the dialogue partners of important non-normal characteristics which they should take into account in order to behave cooperatively in dialogue.</Paragraph> <SectionTitle> 4.2.2 Background Knowledge </SectionTitle> <Paragraph position="4"> 10 (GP11). Take users' relevant background knowledge into account. GP11 is expressed at the level of generality of Grice's theory. The principle explicitly introduces two notions: the notion of interlocutors' background knowledge and that of possible differences in background knowledge between different user populations and individual users. GP11 appears to be presupposed by maxims GP1, GP2 and GP5-GP9 in the sense that it is not possible to adhere to any of these maxims without adhering to GP11. Moreover, in order to adhere to GP11, it is necessary for the speaker to recognise relevant differences among interlocutors and interlocutor groups in terms of background knowledge. Based on this recognition, a speaker either has already built, prior to the dialogue, or adaptively builds, during dialogue, a model of the interlocutor which serves to guide speaker cooperativity. Increased user adaptivity in this sense is an important goal in SLDS design (Bernsen et al. 1994).</Paragraph> <Paragraph position="5"> GP11 cannot be reduced to GP1 (informativeness) because, first, GP1 does not refer to the notions of background knowledge and differences in background knowledge among interlocutors. Second, a speaker may adhere perfectly to 'exchange purpose' (cf. GP1) while ignoring the interlocutor's background knowledge. For instance, in the user test a user wanted to order a one-way ticket at discount price. The system, however, knew that discount is only possible on return tickets. It therefore did not offer the discount option to this user, nor did it correct the user's misunderstanding. At the end of the dialogue, the frustrated user asked whether or not discount had been granted. Third, as argued above, GP11 is presupposed by maxims GP1, GP2 and GP5-GP9. Grice, however, does not argue that GP1 is presupposed by those maxims, whereas he does argue that GP3 (truth) and GP4 (evidence) are presupposed by them (Grice 1975). For similar reasons, GP5 (relevance) (Sperber and Wilson 1987) cannot replace GP11. Informativeness and relevance, therefore, are not only functions of the purpose(s) of the exchange of information but also of the knowledge of the interlocutor.</Paragraph> <Paragraph position="6"> 11 (SP8). Provide sufficient task domain knowledge and inference. (11) may appear trivial as supportive of the design of usable information service systems. However, designers of such systems are continuously confronted with questions about what the system should know and what is just within, or barely outside, the system's intended or expected domain of expertise. The system should behave as a perfect expert vis-à-vis its users within its declared domain of expertise; otherwise it is at fault. In WOZ Iteration 7, for instance, a subject expressed surprise at not having been offered the option of being put on a waiting list in a case in which a flight was already fully booked. We became aware of the problem during the post-experimental interview. However, the subject might just as well have asked a question during the dialogue. (A sketch of this kind of domain-completeness check is given below.)</Paragraph>
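As an illustration of SP8, here is a minimal sketch under our own assumptions: the flight record and message texts are invented, not the system's actual design. The point is that a system acting as a perfect expert within its declared domain volunteers the waiting-list option rather than leaving the user to discover the gap.

```python
# Hypothetical sketch of SP8 (sufficient task domain knowledge): when the
# requested flight is fully booked, a domain expert would volunteer the
# waiting-list option instead of failing silently, which is what surprised
# the subject in WOZ Iteration 7.

from dataclasses import dataclass

@dataclass
class Flight:
    number: str
    seats_free: int
    has_waiting_list: bool

def book(flight: Flight) -> str:
    if flight.seats_free > 0:
        return f"Flight {flight.number} is booked."
    if flight.has_waiting_list:
        return (f"Flight {flight.number} is fully booked. "
                "Do you want to be put on the waiting list?")
    return f"Flight {flight.number} is fully booked."

print(book(Flight(number="DX104", seats_free=0, has_waiting_list=True)))
```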
<Paragraph position="7"> Since (11) deals with the speaker's knowledge, it cannot be subsumed by GP11. We therefore propose to introduce a new generic principle which mirrors GP11: GP12. Take into account legitimate partner expectations as to your own background knowledge. (11), then, is a specific principle subsumed by GP12.</Paragraph> <Paragraph position="8"> 12 (SP6). Take into account possible (and possibly erroneous) user inferences by analogy from related task domains. (12) is a specific principle subsumed by GP11 (background knowledge). It was developed from examples of user misunderstandings of the system due to reasoning by analogy. For instance, the fact that it is possible to make reservations of stand-by tickets on international flights may lead users to conclude (erroneously) that this is also possible on domestic flights.</Paragraph> <Paragraph position="9"> 13 (SP7). Separate whenever possible between the needs of novice and expert users (user-adaptive dialogue). (13) is another specific principle subsumed by GP11. Interlocutors may belong to different populations with correspondingly different needs of information in cooperative dialogue. For instance, a user who has successfully used the dialogue system on several occasions no longer needs to be introduced to the system but is capable of launching on the ticket reservation task right away. A novice user, however, will need to listen to the system's introduction to itself. This distinction between the needs of expert and novice users was introduced in WOZ Iteration 7, when several users had complained that the system talked too much.</Paragraph> </Section> <Section position="4" start_page="331" end_page="331" type="sub_section"> <SectionTitle> 4.2.3 Meta-communication </SectionTitle> <Paragraph position="0"> Even if an SLDS is able to conduct a perfectly cooperative dialogue, it will need to initiate repair and clarification meta-communication whenever it has failed to understand the user, for instance because of speech recognition or language understanding failure: 14 (SP9). Provide ability to initiate repair if system understanding has failed.</Paragraph> <Paragraph position="1"> (14) states what the cooperative speaker should do in case of failure to understand utterances made by the interlocutor. Our system adheres to (14) in that it communicates its failure to understand what the user just said. (14) cannot be subsumed by GP1 (informativeness), which ignores communication failure.</Paragraph> <Paragraph position="2"> Together with the new specific principles from the user test, SP10 and SP11 (Section 5), (14) is a specific principle of human-machine dialogue which may be subsumed by: GP13. Initiate repair or clarification meta-communication in case of communication failure. (A sketch of such a dispatch is given below.)</Paragraph> </Section> </Section>
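To make GP13 and its three specific principles concrete, here is a minimal dispatch sketch. The function, its arguments and the message texts are our own invention, not the system's actual meta-communication design.

```python
# Hypothetical dispatch for GP13: choose the kind of repair or clarification
# meta-communication according to how understanding failed.

from typing import List, Optional

def meta_communicate(recognised: Optional[str],
                     readings: List[str],
                     consistent: bool) -> Optional[str]:
    if recognised is None:
        # SP9: recognition or understanding failure -> initiate repair.
        return "Sorry, I did not understand. Please repeat."
    if not consistent:
        # SP10: input conflicts with earlier input -> clarify priorities.
        return ("7:20 is not a discount departure. "
                "Do you want discount or the 7:20 departure?")
    if len(readings) > 1:
        # SP11: ambiguous input -> clarify the intended reading.
        return "Do you want the morning or the evening departure at 8:00?"
    return None  # understanding succeeded; no meta-communication needed

print(meta_communicate("7:20", ["7:20"], consistent=False))
```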
<Section position="7" start_page="331" end_page="332" type="metho"> <SectionTitle> 5 The User Test </SectionTitle> <Paragraph position="0"> The set of principles presented in Table 1, apart from SP10 and SP11 (see below), were tested against the user test dialogue corpus (Dybkjær et al., 1996). In the user test the speech recogniser was simulated, whereas the other system components were the implemented ones. The system was tested with 12 external users. The 57 recorded and transcribed dialogues were scenario-based and covered the full functionality of the system. We systematically analysed the transcribed dialogues to detect those deviations between expected and actual user utterances that would signal problems of user-system interaction caused by non-cooperative dialogue design. An example is presented below of a piece of dialogue from the user test in which two system cooperativity problems occur (S means system and U means user):

S1: Do you want return tickets?
U1: Yes please.
S2: Are you particularly interested in discount?
U2: Yes.
S3: On which date will the journey start?
U3: Monday.
S4: Monday January 16. At which time?
U4: 7:20.
S5: There is no departure at 7:20. The closest other red departure is at 10:50. Do you want this departure?
U5: No thanks.</Paragraph> <Paragraph position="1"> Each identified problem in the dialogue corpus was categorised according to which principle had been violated, and described in terms of the symptom (S), a diagnosis (D) and a cure (C); a sketch of a record structure for such descriptions is given at the end of this section. Below is shown the description for the two problems in S5.

GP1: Make your contribution as informative as is required (for the current purposes of the exchange).
S: U: interested in discount (red) + out departure time at 7:20. S: no departure at 7:20.
D: The system provides insufficient information. It does not tell that there is a blue departure at 7:20.
C: The system should provide sufficient information, e.g. by telling that there is no red departure but that there is a blue departure at the chosen hour.

SP10: Initiate clarification meta-communication in case of inconsistent user input.
S: U: interested in discount (red) + out departure time at 7:20; S: no departure at 7:20. However, 7:20 does exist but without discount.
D: S gives priority to discount over time without proper reason.
C: S should ask U about priority: 7:20 is not a discount departure. Red discount can be obtained on the departures at x, y and z. Which departure do you want? [If U provides a new departure time: S: do you still want discount? If U: no; S: non-discount departures.]</Paragraph> <Paragraph position="2"> It turned out that almost all of the 86 system dialogue problems identified could be ascribed to violations of the cooperative principles (Bernsen et al., 1996b). We only had to add two specific principles of meta-communication (SP10 and SP11 in Table 1). Since meta-communication had not been simulated during the WOZ experiments, this came as no surprise. The following GPs and SPs were found violated at least once: GPs 1, 3, 5, 6, 7, 10, 11, 12 and 13, and SPs 2, 4, 5, 6, 8, 10 and 11.</Paragraph>
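The symptom/diagnosis/cure format lends itself to a simple record structure for corpus annotation. The sketch below is our own encoding (type and field names are invented), filled in with the GP1 problem from S5 above.

```python
# Hypothetical record type for the annotation scheme: each of the 86 problems
# is ascribed to a violated principle and described by symptom, diagnosis, cure.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Violation:
    principle: str  # e.g. "GP1" or "SP10" (see Table 1)
    symptom: str    # what was observed in the transcript
    diagnosis: str  # why it is a cooperativity problem
    cure: str       # how the dialogue design should change

corpus = [
    Violation(
        principle="GP1",
        symptom="U: discount (red) + departure at 7:20; S: no departure at 7:20.",
        diagnosis="Insufficient information: the blue 7:20 departure is not mentioned.",
        cure="Say that there is no red departure but a blue one at the chosen hour.",
    ),
]

# Grouping records by violated principle gives the per-principle counts on
# which coverage claims like those in this section can be based.
by_principle = defaultdict(list)
for v in corpus:
    by_principle[v.principle].append(v)
print({p: len(vs) for p, vs in by_principle.items()})  # -> {'GP1': 1}
```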
<Paragraph position="3"> The user test confirmed the broad coverage of the principles with respect to cooperative, spoken user-system dialogue. Less flattering, of course, the test thereby revealed several deficiencies in our cooperative dialogue design.</Paragraph> </Section> </Paper>