<?xml version="1.0" standalone="yes"?> <Paper uid="C94-2196"> <Title>Discourse and Deliberation: Testing a Collaborative Strategy</Title> <Section position="3" start_page="0" end_page="0" type="metho"> <SectionTitle> INFORMATIONALLY REDUNDANT UTTERANCE, henceforth IRU, </SectionTitle> <Paragraph position="0"> which are surprisingly frequent in naturally-occurring dialogue [Walker, 1993].</Paragraph> <Paragraph position="1"> A Warrant IRU such as that in 2 suggests that B's cognitive limitations may be a factor in what A chooses to say, so that even if B knows a warrant for adopting A's proposal, what is critical is whether the warrant is salient for B, i.e. whether the warrant is already accessible in B's working memory [Prince, 1981; Baddeley, 1986]. If the warrant is not already salient, then B must either infer or retrieve the warrant information or obtain it from an external source in order to evaluate A's proposal. Thus A's strategy choice may depend on A's model of B's attentional state, as well as the costs of retrieval and inference as opposed to communication. In other words, A may decide that it is easier to just say the warrant rather than require B to infer or retrieve it.</Paragraph> <Paragraph position="2"> Finally, the task determines whether there are penalties for leaving a warrant implicit and relying on B to infer or retrieve it. Some tasks require that two agents agree on the reasons for adopting a proposal, e.g.
in order to ensure robustness in situations of environmental change.</Paragraph> <Paragraph position="3"> Other tasks, such as a management/union negotiation, only require the agents to agree on the actions to be carried out, and each agent can have its own reasons for wanting those actions to be done without affecting success in the task.</Paragraph> <Paragraph position="4"> Figure 1 summarizes these hypotheses by proposing a hypothetical decision tree for an agent's choice of whether to use the Explicit-Warrant strategy. The choice is hypothesized to depend on cognitive properties of B, e.g. what B knows, B's attentional state, and B's processing capabilities, as well as properties of the task and the communication channel. To my knowledge, all previous work on dialogue has simply assumed that an agent should never tell an agent facts that the other agent already knows. The hypotheses in figure 1 seem completely plausible, but the relationship of cognitive effort to dialogue behavior has never been explored. Given these hypotheses, what is required is a way to test the hypothesized relationship of task and cognitive factors to effective discourse strategies. Section 3 describes a new method for testing hypotheses about effective discourse strategies in dialogue.</Paragraph> </Section> <Section position="4" start_page="0" end_page="1207" type="metho"> <SectionTitle> 3 Design-World </SectionTitle> <Paragraph position="0"> Design-World is an experimental environment for testing the relationship between discourse strategies, task parameters and agents' cognitive capabilities, similar to the single agent TileWorld simulation environment [Pollack and Ringuette, 1990; Hanks et al., 1993].</Paragraph> <Paragraph position="1"> Design-World agents can be parametrized as to discourse strategy, and the effects of this strategy can be measured against a range of cognitive and task parameters.
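The strategy, cognitive, and task parameters of a single run can be pictured as one configuration record. The sketch below is illustrative only; the field names are assumptions, not identifiers from the original Design-World implementation:

```python
from dataclasses import dataclass

@dataclass
class DesignWorldParams:
    """One experimental configuration. Field names are illustrative
    assumptions, not taken from the original Design-World code."""
    strategy: str = "all-implicit"   # or "explicit-warrant"
    awm_radius: int = 16             # attention/working-memory radius (1..16)
    commcost: float = 0.0            # cost per communicated message
    infcost: float = 0.0             # cost per inference step
    retcost: float = 0.0             # cost per retrieval step
    task: str = "standard"           # or "zero-nonmatching-beliefs"

# e.g. severely attention-limited agents that pay for each retrieval step:
config = DesignWorldParams(strategy="explicit-warrant", awm_radius=3, retcost=0.01)
```

Sweeping such a record over strategies and cognitive settings is what lets the effects of one strategy be measured against the others.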
This paper compares the Explicit-Warrant strategy to the All-Implicit strategy as strategies for supporting deliberation. Other strategies tested in Design-World are presented elsewhere [Walker, 1993; Walker, 1994a; Rambow and Walker, 1994].</Paragraph> <Section position="1" start_page="0" end_page="1206" type="sub_section"> <SectionTitle> 3.1 Design World Domain and Task </SectionTitle> <Paragraph position="0"/> <Paragraph position="2"> (Figure 2 caption: Task: Represents the Collaborative Plan Achieved by the Dialogue, 434 points.) The Design-World task requires two agents to carry out a dialogue in order to negotiate an agreement on the design of the floor plan of a two room house [Whittaker et al., 1993]. The DESIGN-HOUSE plan requires the agents to agree on how to DESIGN-ROOM-1 and DESIGN-ROOM-2.</Paragraph> <Paragraph position="3"> Both agents know what the DESIGN-HOUSE plan requires and start out with a set of furniture pieces that can be used to design each room.</Paragraph> <Paragraph position="4"> To negotiate an agreement, each agent carries out means-end reasoning about the furniture pieces that they have that can be used in the floor plan. Means-end reasoning generates OPTIONS; these options are the content of PROPOSALS to the other agent to PUT a piece of furniture into one of the rooms. Dialogue 3 illustrates agents' communication for part of designing room-1, including both the artificial language that the agents communicate with and a gloss generated from that language in italics: (3) 1: BILL: First, let's put the green rug in the study. (propose agent-bill agent-kim option-10: put-act (agent-bill green rug room-1)) 2: KIM: Next, let's put the green lamp there.</Paragraph> <Paragraph position="5"> 2: (propose agent-kim agent-bill option-33: put-act (agent-kim green lamp room-1)) 3: BILL: Then, let's put the green couch in the study. (propose
agent-bill agent-kim option-45: put-act (agent-bill green couch room-1)) 4: KIM: No, instead let's put in the purple couch. (reject agent-kim agent-bill option-55: put-act (agent-kim purple couch room-1)) On receiving a proposal, an agent deliberates whether to ACCEPT or REJECT the proposal [Doyle, 1992]. As potential warrants to support deliberation, and to provide a way of objectively evaluating agents' performance, each piece of furniture has a score. The score propositions for all the pieces of furniture are stored in both agents' memories at the beginning of the dialogue.</Paragraph> <Paragraph position="6"> Agents REJECT a proposal if deliberation leads them to believe that they know of a better option or if they believe the preconditions for the proposal do not hold. The content of rejections is determined by the COLLABORATIVE PLANNING PRINCIPLES, abstracted from analyzing four different types of problem solving dialogues [Walker and Whittaker, 1990; Walker, 1994b]. For example, in 3-4 Kim rejects the proposal in 3-3, and gives as her reason that option-56 is a counter-proposal. Proposals 1 and 2 are inferred to be implicitly ACCEPTED because they are not rejected [Walker and Whittaker, 1990; Walker, 1992]. If a proposal is ACCEPTED, either implicitly or explicitly, then the option that was the content of the proposal becomes a mutual intention that contributes to the final design plan [Power, 1984; Sidner, 1992]. A potential final design plan negotiated via a dialogue is shown in figure 2.</Paragraph> </Section> <Section position="2" start_page="1206" end_page="1206" type="sub_section"> <SectionTitle> 3.2 Varying Discourse Strategies </SectionTitle> <Paragraph position="0"> The Design-World experiments reported here compare the All-Implicit strategy with the Explicit-Warrant strategy.
Agents are parametrized for different discourse strategies by placing different expansions of discourse plans in their plan libraries. Discourse plans are plans for PROPOSAL, REJECTION, ACCEPTANCE, CLARIFICATION, OPENING and CLOSING. The only variations discussed here are variations in the expansions of PROPOSALS.</Paragraph> <Paragraph position="1"> The All-Implicit strategy is an expansion of a discourse plan to make a PROPOSAL, in which a PROPOSAL decomposes trivially to the communicative act of PROPOSE. In dialogue 3, both Design-World agents communicate using the All-Implicit strategy, and the proposals are shown in utterances 1, 2, and 3. The All-Implicit strategy never includes warrants in proposals, leaving it up to the other agent to retrieve them from memory.</Paragraph> <Paragraph position="2"> The Explicit-Warrant strategy expands the PROPOSAL</Paragraph> <Paragraph position="3"> discourse act to be a WARRANT followed by a PROPOSE utterance. Since agents already know the point values for pieces of furniture, warrants are always IRUs in the experiments here.
For example, 4-1 is a WARRANT for the proposal in 4-2. The names of agents who use the Explicit-Warrant strategy are a numbered version of the string &quot;IEI&quot; to help the experimenter keep track of the simulation data files; IEI stands for Implicit acceptance, Explicit warrant, Implicit opening and closing.</Paragraph> <Paragraph position="4"> (4) 1: IEI: Putting in the green rug is worth 56.</Paragraph> <Paragraph position="5"> (say agent-iei agent-iei2 bel-10: score (option-10: put-act (agent-iei green rug room-1) 56)) 2: IEI: Then, let's put the green rug in the study.</Paragraph> <Paragraph position="6"> (propose agent-iei agent-iei2 option-10: put-act (agent-iei green rug room-1)) 3: IEI2: Putting in the green lamp is worth 55.</Paragraph> <Paragraph position="7"> (say agent-iei2 agent-iei bel-34: score (option-33: put-act (agent-iei2 green lamp room-1) 55)) 4: IEI2: Then, let's put the green lamp in the study.</Paragraph> <Paragraph position="8"> (propose agent-iei2 agent-iei option-33: put-act (agent-iei2 green lamp room-1)) The fact that the green rug is worth 56 points supports deliberation about whether to adopt the intention of putting the green rug in the study.
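The two PROPOSAL expansions can be sketched as a small function. The tuple encoding of communicative acts below is an illustrative assumption, not the original plan-library representation:

```python
def expand_proposal(strategy, option, score):
    """Expand a PROPOSAL discourse act into a sequence of communicative acts.

    A minimal sketch of the two expansions described above; the tuple
    encoding of acts is an assumption made for illustration.
    """
    propose = ("propose", option)
    if strategy == "explicit-warrant":
        # Explicit-Warrant: a WARRANT asserting the (mutually known) score,
        # hence an IRU, precedes the PROPOSE utterance.
        return [("say-warrant", option, score), propose]
    # All-Implicit: the bare PROPOSE; the hearer must retrieve or infer
    # the warrant from memory.
    return [propose]

# For example, proposing option-10 (the green rug, worth 56) under each strategy:
explicit = expand_proposal("explicit-warrant", "option-10", 56)
implicit = expand_proposal("all-implicit", "option-10", 56)
```

Under this sketch the Explicit-Warrant expansion yields two acts where the All-Implicit expansion yields one, which is exactly why its benefit depends on communication cost.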
The Explicit-Warrant strategy models naturally occurring examples such as those in 2 because the points information used by the hearer to deliberate whether to accept or reject the proposal is already mutually believed.</Paragraph> </Section> <Section position="3" start_page="1206" end_page="1207" type="sub_section"> <SectionTitle> 3.3 Cognitive and Task Parameters </SectionTitle> <Paragraph position="0"> Section 2 introduced a range of factors motivated by the corpus analysis that were hypothesized to determine when Explicit-Warrant is an effective strategy.</Paragraph> <Paragraph position="1"> This section discusses how Design-World supports the parametrization of these factors.</Paragraph> <Paragraph position="2"> The agent architecture for deliberation and means-end reasoning is based on the IRMA architecture, also used in the TileWorld simulation environment [Pollack and Ringuette, 1990], with the addition of a model of limited Attention/Working Memory, AWM. [Walker, 1993] includes a fuller discussion of the Design-World deliberation and means-end reasoning mechanism and the underlying mechanisms assumed in collaborative planning. We hypothesized that a warrant must be SALIENT for both agents (as shown by example 2). In Design-World, salience is modeled by the AWM model, adapted from [Landauer, 1975]. While the AWM model is extremely simple, Landauer showed that it could be parameterized to fit many empirical results on human memory and learning [Baddeley, 1986]. AWM consists of a three dimensional space in which propositions acquired from perceiving the world are stored in chronological sequence according to the location of a moving memory pointer. The sequence of memory loci used for storage constitutes a random walk through memory with each locus a short distance from the previous one.
If items are encountered multiple times, they are stored multiple times [Hintzmann and Block, 1971].</Paragraph> <Paragraph position="3"> When an agent retrieves items from memory, search starts from the current pointer location and spreads out in a spherical fashion. Search is restricted to a particular search radius: radius is defined in Hamming distance. For example, if the current memory pointer locus is (0 0 0), the loci distance 1 away would be (0 1 0) (0 -1 0) (0 0 1) (0</Paragraph> <Paragraph position="5"> 0 -1) (1 0 0) (-1 0 0), modulo the memory size. The limit on the search radius defines the capacity of attention/working memory and hence defines which stored beliefs and intentions are SALIENT.</Paragraph> <Paragraph position="6"> The radius of the search sphere in the AWM model is used as the parameter for Design-World agents' resource-bound on attentional capacity. In the experiments below, memory is 16x16x16 and the radius parameter varies between 1 and 16, where AWM of 1 gives severely attention limited agents and AWM of 16 means that everything an agent knows is accessible.3 This parameter lets us distinguish between an agent's ability to access all the information stored in its memory, and the effort involved in doing so.</Paragraph> <Paragraph position="7"> The advantage of the AWM model is that it was shown to reproduce, in simulation, many results on human memory and learning. Because search starts from the current pointer location, items that have been stored most recently are more likely to be retrieved, predicting recency effects [Baddeley, 1986]. Because items that are stored in multiple locations are more likely to be retrieved, the model predicts frequency effects [Hintzmann and Block, 1971]. Because items are stored in chronological sequence, the model produces natural associativity effects [Landauer, 1975].
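The storage and retrieval scheme described above can be sketched as follows. The unit step size, the wraparound distance metric, and all method names are assumptions made for illustration, not the original implementation:

```python
import random

class AWM:
    """A minimal sketch of the Landauer-style Attention/Working Memory model:
    propositions are stored along a random walk through a 3-D space, and
    retrieval searches outward from the current pointer within a radius."""

    def __init__(self, size=16, seed=0):
        self.size = size
        self.rng = random.Random(seed)
        self.pointer = (0, 0, 0)
        self.store = {}  # memory locus -> list of propositions stored there

    def _step(self):
        # Random walk: move the pointer +/-1 along one axis, modulo memory size.
        axis = self.rng.randrange(3)
        loc = list(self.pointer)
        loc[axis] = (loc[axis] + self.rng.choice((-1, 1))) % self.size
        self.pointer = tuple(loc)

    def store_prop(self, prop):
        # Propositions are stored in chronological sequence along the walk;
        # a re-encountered item is simply stored again at a new locus.
        self._step()
        self.store.setdefault(self.pointer, []).append(prop)

    def _dist(self, a, b):
        # Axis-wise distance between loci, with wraparound ("modulo the
        # memory size").
        return sum(min(abs(x - y), self.size - abs(x - y)) for x, y in zip(a, b))

    def salient(self, radius):
        # Retrieval spreads out from the current pointer; only propositions
        # stored within the search radius are SALIENT.
        out = []
        for loc, props in self.store.items():
            if self._dist(loc, self.pointer) <= radius:
                out.extend(props)
        return out
```

A small radius yields a severely attention-limited agent; a radius large enough to cover the whole space makes everything the agent knows accessible, mirroring the AWM 1..16 parameter described above.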
Because deliberation and means-end reasoning can only operate on salient beliefs, limited attention produces a concomitant inferential limitation, i.e. if a belief is not salient it cannot be used in deliberation or means-end reasoning. This means that mistakes that agents make in their planning process have a plausible cognitive basis. Agents can both fail to access a belief that would allow them to produce an optimal plan, as well as make a mistake in planning if a belief about how the world has changed as a result of planning is not salient. Depending on the preceding discourse, and the agent's attentional capacity, the propositions that an agent knows may or may not be salient when a proposal is made.</Paragraph> <Paragraph position="8"> Another hypothetical factor was the relative cost of retrieval and communication. AWM also gives us a way to measure the number of retrievals from memory in terms of the number of locations searched to find a proposition. The amount of effort required for each retrieval step is a parameter, as is the cost of each inference step and the cost of each communicated message. These cost parameters support modeling various cognitive architectures, e.g. varying the cost of retrieval models different assumptions about memory. For example, if retrieval is free then all items in working memory are instantly accessible, as they would be if they were stored in registers with fast parallel access. If AWM is set to 16, but retrieval isn't free, the model approximates slow spreading (Footnote 3: The size of memory was determined as adequate for producing the desired level of variation in the current task across all the experimental variables, while still making it possible to run a large number of simulations overnight when agents have access to all of their memory. In order to use the AWM model in a different task, the experimenter might want to explore different sizes for memory.)</Paragraph> <Paragraph position="9"> activation that is quite effortful, yet the agent still has the ability to access all of memory, given enough time. If AWM is set lower than 16 and retrieval isn't free, then we model slow spreading activation with a timeout when effort exceeds a certain amount, so that an agent does not have the ability to access all of memory.</Paragraph> <Paragraph position="10"> It does not make sense to fix absolute values for the retrieval, inference and communication cost parameters in relation to human processing. However, Design-World supports exploring issues about the relative costs of various processes. These relative costs might vary depending on the language that the agents are communicating with, properties of the communication channel, how smart the agents are, how much time they have, and what the demands of the task are [Norman and Bobrow, 1975].</Paragraph> <Paragraph position="11"> Below we vary the relative cost of communication and retrieval.</Paragraph> <Paragraph position="12"> Finally, we hypothesized that the Explicit-Warrant strategy may be beneficial if the relationship between the warrant and the proposal must be mutually believed.</Paragraph> <Paragraph position="13"> Thus the definition of success for the task is a Design-World parameter: the Standard task does not require a shared warrant, whereas the Zero NonMatching Beliefs task gives a zero score to any negotiated plan without agreed-upon warrants.</Paragraph> </Section> <Section position="4" start_page="1207" end_page="1207" type="sub_section"> <SectionTitle> 3.4 Evaluating Performance </SectionTitle> <Paragraph position="0"> To evaluate PERFORMANCE, we compare the Explicit-Warrant strategy with the All-Implicit strategy in situations where we vary the task requirements, agents' attentional capacity, and the cost of
retrieval, inference and communication. Evaluation of the resulting DESIGN-HOUSE plan is parametrized by (1) COMMCOST: cost of sending a message; (2) INFCOST: cost of inference; and (3) RETCOST: cost of retrieval from memory. In the Standard task, the RAW SCORE of a negotiated plan simply summarizes the point values of the furniture pieces in each PUT-ACT in the final Design, while in the Zero NonMatching Beliefs task, agents get no points for a plan unless they agree on the reasons underlying each action that contributes to the plan.</Paragraph> <Paragraph position="1"> The way PERFORMANCE is defined reflects the fact that agents are meant to collaborate on the task. The costs that are deducted from the RAW SCORE are the costs for both agents' communication, inference, and retrieval. Thus PERFORMANCE is a measure of LEAST</Paragraph> </Section> </Section> <Section position="5" start_page="1207" end_page="1207" type="metho"> <SectionTitle> COLLABORATIVE EFFORT [Clark and Schaefer, 1989; </SectionTitle> <Paragraph position="0"> Brennan, 1990]. Since the parameters for cognitive effort are fixed while discourse strategy and AWM settings are varied, we can directly test the benefits of different discourse strategies under different assumptions about cognitive effort and the cognitive demands of the task.</Paragraph> <Paragraph position="1"> This is impossible to do with corpus analysis alone.</Paragraph> <Paragraph position="2"> We simulate 100 dialogues at each parameter setting for each strategy. Differences in performance distributions are evaluated for significance over the 100 dialogues using the Kolmogorov-Smirnov (KS) two sample test [Siegel, 1956].</Paragraph> <Paragraph position="3"> A strategy A is BENEFICIAL as compared to a strategy B, for a set of fixed parameter settings, if the difference in distributions using the Kolmogorov-Smirnov two sample test is significant at p < .05, in the positive direction, for two or more AWM settings.
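The PERFORMANCE measure and the distributional comparison can be sketched as follows. This is a pure-Python illustration under stated assumptions: the function names are hypothetical, and a full analysis would also derive the p-value for the KS statistic (e.g. with scipy.stats.ks_2samp) rather than the bare statistic computed here.

```python
def performance(raw_score, messages, inferences, retrievals,
                commcost, infcost, retcost):
    # PERFORMANCE = RAW SCORE minus both agents' communication, inference
    # and retrieval costs (a measure of least collaborative effort).
    return raw_score - (commcost * messages
                        + infcost * inferences
                        + retcost * retrievals)

def ks_statistic(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the empirical CDFs of the two performance samples."""
    def ecdf(sample, t):
        return sum(1 for v in sample if v <= t) / len(sample)
    points = sorted(set(xs) | set(ys))
    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in points)
```

In this sketch, one would collect 100 PERFORMANCE values per strategy at each parameter setting and compare the two samples; a significant positive difference at two or more AWM settings would count the strategy as BENEFICIAL.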
A strategy is DETRIMENTAL if the differences go in the negative direction. Strategies may be neither BENEFICIAL nor DETRIMENTAL, as there may be no difference between two strategies.</Paragraph> </Section> </Paper>