<?xml version="1.0" standalone="yes"?> <Paper uid="P98-2129"> <Title>Evaluating Response Strategies in a Web-Based Spoken Dialogue Agent</Title> <Section position="4" start_page="781" end_page="782" type="metho"> <SectionTitle> 3 Experimental Design </SectionTitle>
<Paragraph position="0"> The experimental instructions were given on a web page, which consisted of a description of TOOT's functionality, hints for talking to TOOT, and links to 4 task pages. Each task page contained a task scenario, the hints, instructions for calling TOOT, and a web survey designed to ascertain the departure and travel times obtained by the user and to measure user perceptions of task success and agent usability.</Paragraph>
<Paragraph position="1"> Users were 12 researchers not involved with the design or implementation of TOOT; 6 users were randomly assigned to LT and 6 to CT. Users read the instructions in their office and then called TOOT from their phone. Our experiment yielded a corpus of 48 dialogues (1344 total turns; 214 minutes of speech).</Paragraph>
<Paragraph position="2"> Users were provided with task scenarios for two reasons. First, our hypothesis was that performance depended not only on response strategy, but also on task difficulty. To include the task as a factor in our experiment, we needed to ensure that users executed the same tasks and that they varied in difficulty.</Paragraph>
<Paragraph position="3"> Figure 2 shows the task scenarios used in our experiment, reproduced below; our hypotheses about agent performance are summarized in Table 1.</Paragraph>
<Paragraph position="4"> Figure 2 (task scenarios):
Task 1 (Exact-Match): Try to find a train going to Boston from New York City on Saturday at 6:00 pm. If you cannot find an exact match, find the one with the closest departure time. Write down the exact departure time of the train you found as well as the total travel time.
Task 2 (No-Match-1): Try to find a train going to Chicago from Philadelphia on Sunday at 10:30 am. If you cannot find an exact match, find the one with the closest departure time. Write down the exact departure time of the train you found as well as the total travel time.
Task 3 (No-Match-2): Try to find a train going to Boston from Washington D.C. on Thursday at 3:30 pm. If you cannot find an exact match, find the one between 12:00 pm and 5:00 pm that has the shortest travel time. Write down the exact departure time of the train you found as well as the total travel time.
Task 4 (Too-Much-Info/Early-Answer): Try to find a train going to Philadelphia from New York City on the weekend at 4:00 pm. If you cannot find an exact match, find the one with the closest departure time. Please write down the exact departure time of the train you found as well as the total travel time. (&quot;weekend&quot; means the train departure day can be Saturday or Sunday.)</Paragraph>
<Paragraph position="5"> We predicted that optimal performance would occur whenever the correct task solution was included in TOOT's initial response to a web query (i.e., when the task was easy). Task 1 (dialogue fragment (4) above) produced a query that resulted in 2 matching trains, one of which was the train requested in the scenario. Since the response strategies of LT and CT were identical under this condition, we predicted identical LT and CT performance, as shown in Table 1. (Since Task 1 was the easiest, it was always performed first; the order of the remaining tasks was randomized across users.) Tasks 2 (dialogue fragments (2) and (3)) and 3 led to queries that yielded no matching trains. In Task 2 users were told to find the closest train.
Since only CT included this extra information in its response, we predicted that it would perform better than LT.</Paragraph>
<Paragraph position="6"> In Task 3 users were told to find the shortest train within a new departure interval. Since neither LT nor CT provided this information initially, we hypothesized comparable LT and CT performance. However, since CT allowed users to change just their departure time while LT required users to construct a whole new query, we also thought it possible that CT might perform slightly better than LT.</Paragraph>
<Paragraph position="7"> Task 4 (Figure 1 and dialogue fragment (1)) led to a query where the 3rd of 7 matching trains was the desired answer. Since only LT included this train in its initial response (by luck, due to the train's position in the list of matches), we predicted that LT would perform better than CT. Note that this prediction is highly dependent on the database. If the desired train had been last in the list, we would have predicted that CT would perform better than LT.</Paragraph>
<Paragraph position="8"> A second reason for having task scenarios was that it allowed us to objectively determine whether users achieved their tasks. Following PARADISE (Walker et al., 1997), we defined a &quot;key&quot; for each scenario using an attribute value matrix (AVM) task representation, as in Table 2. The key indicates the attribute values that must be exchanged between the agent and user by the end of the dialogue. If the task is successfully completed in a scenario execution (as in Figure 1), the AVM representing the dialogue is identical to the key.</Paragraph> </Section>
<Section position="5" start_page="782" end_page="783" type="metho"> <SectionTitle> 4 Measuring Aspects of Performance </SectionTitle>
<Paragraph position="0"> Once the experiment was completed, values for a range of evaluation measures were extracted from the resulting data (dialogue recordings, system logs, and web survey responses). Following PARADISE, we organize our measures along four performance dimensions, as shown in Figure 3.</Paragraph>
<Paragraph position="1"> To measure task success, we compared the scenario key and scenario execution AVMs for each dialogue, using the Kappa statistic (Walker et al., 1997). For the scenario execution AVM, the values for arrival-city, depart-city, depart-day, and depart-range were extracted from system logs of ASR results. The exact-depart-time and total-travel-time were extracted from the web survey. To measure users' perceptions of task success, the survey also asked users whether they had successfully completed the task (Completed).</Paragraph>
<Paragraph position="2"> To measure dialogue quality or naturalness, we logged the dialogue manager's behavior on entering and exiting each state in the finite state machine (recall Section 2). We then extracted the number of prompts per dialogue due to Help Requests, ASR Rejections, and Timeouts. Obtaining the values for other quality measures required manual analysis. We listened to the recordings and compared them to the logged ASR results to calculate concept accuracy (intuitively, semantic interpretation accuracy) for each utterance. This was then used, in combination with ASR rejections, to compute a Mean Recognition score per dialogue. We also listened to the recordings to determine how many times the user interrupted the agent (Barge Ins).</Paragraph>
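<Paragraph> To make the Kappa-based task success measure described above concrete, the following sketch compares a scenario key AVM with a scenario execution AVM and computes Kappa over the pooled attribute values. The attribute names follow Table 2, but the example values, the pooling of all attributes into one label sequence, and the estimate of chance agreement from the key marginals are simplifying assumptions for illustration, not the exact PARADISE computation. </Paragraph>
<Paragraph>
# Sketch: Kappa between a scenario key AVM and a scenario execution AVM.
# The dictionaries below are hypothetical examples, not data from the corpus.
from collections import Counter

ATTRIBUTES = ["arrival-city", "depart-city", "depart-day",
              "depart-range", "exact-depart-time", "total-travel-time"]

key = {"arrival-city": "Boston", "depart-city": "New York City",
       "depart-day": "Saturday", "depart-range": "evening",
       "exact-depart-time": "6:00 pm", "total-travel-time": "4:30"}

execution = {"arrival-city": "Boston", "depart-city": "New York City",
             "depart-day": "Saturday", "depart-range": "evening",
             "exact-depart-time": "7:00 pm", "total-travel-time": "4:30"}

def kappa(keys, executions):
    """Kappa = (P(A) - P(E)) / (1 - P(E)) over pooled attribute values."""
    pairs = [(k[a], e[a]) for k, e in zip(keys, executions) for a in ATTRIBUTES]
    n = len(pairs)
    p_agree = sum(1 for kv, ev in pairs if kv == ev) / n
    # Chance agreement estimated from the marginal distribution of key values.
    marginals = Counter(kv for kv, _ in pairs)
    p_chance = sum((count / n) ** 2 for count in marginals.values())
    return (p_agree - p_chance) / (1 - p_chance)

print(kappa([key], [execution]))   # 0.8 for this single hypothetical dialogue
</Paragraph>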
<Paragraph position="3"> To measure dialogue efficiency, the numbers of System Turns and User Turns were extracted from the dialogue manager log, and the total Elapsed Time was determined from the recording.</Paragraph>
<Paragraph position="4"> To measure user satisfaction, users responded to the web survey in Figure 4, which assessed their subjective evaluation of the agent's performance. (Questionnaire-based user satisfaction ratings (Shriberg et al., 1992; Polifroni et al., 1992) have been frequently used in the literature as an external indicator of agent usability.) Each question was designed to measure a particular usability factor, indicated in parentheses after each question below.</Paragraph>
<Paragraph position="5"> Figure 4 (user satisfaction survey):
* Was the system easy to understand in this conversation? (TTS Performance)
* In this conversation, did the system understand what you said? (ASR Performance)
* In this conversation, was it easy to find the schedule you wanted? (Task Ease)
* Was the pace of interaction with the system appropriate in this conversation? (Interaction Pace)
* In this conversation, did you know what you could say at each point of the dialogue? (User Expertise)
* How often was the system sluggish and slow to reply to you in this conversation? (System Response)
* Did the system work the way you expected it to in this conversation? (Expected Behavior)
* From your current experience with using our system to get train schedules, do you think you would use it regularly? (Future Use)</Paragraph> </Section>
<Section position="6" start_page="783" end_page="784" type="metho"> <SectionTitle> 5 Strategy and Task Differences </SectionTitle>
<Paragraph position="0"> To test the hypotheses in Table 1, we use analysis of variance (ANOVA) (Cohen, 1995) to determine whether the values of any of the evaluation measures in Figure 3 significantly differ as a function of response strategy and task scenario.</Paragraph>
<Paragraph position="1"> First, for each task scenario (4 sets of 12 dialogues, 6 per agent and 1 per user), we perform an ANOVA for each evaluation measure as a function of response strategy. For Task 1, there are no significant differences between the 6 LT and 6 CT dialogues for any evaluation measure, which is consistent with Table 1. For Task 2, mean Completed (perceived task success rate) is 50% for LT and 100% for CT (p < .05). In addition, the average number of Help Requests per LT dialogue is 0, while for CT the average is 2.2 (p < .05). Thus, for Task 2, CT has a better perceived task success rate than LT, despite the fact that users needed more help to use CT. Only the perceived task success difference is consistent with the Task 2 prediction in Table 1. (However, the analysis in Section 6 suggests that Help Requests is not a good predictor of performance.) For Task 3, there are no significant differences between LT and CT, which again matches our predictions. Finally, for Task 4, mean Kappa (actual task success rate) is 100% for LT but only 65% for CT (p < .01). (In our data, actual task success implies perceived task success, but not vice versa.) Like Task 2, this result suggests that some type of task success measure is an important predictor of agent performance. Surprisingly, we found that LT and CT did not differ with respect to any efficiency measure, in any task. (However, our &quot;difficult&quot; tasks were not that difficult; we wanted to minimize subjects' time commitment.)</Paragraph>
<Paragraph position="2"> Next, we combine all of our data (48 dialogues) and perform a two-way ANOVA for each evaluation measure as a function of strategy and task. An interaction between response strategy and task scenario is significant for Future Use (p < .03). For Task 1, the likelihood of Future Use is the same for LT and CT; for Task 2, the likelihood is higher for CT; for Tasks 3 and 4, the likelihood is higher for LT. Thus, the results for Tasks 1, 2, and 4, but not for Task 3, are consistent with the predictions in Table 1. However, Task 3 was the most difficult task (see below), and sometimes led to unexpected user behavior with both agents. A strategy/task interaction is also significant for Help Requests (p < .02). For Tasks 1 and 3, the number of requests is higher for LT; for Tasks 2 and 4, the number is higher for CT.</Paragraph>
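<Paragraph> The per-task one-way ANOVAs and the combined two-way ANOVA can be sketched as follows. The synthetic data, the 1-5 survey scale, and the column names (strategy, task, future_use) are assumptions made for illustration; they are not the TOOT logs or survey responses. </Paragraph>
<Paragraph>
# Sketch: one-way ANOVA per task (measure by strategy) and a two-way ANOVA
# (strategy, task, and their interaction) on synthetic data.
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for task in [1, 2, 3, 4]:
    for strategy in ["LT", "CT"]:
        for _ in range(6):                         # 6 users per strategy
            rows.append({"strategy": strategy, "task": task,
                         "future_use": rng.normal(3.0, 1.0)})  # 1-5 survey scale
df = pd.DataFrame(rows)

# One-way ANOVA within each task scenario: does the measure differ by strategy?
for task, group in df.groupby("task"):
    lt = group.loc[group.strategy == "LT", "future_use"]
    ct = group.loc[group.strategy == "CT", "future_use"]
    result = f_oneway(lt, ct)
    print(f"Task {task}: F={result.statistic:.2f}, p={result.pvalue:.3f}")

# Two-way ANOVA over all 48 dialogues, including the strategy/task interaction.
model = smf.ols("future_use ~ C(strategy) * C(task)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
</Paragraph>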
<Paragraph position="3"> No evaluation measures significantly differ as a function of response strategy, which is consistent with Table 1. Since the task scenarios were constructed to yield comparable performance in Tasks 1 and 3, better CT performance in Task 2, and better LT performance in Task 4, we expected that overall, LT and CT performance would be comparable.</Paragraph>
<Paragraph position="4"> In contrast, many measures (User Satisfaction, Elapsed Time, System Turns, User Turns, ASR Performance, and Task Ease) differ as a function of task scenario (p < .03), confirming that our tasks vary with respect to difficulty. Our results suggest that the ordering of the tasks from easiest to most difficult is 1, 4, 2, and 3 (this ordering is observed for all the listed measures except User Turns, which reverses Tasks 4 and 1), which is consistent with our predictions. Recall that for Task 1, the initial query was designed to yield the correct train for both LT and CT. For Tasks 4 and 2, the initial query was designed to yield the correct train for only one agent, and to require a follow-up query for the other. For Task 3, the initial query was designed to require a follow-up query for both agents.</Paragraph> </Section>
<Section position="7" start_page="784" end_page="785" type="metho"> <SectionTitle> 6 Performance Function Estimation </SectionTitle>
<Paragraph position="0"> While hypothesis testing tells us how each evaluation measure differs as a function of strategy and/or task, it does not tell us how to trade off or combine results from multiple measures. Understanding such tradeoffs is especially important when different measures yield different performance predictions (e.g., recall the Task 2 hypothesis testing results for Completed and Help Requests).</Paragraph>
<Paragraph position="1"> Figure 5 (PARADISE model of spoken dialogue performance): the top-level objective, maximize user satisfaction, decomposes into maximize task success and minimize costs, where costs comprise qualitative measures and efficiency measures.</Paragraph>
<Paragraph position="2"> To assess the relative contribution of each evaluation measure to performance, we use PARADISE (Walker et al., 1997) to derive a performance function from our data. PARADISE draws on ideas in multi-attribute decision theory (Keeney and Raiffa, 1976) to posit the model shown in Figure 5, then uses multivariate linear regression to estimate a quantitative performance function based on this model. Linear regression produces coefficients describing the relative contribution of predictor factors in accounting for the variance in a predicted factor. In PARADISE, the success and cost measures are predictors, while user satisfaction is predicted.</Paragraph>
<Paragraph position="3"> Figure 3 showed how the measures used to evaluate TOOT instantiate the PARADISE model.</Paragraph>
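<Paragraph> A minimal sketch of this estimation step, assuming the per-dialogue measures have already been assembled into a table: the synthetic data, the column names, and the use of ordinary least squares on Z-score-normalized values are illustrative assumptions, not the authors' implementation. </Paragraph>
<Paragraph>
# Sketch: PARADISE-style performance function estimation on synthetic data.
# Success and cost measures predict user satisfaction; all values are Z-score
# normalized so the fitted coefficients are directly comparable.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 48
df = pd.DataFrame({
    "comp": rng.integers(0, 2, n),              # perceived task success (0/1)
    "mean_rec": rng.uniform(0.6, 1.0, n),       # mean recognition score
    "barge_ins": rng.poisson(2, n),             # number of user interruptions
    "elapsed_time": rng.uniform(120, 600, n),   # seconds
})
# Synthetic user satisfaction loosely tied to the predictors, for illustration.
df["user_sat"] = (20 + 5 * df.comp + 10 * df.mean_rec
                  - 1.5 * df.barge_ins + rng.normal(0, 2, n))

z = (df - df.mean()) / df.std()                 # Z-score normalization
X = sm.add_constant(z[["comp", "mean_rec", "barge_ins", "elapsed_time"]])
fit = sm.OLS(z["user_sat"], X).fit()
print(fit.summary())    # which predictors are significant, and how large
</Paragraph>
<Paragraph> Reading standardized coefficients off such a fit is what allows the relative contribution of each measure to be compared directly; restricting the data to a subset of dialogues simply means re-running the same fit. </Paragraph>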
<Paragraph position="4"> The application of PARADISE to the TOOT data shows that the only significant contributors to User Satisfaction are Completed (Comp), Mean Recognition (MR), and Barge Ins (BI), and yields a performance function of the form</Paragraph>
<Paragraph position="5"> Performance = a N(Comp) + b N(MR) + c N(BI)</Paragraph>
<Paragraph position="6"> Completed is significant at p < .0002, Mean Recognition at p < .003 (since we measure recognition rather than misrecognition, this &quot;cost&quot; factor has a positive coefficient), and Barge Ins at p < .0004; these account for 47% of the variance in User Satisfaction. N is a Z score normalization function (Cohen, 1995) and guarantees that the coefficients directly indicate the relative contribution of each factor to performance.</Paragraph>
<Paragraph position="7"> Our performance function demonstrates that TOOT performance involves task success and dialogue quality factors. Analysis of variance suggested that task success was a likely performance factor. PARADISE confirms this hypothesis, and demonstrates that perceived rather than actual task success is the useful predictor. While 39 dialogues were perceived to have been successful, only 27 were actually successful.</Paragraph>
<Paragraph position="8"> Results that were not apparent from the analysis of variance are that Mean Recognition and Barge Ins are also predictors of performance. The mean recognition for our corpus is 85%. Apparently, users of both LT and CT are bothered by dialogue phenomena associated with poor recognition. For example, system misunderstandings (which result from ASR misrecognitions) and system requests to repeat what users have said (which result from ASR rejections) both make dialogues seem less natural.</Paragraph>
<Paragraph position="9"> While barge-in is usually considered an advanced (and desirable) ASR capability, our performance function suggests that in TOOT, allowing users to interrupt actually degrades performance. Examination of our transcripts shows that users sometimes use barge-in to shorten TOOT's prompts. This often circumvents TOOT's confirmation strategy, which incorporates speech recognition results into prompts to make the user aware of misrecognitions.</Paragraph>
<Paragraph position="10"> Surprisingly, no efficiency measures are significant predictors of performance. This calls into question the frequently made assumption that efficiency is one of the most important measures of system performance, and instead suggests that users are more attuned to both task success and qualitative aspects of the dialogue, or that efficiency is highly correlated with some of these factors.</Paragraph>
<Paragraph position="11"> However, analysis of subsets of our data suggests that efficiency measures can become important performance predictors when the more primary effects are factored out. For example, when a regression is performed on the 11 TOOT dialogues with perfect Mean Recognition, the significant contributors to performance become Completed (p < .05), Elapsed Time (p < .04), User Turns (p < .03), and Barge Ins (p < .0007), accounting for 87% of the variance. Thus, in the presence of perfect ASR, efficiency becomes important. When a regression is performed using the 39 dialogues where users thought they had successfully completed the task (perfect Completed), the significant factors become Elapsed Time (p < .002), Timeouts (p < .002), and Barge Ins (p < .02), accounting for 58% of the variance.</Paragraph>
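<Paragraph> The per-dialogue scoring and the LT/CT comparison discussed in the next paragraph could be sketched as follows. The weights are placeholders of comparable size, not the coefficients fitted from the TOOT data, and the synthetic columns are likewise illustrative assumptions. </Paragraph>
<Paragraph>
# Sketch: score each dialogue with a (placeholder-weight) performance function
# and compare mean LT and CT performance.
import numpy as np
import pandas as pd
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "strategy": ["LT"] * 24 + ["CT"] * 24,
    "comp": rng.integers(0, 2, 48),
    "mean_rec": rng.uniform(0.6, 1.0, 48),
    "barge_ins": rng.poisson(2, 48),
})

def z(col):
    """Z-score normalize a column."""
    return (col - col.mean()) / col.std()

# Placeholder weights of comparable magnitude; barge-ins count against performance.
df["performance"] = 0.4 * z(df.comp) + 0.4 * z(df.mean_rec) - 0.4 * z(df.barge_ins)

lt = df.loc[df.strategy == "LT", "performance"]
ct = df.loc[df.strategy == "CT", "performance"]
print(lt.mean(), ct.mean(), f_oneway(lt, ct))
</Paragraph>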
<Paragraph position="12"> Applying the performance function to each of our 48 dialogues yields a performance estimate for each dialogue. Analysis with these estimates shows no significant differences between mean LT and CT performance. This result is consistent with the ANOVA result, where only one of the three (comparably weighted) factors in the performance function depends on response strategy (Completed). Note that for Tasks 2 and 4, the predictions in Table 1 do not hold for overall performance, despite the ANOVA results that the predictions do hold for some evaluation measures (e.g., Completed in Task 2).</Paragraph> </Section> </Paper>