<?xml version="1.0" standalone="yes"?>
<Paper uid="P02-1049">
  <Title>What's the Trouble: Automatically Identifying Problematic Dialogues in DARPA Communicator Dialogue Systems</Title>
  <Section position="4" start_page="0" end_page="0" type="metho">
    <SectionTitle>
CLASSIFICATION ACCURACY, while REGRESSION
</SectionTitle>
    <Paragraph position="0"> trees derive a set of queries to maximize the CORRELATION of the predicted value and the original value. Like other machine learners, CART takes as input the allowed values for the response variables; the names and ranges of values of a xed set of input features; and training data specifying the response variable value and the input feature values for each example in a training set. Below, we specify how the PDI was trained, rst describing the corpus, then the response variables, and nally the input features derived from the corpus.</Paragraph>
    <Paragraph position="1"> Corpus: We train and test the PDI on the DARPA Communicator October-2001 corpus of 1242 dialogues. This corpus represents interactions with real users, with eight different Communicator travel planning systems, over a period of six months from April to October of 2001. The dialogue tasks range from simple domestic round trips to multileg international trips requiring both car and hotel arrangements. The corpus includes log les with logged events for each system and user turn; hand transcriptions and automatic speech recognizer (ASR) transcription for each user utterance; information derived from a user pro le such as user dialect region; and a User Satisfaction survey and hand-labelled Task Completion metric for each dialogue. We randomly divide the corpus into 80% training (894 dialogues) and 20% testing (248 dialogues).</Paragraph>
    <Paragraph position="2"> De ning the Response Variables: In principle, either low User Satisfaction or failure to complete the task could be used to de ne problematic dialogues. Therefore, both of these are candidate response variables to be examined. The User Satisfaction measure derived from the user survey ranges between 5 and 25. Task Completion is a ternary measure where no Task Completion is indicated by 0, completion of only the airline itinerary is indicated by 1, and completion of both the airline itinerary and ground arrangements, such as car and hotel bookings, is indicated by 2. We also de ned a binary version of Task Completion, where Binary Task Completion=0 when no task or subtask was complete (equivalent to Task Completion=0), and Binary Task Completion=1 where all or some of the task was complete (equivalent to Task Completion=1 or Task Completion=2).</Paragraph>
    <Paragraph position="3"> Figure 1 shows the frequency of dialogues for varying User Satisfaction for cases where Task Completion is 0 (solid line) and Task Completion is greater than 0 (dotted lines). Note that Task Completion is 1 or 2 for a number of dialogues for which User Satisfaction is low. Figure 2 illustrates such a dialogue (system turns are labelled S, user turns as U, and ASR hypotheses as REC). Here, low User Satisfaction may be due to the fact that the user had to repeat herself many times before the system understood the departure city. An automatic surrogate for ASR accuracy (such as ASR con dence) would  Completion is 0, 1 or 2 not be adequate for identifying this problematic dialogue, because here either the dialogue manager or the SLU component is at fault. Another dialogue subset of interest in Figure 1 is one for which Task Completion is 0 but User Satisfaction is high. A common cause for non-Task Completion in these dialogues is database access problems. The fact that the interaction went well until this point is not captured by the Task Completion metric. Thus we decided to use both User Satisfaction and Task Completion as response variables in separate experiments in order to allow the PDI to capture the relationship between these two variables.</Paragraph>
    <Paragraph position="4"> Input Features: Both User Satisfaction and Task Completion predictors are trained using two types of automatically extractable features: (1) log le features; and (2) system dialogue act types. The log le features include ef ciency metrics such as the number of system and user turns spent on the task; the total time of the dialogue in milliseconds; and situational metrics, such as phone type1 which affects ASR. The system dialogue act types are extracted from the log les using the DATE (Dialogue Act Tagging for Evaluation) automatic tag1We assume this is automatically derivable by automatic number identi cation (ANI).</Paragraph>
    <Paragraph position="5"> S13: Sorry, I must have misunderstood. I just heard you say that the departure city was Minto, Arkansas.</Paragraph>
    <Paragraph position="6"> Was the departure city Minneapolis-St-Paul, Minto, Arkansas or Milan, Italy?</Paragraph>
  </Section>
  <Section position="5" start_page="0" end_page="0" type="metho">
    <SectionTitle>
U14: THE DEPARTURE CITY IS MINNEAPOLIS
SAINT PAUL
</SectionTitle>
    <Paragraph position="0"> REC: the departure city in minneapolis saint paul S15: Here's a summary of the second ight segment: I've got you leaving from Minto, Arkansas on Friday, October twelfth in the morning and arriving in newark ying on continental. Is this correct? U16: NO I WANT TO LEAVE FROM MINNEAPOLIS REC: no i want to leave from minneapolis Repeat dialogue segment 3 times S25: Sorry, I must have misunderstood. I just heard you say that the departure city was Minto, Arkansas. Was the departure city Minto, Arkansas or Minneapolis-St-Paul ?</Paragraph>
  </Section>
  <Section position="6" start_page="0" end_page="0" type="metho">
    <SectionTitle>
U26: THE DEPARTURE CITY IS MINNEAPOLIS
SAINT PAUL
</SectionTitle>
    <Paragraph position="0"> REC: the departure city minneapolis by all S27: Recapping the second ight segment: I have you leaving out of minneapolis-st-paul on Friday, October twelfth in the morning and arriving in newark ying on continental. Is this correct?  ging scheme (Walker et al., 2001). The purpose of these features is to extract numerical correlates of system dialogue behaviors. This dialogue act labelling procedure is detailed in Section 3.</Paragraph>
    <Paragraph position="1"> Figure 3 summarizes the types of features used to train the User Satisfaction predictor. In addition to the ef ciency metrics and the DATE labels, Task Success can itself be used as a predictor. This can either be the hand-labelled feature or an approximation as predicted by the Task Completion Predictor, described in Section 4. Figure 4 shows the system design for automatically predicting User Satisfac-</Paragraph>
  </Section>
  <Section position="7" start_page="0" end_page="0" type="metho">
    <SectionTitle>
3 Extracting DATE Features
</SectionTitle>
    <Paragraph position="0"> The dialogue act labelling of the corpus follows the DATE tagging scheme (Walker et al., 2001).</Paragraph>
    <Paragraph position="1"> In DATE, utterance classi cation is done along three cross-cutting orthogonal dimensions. The CONVERSATIONAL-DOMAIN dimension speci es the domain of discourse that an utterance is about.</Paragraph>
    <Paragraph position="2"> The SPEECH ACT dimension captures distinctions between communicative goals such as requesting information (REQUEST-INFO) or presenting information (PRESENT-INFO). The TASK-SUBTASK dimension speci es which travel reservation subtask the utterance contributes to. The SPEECH ACT and CONVERSATIONAL-DOMAIN dimensions are general across domains, while the TASK-SUBTASK dimension is domain- and sometimes system-speci c.</Paragraph>
    <Paragraph position="3"> Within the conversational domain dimension, DATE distinguishes three domains (see Figure 5).</Paragraph>
    <Paragraph position="4"> The ABOUT-TASK domain is necessary for evaluating a dialogue system's ability to collaborate with a speaker on achieving the task goal. The ABOUT-COMMUNICATION domain re ects the system goal of managing the verbal channel of communication and providing evidence of what has been understood. All implicit and explicit con rmations are about communication. The ABOUT-SITUATION-FRAME domain pertains to the goal of managing the user's expectations about how to interact with the system.</Paragraph>
    <Paragraph position="5"> DATE distinguishes 11 speech acts. Examples of each speech act are shown in Figure 6.</Paragraph>
    <Paragraph position="6"> The TASK-SUBTASK dimension distinguishes among 28 subtasks, some of which can also be grouped at a level below the top level task. The TOP-LEVEL-TRIP task describes the task which contains as its subtasks the ORIGIN, DESTINATION,  both the HOTEL and CAR-RENTAL subtasks. The HOTEL task includes both the HOTEL-NAME and HOTEL-LOCATION subtasks.2 For the DATE labelling of the corpus, we implemented an extended version of the pattern matcher that was used for tagging the Communicator June 2000 corpus (Walker et al., 2001). This method identi ed and labelled an utterance or utterance sequence automatically by reference to a database of utterance patterns that were hand-labelled with the DATE tags. Before applying the pattern matcher, a named-entity labeler was applied to the system utterances, matching named-entities relevant in the travel domain, such as city, airport, car, hotel, airline names etc.. The named-entity labeler was also applied to the utterance patterns in the pattern database to allow for generality in the expression of communicative goals speci ed within DATE. For this named-entity labelling task, we collected vocabulary lists from the sites, which maintained such lists for 2ABOUT-SITUATION-FRAME utterances are not speci c to any particular task and can be used for any subtask, for example, system statements that it misunderstood. Such utterances are given a meta dialogue act status in the task dimension. developing their system.3 The extension of the pattern matcher for the 2001 corpus labelling was done because we found that systems had augmented their inventory of named entities and utterance patterns from 2000 to 2001, and these were not accounted for by the 2000 tagger database. For the extension, we collected a fresh set of vocabulary lists from the sites and augmented the pattern database with additional 800 labelled utterance patterns. We also implemented a contextual rule-based postprocessor that takes any remaining unlabelled utterances and attempts to label them by looking at their surrounding DATE labels. More details about the extended tagger can be found in (Prasad and Walker, 2002).</Paragraph>
    <Paragraph position="7"> On the 2001 corpus, we were able to label 98.4a1 of the data. A hand evaluation of 10 randomly selected dialogues from each system shows that we achieved a classi cation accuracy of 96a1 at the utterance level.</Paragraph>
    <Paragraph position="8"> For User Satisfaction Prediction, we found that the distribution of DATE acts were better captured by using the frequency normalized over the total number of dialogue acts. In addition to these unigram proportions, the bigram frequencies of the DATE dialogue acts were also calculated. In the following two sections, we discuss which DATE labels are discriminatory for predicting Task Completion and User Satisfaction.</Paragraph>
  </Section>
  <Section position="8" start_page="0" end_page="0" type="metho">
    <SectionTitle>
4 The Task Completion Predictor
</SectionTitle>
    <Paragraph position="0"> In order to automatically predict Task Completion, we train a CLASSIFICATION tree to categorize dialogues into Task Completion=0, Task Completion=1 or Task Completion=2. Recall that a CLASSIFICATION tree attempts to maximize CLASSIFICATION ACCURACY, results for Task Completion are thus given in terms of percentage of dialogues correctly classi ed. The majority class base-line is 59.3% (dialogues where Task Completion=1).</Paragraph>
    <Paragraph position="1"> The tree was trained on a number of different input features. The most discriminatory ones, however, were derived from the DATE tagger. We use the primitive DATE tags in conjunction with a feature called GroundCheck (GC), a boolean feature indicating the existence of DATE tags related to making ground arrangements, speci cally request info:hotel name, request info:hotel location, offer:hotel and offer:rental.</Paragraph>
    <Paragraph position="2"> Table 1 gives the results for Task Completion pre- null Completion (BTC) prediction results, using automatic log le features (ALF), GroundCheck (GC) and DATE unigram frequencies The rst row is for predicting ternary Task Completion, and the second for predicting binary Task Completion. Using automatic log le features (ALF) is not effective for the prediction of either types of Task Completion. However, the use of GroundCheck results in an accuracy of 79% for the ternary Task Completion which is signi cantly above the base-line (df = 247, t = -6.264, p a2 .0001). Adding in the other DATE features yields an accuracy of 85%. For Binary Task Completion it is only the use of all the DATE features that yields an improvement over the baseline of 92%, which is signi cant (df = 247, t = 5.83, p a2 .0001).</Paragraph>
    <Paragraph position="3"> A diagram of the trained decision tree for ternary Task Completion is given in Figure 7. At any junction in the tree, if the query is true then one takes the path down the right-hand side of the tree, otherwise one takes the left-hand side. The leaf nodes contain the predicted value. The GroundCheck feature is at the top of the tree and divides the data into Task Completiona2 2 and Task Completiona3 2.</Paragraph>
    <Paragraph position="4"> If GroundChecka3 1, then the tree estimates that Task Completion is 2, which is the best t for the data given the input features. If GroundChecka3 0 and there is an acknowledgment of a booking, then probably a ight has been booked, therefore, Task Completion is predicted to be 1. Interestingly, if there is no acknowledgment of a booking then Task Completiona3 0, unless the system got to the stage of asking the user for an airline preference and if request info:top level tripa2 2. More than one of these DATE types indicates that there was a problem in the dialogue and that the information gathering phase started over from the beginning.</Paragraph>
    <Paragraph position="5"> The binary Task Completion decision tree simply checks if an acknowledgement: ight booking has occurred. If it has, then Binary Task Completion=1, otherwise it looks for the DATE act about situation frame:instruction:meta situation info, which captures the fact that the system has told the user what the system can and cannot do, or has informed the user about the current state of the task. This must help with Task Completion, as the tree tells us that if one or more of these acts are observed then Task Completion=1, otherwise Task  tures (LF), adding unigram proportions and bigram counts, for trees tested on either hand-labelled (HL) or automatically derived Task Completion (TC) and Binary Task Completion (BTC) Quantitative Results: Recall that REGRESSION trees attempt to maximize the CORRELATION of the predicted value and the original value. Thus, the results of the User Satisfaction predictor are given in terms of the correlation between the predicted User Satisfaction and actual User Satisfaction as calculated from the user survey. Here, we also provide Ra4 for comparison with previous studies. Table 2 gives the correlation results for User Satisfaction for different feature sets. The User Satisfaction predictor is trained using the hand-labelled Task Completion feature for a topline result and using the automatically obtained Task Completion (Auto TC) for the fully automatic results. We also give results using Binary Task Completion (BTC) as a substitute for Task Completion. The rst column gives results using features extracted from the log le; the second column indicates results using the DATE unigram proportions and the third column indicates results when both the DATE unigram and bigram features are available.</Paragraph>
    <Paragraph position="6"> The rst row of Table 2 indicates that performance across the three feature sets is indistinguishable when hand-labelled Task Completion (HL TC) is used as the Task Completion input feature. A comparison of Row 1 and Row 2 shows that the PDI performs signi cantly worse using only automatic features (z = 3.18). Row 2 also indicates that the DATE bigrams help performance, although the difference between R = .438 and R = .472 is not signi cant. The third and fourth rows of Table 1 indicate that for predicting User Satisfaction, Binary Task Completion is as good as or better than Ternary Task Completion. The highest correlation of 0.614 (a5 a4 a3a7a6a9a8a11a10a12a10 ) uses hand-labelled Binary Task Completion and the log le features and DATE uni-gram proportions and bigram counts. Again, we see that the Automatic Binary Task Completion (Auto BTC) performs signi cantly worse than the hand-labelled version (z = -3.18). Row 4 includes the best totally automatic system: using Automatic Binary Task Completion and DATE unigrams and bigrams yields a correlation of 0.484 (a5a13a4a14a3a15a6a9a16a12a8 ). Regression Tree Interpretation: It is interesting to examine the trees to see which features are used for predicting User Satisfaction. A metric called Feature Usage Frequency indicates which features are the most discriminatory in the CART tree. Speci cally, Feature Usage Frequency counts how often a feature is queried for each data point, normalized so that the sum of Feature Usage Frequency values for all the features sums to one. The higher a feature is in the tree, the more times it is queried. To calculate the Feature Usage Frequency, we grouped the features into three types: Task Completion, Logle features and DATE frequencies. Feature Usage Frequency for the log le features is 37%. Task Completion occurs only twice in the tree, however, it makes up 31because it occurs at the top of the tree. The Feature Usage Frequency for DATE category frequency is 32%. We will discuss each of these three groups of features in turn.</Paragraph>
    <Paragraph position="7"> The most used log le feature is TurnsOnTask which is the number of turns which are taskoriented, for example, initial instructions on how to use the system are not taken as a TurnOnTask.</Paragraph>
    <Paragraph position="8"> Shorter dialogues tend to have a higher User Satisfaction. This is re ected in the User Satisfaction scores in the tree. However, dialogues which are long (TurnsOnTask a17 79 ) can be satisfactory (User Satisfaction = 15.2) as long as the task that is completed is long, i.e., if ground arrangements are made in that dialogue (Task Completion=2). If ground arrangements are not made, the User Satisfaction is lower (11.6). Phone type is another important feature queried in the tree, so that dialogues conducted over corded phones have higher satisfaction. This is likely to be due to better recognition performance from corded phones.</Paragraph>
    <Paragraph position="9"> As mentioned previously, Task Completion is at the top of the tree and is therefore the most queried feature. This captures the relationship between Task Completion and User Satisfaction as illustrated in  Finally, it is interesting to examine which DATE tags the tree uses. If there have been more than three acknowledgments of bookings, then several legs of a journey have been successfully booked, therefore User Satisfaction is high. In particular, User Satisfaction is high if the system has asked if the user would like a price for their itinerary which is one of the nal dialogue acts a system does before the task is completed. The DATE act about comm:apology:meta slu reject is a measure of the system's level of misunderstanding. Therefore, the more of these dialogue act types the lower User Satisfaction. This part of the tree uses length in a similar way described earlier, whereby long dialogues are only allocated lower User Satisfaction if they do not involve ground arrangements. Users do not seem to mind longer dialogues as long as the system gives a number of implicit con rmations. The dialogue act request info:top level trip usually occurs at the start of the dialogue and requests the initial travel plan. If there are more than one of this dialogue act, it indicates that a START-OVER occurred due to system failure, and this leads to lower User Satisfaction. A rule containing the bigram request info:depart day month date+USER states that if there is more than one occurrence of this request then User Satisfaction will be lower. USER is the single category used for user-turns. No automatic method of predicting user speech act is available yet for this data. A repetition of this DATE bigram indicates that a misunderstanding occurred the rst time it was requested, or that the task is multi-leg in which case User Satisfaction is generally lower.</Paragraph>
    <Paragraph position="10"> The tree that uses Binary Task Completion is identical to the tree described above, apart from one binary decision which differentiates dialogues where Task Completion=1 and Task Completion=2.</Paragraph>
    <Paragraph position="11"> Instead of making this distinction, it just uses dialogue length to indicate the complexity of the task. In the original tree, long dialogues are not penalized if they have achieved a complex task (i.e. if Task Completion=2). The Binary Task Completion tree has no way of making this distinction and therefore just penalizes very long dialogues (where TurnsOnTask a17 110). The Feature Usage Frequency for the Task Completion features is reduced from 31% to 21%, and the Feature Usage Frequency for the logle features increases to 47%. We have shown that this more general tree produces slightly better results. null</Paragraph>
  </Section>
class="xml-element"></Paper>