File Information
File: 05-lr/acl_arc_1_sum/cleansed_text/xml_by_section/metho/87/p87-1030_metho.xml
Size: 19,317 bytes
Last Modified: 2025-10-06 14:12:01
<?xml version="1.0" standalone="yes"?> <Paper uid="P87-1030"> <Title>A Model For Generating Better Explanations</Title> <Section position="3" start_page="215" end_page="215" type="metho"> <SectionTitle> 2. The User Model </SectionTitle> <Paragraph position="0"> Our model requires a database of domain dependent plans and goals. We assume that the goals of the user in the immediate discourse are available by methods such as those specified in (Allen 1983; Carberry 1983; Litman and Allen 1984; Pollack 1984, 1986). The model of a user contains, in addition to the user's immediate discourse goals, his background, higher domain goals, and plans specifying how the higher domain goals will be accomplished. In the student-advisor domain, for example, the user model will initially contain some default goals that the user can be expected to hold, such as avoiding failing marks on his permanent record. It will also contain those goals of the user that can be inferred or known from the system's knowledge of the user's background, such as the attainment of a degree. New goals and plans will be added to the model (e.g. the student's preferences or intentions) as they are derived from the discourse. For example, if the user displays or mentions a predilection for numerical analysis courses, this would be installed in the user model as a goal to be achieved.</Paragraph>
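To make this concrete, the user model can be pictured as a small database of facts. The following is a minimal sketch in standard Prolog (the implementation described below uses Waterloo Unix Prolog); all predicate names and terms here are illustrative stand-ins, not the paper's actual representation:

    % Default goals any user is assumed to hold.
    domain_goal(ariadne, avoid_failing_marks).

    % Goals known or inferred from the user's background.
    domain_goal(ariadne, get_degree).
    background(ariadne, taken(cs375)).    % hypothetical course history

    % Goals derived from the discourse, e.g. a displayed
    % predilection for numerical analysis courses.
    derived_goal(ariadne, take_course(numerical)).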
</Section> <Section position="4" start_page="215" end_page="216" type="metho"> <SectionTitle> 3. The Algorithm </SectionTitle> <Paragraph position="0"> Explanations and predictions of people's choices in everyday life are often founded on the assumption of human rationality. Allen's (1983) work in recognizing intentions from natural language utterances makes the assumption that &quot;people are rational agents who are capable of forming and executing plans to achieve their goals&quot; (see also Cohen and Levesque 1985). Our algorithm reasons about the user's goals and plans according to some postulated guiding principles of action to which a reasonable agent will try to adhere in deciding between competing goals and methods for achieving those goals. If the user does not &quot;live up&quot; to these principles, the response generated by the algorithm will include how the principles are violated and also some alternatives that are better (if they exist) because they do not violate the principles. Some of these principles are made explicit in the following description of the algorithm (see van Beek 1986 for a more complete description).</Paragraph> <Paragraph position="1"> The algorithm begins by checking whether the user's query (e.g. &quot;Can I enroll in CS 375?&quot;) is possible or not possible (refer to figure 1). If the query is not possible, the user is informed and the explanation includes the reasons for the failure (step 1.0 of the algorithm). Alternative plans that are possible and help achieve the user's intended goal are searched for and presented to the user. But before presenting any alternative, the algorithm, so as not to mislead the user, ensures that the alternative is compatible with the higher domain goals of the user (step 1.1).</Paragraph> <Paragraph position="2"> If the query is possible, control passes to step 2.0, where the system determines whether the stated goal does, as the user believes, help achieve the intended goal. Given that the user presents a plan that he believes will accomplish his intended goals, the system must check if the plan succeeds in its intentions (step 2.1 of the algorithm). As is shown in the algorithm, if the relationship does not hold or the plan is not executable, the user should be informed. Here it is possible to provide additional unrequested information necessary to achieve the goal (cf. Allen 1983).</Paragraph> <Paragraph position="3"> In planning a response, the system should ensure that the current goals, as expressed in the user's queries, are compatible with the user's higher domain goals (step 2.2 of the algorithm). For example, a plan that leads to the attainment of one goal may cause the non-attainment of another, such as when a previously formed plan becomes invalid or a subgoal becomes impossible to achieve. A user may expect to be informed of such consequences, particularly if the goal that cannot now be attained is a goal the user values highly.</Paragraph> <Paragraph position="4"> The system can be additionally cooperative by suggesting better alternatives if they exist (step 2.3 of the algorithm). Furthermore, the definitions of both better and possible alternatives are relative to a particular user. In particular, if a user has several compatible goals, he should adopt the plan that will contribute to the greatest number of his goals. As well, goals that are valued absolutely higher than other goals are the goals to be achieved. A user should seek plans of action that will satisfy those goals, and plans to satisfy his other goals should be adopted only if they are compatible with the satisfaction of those goals he values most highly.</Paragraph> <Paragraph position="5"> The following fragment of the algorithm (figure 1) shows the response schema for the not-possible case:</Paragraph>

    Message: No, [query] is not possible because ...
    If ( there exist alternatives that help achieve the intended goal
         and are compatible with the higher domain goals ) then
        Message: However, you can [alternatives]
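The control structure just described might be organized as follows. This is a sketch only, written in standard Prolog rather than the paper's Waterloo Unix Prolog, and every predicate it calls is a hypothetical stand-in for machinery the paper leaves to figure 1:

    % Sketch of the response algorithm, steps 1.0-2.3.
    respond(User, Stated, Intended, Response) :-
        (   \+ possible(User, Stated)
        ->  % Step 1.0: report the failure and its reasons.
            failure_reasons(User, Stated, Reasons),
            % Step 1.1: offer only alternatives compatible with the
            % user's higher domain goals, so as not to mislead.
            findall(A,
                    ( alternative_for(User, Intended, A),
                      compatible(User, A) ),
                    Alts),
            Response = not_possible(Reasons, Alts)
        ;   \+ achieves(User, Stated, Intended)
        ->  % Step 2.1: the plan is possible but misses its intention.
            Response = misses_intention(Stated, Intended)
        ;   \+ compatible(User, Stated)
        ->  % Step 2.2: warn of conflict with a higher domain goal.
            Response = incompatible_with_domain_goals(Stated)
        ;   % Step 2.3: the plan works; still search for a better one.
            best_alternative(User, Stated, Intended, Response)
        ).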
</Section> <Section position="5" start_page="216" end_page="218" type="metho"> <SectionTitle> 4. An Example </SectionTitle> <Paragraph position="0"> Until now we have discussed a model for generating better, user-specific explanations. A test version of this model has been implemented in a student-advisor domain using Waterloo UNIX Prolog. Below we present an example to illustrate how the algorithm and the model of the user work together to produce these responses and to illustrate some of the details of the implementation.</Paragraph> <Paragraph position="1"> Given a query by the user, the system determines whether the stated goal of the query is possible or not possible and whether the stated goal will help achieve the intended goal. In the hypothetical situation shown in figure 2, the stated goal of enrolling in CS572 is possible and the intended goal of taking a numerical analysis course is satisfied 1. The system then considers the background of the user (e.g. the courses taken), the background of the domain (e.g. what courses are offered) and a query from the user (e.g. &quot;Can I enroll in CS572?&quot;), and ensures that the goal of the query is compatible with the attainment of the overall domain goal. In this example, the user's stated goal of enrolling in a particular course is incompatible with the user's higher domain goal of achieving a degree because several preconditions fail. That is, given the background of the user, the goal of the query to enroll in CS572 will not help achieve the domain goal. Knowledge of the incompatibility and the failed preconditions are used to form the first sentence of the system's response.</Paragraph>

1 Recall that we are assuming the stated and intended goals are supplied to our model. This particular intended goal, hypothetically inferred from the stated goal and previous discourse, was chosen to illustrate the use of the stated, intended, and domain goals in forming a best response. The case of a conflict between stated and intended goal would be handled in a similar fashion to the conflict between stated and domain goal, shown in this example.

Figure 2:
    Scenario: The user asks about enrolling in a 500 level course.
    Only a certain number of 500 level courses can be credited towards
    a degree and the user has already taken that number of 500 level
    courses.

    Stated goal:   Enroll in the course.
    Intended goal: Take a numerical analysis course.
    Domain goal:   Get a degree.

    User:   Can I enroll in CS 572 (Linear Algebra)?
    System: Yes, but it will not get you further towards your degree
            since you have already met your 500 level requirement.
            Some useful courses would be CS 673 or CS 674.

<Paragraph position="2"> To suggest better alternatives, the system goes into a planning stage. There is stored in the system a general plan for accomplishing the higher domain goal of the user. This plan is necessarily incomplete and is used by the system to track the user by instantiating the plan according to the user's particular case. The system considers alternative plans to achieve the user's intended goal that are compatible with the domain goal. For this particular example, the system discovers other courses the user can add that will help achieve the higher goal. To actually generate better alternatives and to check whether the user's stated goal is compatible with the user's domain goal, a module of the implemented system is a Horn clause theorem prover, built on top of Waterloo Unix Prolog, with the feature that it records a history of the deduction. The theorem prover generates possible alternative plans by performing deduction on the goal at the level of the user's query. That is, the goal is &quot;proven&quot; given the &quot;actions&quot; (e.g. enroll in a course) and the &quot;constraints&quot; (e.g. prerequisites of the course were taken) of the domain.</Paragraph>
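The history-recording feature can be pictured with a textbook Prolog meta-interpreter. The sketch below, in standard Prolog, only illustrates the idea (the paper's module is built on Waterloo Unix Prolog): it threads a list through the proof so that, on success or failure, the steps taken can be reported back to the user.

    % prove(Goal, History): prove Goal, recording each goal reduced
    % so that the deduction can later be explained.
    prove(true, []) :- !.
    prove((A, B), History) :- !,
        prove(A, HA),
        prove(B, HB),
        append(HA, HB, History).
    prove(Goal, [Goal | History]) :-
        clause(Goal, Body),      % assumes the domain clauses are
        prove(Body, History).    % accessible to clause/2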
<Paragraph position="3"> In the example of figure 2, the expert system has the following Horn clauses in its knowledge base:

    course(cs673, numerical)
    course(cs674, numerical)

Figure 3 shows a portion of the simplified domain plan for getting a degree:

    get_degree(Student, Action) <-
        receive_credit(Student, Course, Action);
    get_degree(Student, []);

    receive_credit(Student, Course, Action) <-
        counts_for_credit(Student, Course),
        enrolled(Student, Course, credit, Action),
        do_work(Student, Course),
        passing_grade(Student, Course);
    receive_credit(Student, Course, Action) <-
        enrolled(Student, Course, credit, []),
        enrolled(Student, Course, incomplete, Action),
        complete_work(Student, Course),
        passing_grade(Student, Course);

    counts_for_credit(Student, Course) <-
        is_500_level(Course),
        500_level_taken(Student, N),
        lt(N, 2);
    counts_for_credit(Student, Course) <-
        is_600_level(Course),
        600_level_taken(Student, N),
        lt(N, 5);

Consider the first clause of the counts_for_credit predicate. This clause states that a course will count for credit if it is a 500 level course and fewer than two 500 level courses have already been counted for credit (since in our hypothetical world, at most two 500 level courses can be counted for credit towards a degree). The second clause is similar. It states the conditions under which a 600 level course can be counted for credit.</Paragraph> <Paragraph position="4"> The domain plan is then employed to generate an appropriate response. The clauses can be used in two ways: (i) to return an action that will help achieve a goal and (ii) to check whether a particular action is a possible step in a plan to achieve a goal. In the first use, the Action parameter is uninstantiated (a variable), the theorem prover is applied to the clause, and, as a result, the Action parameter is instantiated with an action the user could perform towards achieving his goal. In the second case, the Action parameter is bound to a particular action and then the theorem prover is applied. If the proof succeeds, the particular action is a valid step in a plan; if the proof fails, it is not valid and the history of the deduction will show why. In this example, enrolling in CS673 is a valid step in a plan for achieving a degree.</Paragraph>
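A hypothetical session illustrating the two uses, written against the prove/2 sketch above; the concrete action terms are invented for illustration and are not the paper's representation:

    % (i) Generate: Action is left unbound, and the prover
    %     instantiates it with a step the user could perform.
    ?- prove(get_degree(ariadne, Action), History).
    %      e.g. Action = enroll(cs673, credit)

    % (ii) Validate: Action is bound before the proof. Success means
    %      the action is a valid plan step.
    ?- prove(get_degree(ariadne, enroll(cs673, credit)), History).
    %      succeeds: a valid step in a plan for the degree
    ?- prove(get_degree(ariadne, enroll(cs572, credit)), History).
    %      fails: the recorded history shows counts_for_credit
    %      failing, since the 500 level quota is already met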
<Paragraph position="5"> Recall that the system will generate alternative plans even if the user's query is a valid plan, in an attempt to find a better solution for the user. The (possibly) multiple alternative plans are then potential candidates for presenting to the user. These candidates are pruned by ranking them according to the heuristic of &quot;which plan would get the user further towards his goals&quot;. Thus, the better alternatives are the ones that help satisfy multiple goals or multiple subgoals 2. One way in which the system can reduce alternatives is to employ previously derived goals of the user such as those that indicate certain preferences or interests. In the course domain, for instance, the user may prefer taking numerical analysis courses. For the example in figure 2, the suggested alternatives of CS673 and CS674 help towards the user's goal of getting a degree and the user's goal of taking numerical analysis courses and so are preferable 3.</Paragraph>

2 Part of our purpose is to characterize domain independent criteria for &quot;betterness&quot;. Domain dependent knowledge could also be used to further reduce the alternatives displayed to the user. For example, in the course domain a rule of the form &quot;A mandatory course is preferable to a non-mandatory course&quot; may help eliminate presentation of certain options.

3 Note that in this example the user's intended goal also indicates a preference. Other user preferences may have been previously specified; these would be used to influence the response in a similar fashion.

</Section> <Section position="6" start_page="218" end_page="219" type="metho"> <SectionTitle> 5. Joshi Revisited </SectionTitle> <Paragraph position="0"> The discussion in the previous section showed how our model can recognize when a user's plan is incompatible with his domain goals and present better alternative plans that are user-specific. Here we present examples of how our model can generate the responses enumerated by Joshi. The examples further illustrate how the addition of the user's overall goals allows us to compare and select better alternatives to a user's plan.</Paragraph> <Paragraph position="1"> Figure 4 shows two different responses to the same question: &quot;Can I drop CS 577?&quot; The student asking the question is doing poorly in the course and wishes to drop it to avoid failing it. The goals of the query are passed to the Prolog implementation and the response generated depends on these goals, the information in the model of the user, and on external conditions such as deadlines for changing status in a course. For example purposes, the domain information is read in from a file (e.g. consult(example_1)). Figure 3 shows the clausal representation of the domain goals and plans used in this example (the representations for the goal of avoiding a failing mark are not shown but are similar).</Paragraph>

Figure 4:
    ? query(change_status(ariadne, 577, credit, nil),
            not_fail(ariadne, 577, Action));
    Yes, change_status(ariadne, 577, credit, nil) is possible.
    But, not_fail(ariadne, 577, _461) is not achieved since ...
        is_failing(ariadne, 577)
    However, you can ...
        change_status(ariadne, 577, credit, incomplete)
    This will also help towards receive_credit

    ? query(change_status(andrew, 577, credit, nil),
            not_fail(andrew, 577, Action));
    Yes, change_status(andrew, 577, credit, nil) is possible.
    But, there is a better way ...
        change_status(andrew, 577, credit, incomplete)
    Because this will also help towards receive_credit

<Paragraph position="2"> Example 1: Here the query is possible (the student can drop the course) but it fails in its intention (dropping the course doesn't enable the student to avoid failing the course). This is case 2.1 of the algorithm. The system now looks for alternatives that will help achieve the student's intended goal and determines that two alternative plans are possible: the student could either change to audit status or take an incomplete in the course. The plan to take an incomplete is presented to the user because it is considered the best of the two alternatives; it will allow the student to still achieve another of his goals: receiving credit for the course.</Paragraph> <Paragraph position="3"> Example 2: Here the query is possible (the student can drop the course) and is successful in its intention (dropping the course does enable the student to avoid failing the course). The system now looks for a better alternative to the student's plan of dropping the course (case 2.3 of the algorithm) and determines an alternative that achieves the intended goal of not failing the course but also achieves another of the student's domain goals: receiving credit for the course. This better alternative is then presented to the student.</Paragraph>
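The &quot;best of the two alternatives&quot; judgment in Example 1 can be made concrete with a toy goal-counting rendering of the ranking heuristic. The sketch below is standard Prolog (aggregate_all/3 as in SWI-Prolog); the furthers/2 facts simply restate Example 1, and none of this is the paper's actual code:

    % Which goals each alternative furthers, per Example 1.
    furthers(change_status(577, audit),      not_fail(577)).
    furthers(change_status(577, incomplete), not_fail(577)).
    furthers(change_status(577, incomplete), receive_credit(577)).

    % Score a plan by the number of goals it helps satisfy.
    score(Plan, N) :-
        aggregate_all(count, furthers(Plan, _Goal), N).

    % Pick the plan that furthers the most goals.
    best(Plans, Best) :-
        aggregate_all(max(N, P),
                      ( member(P, Plans), score(P, N) ),
                      max(_, Best)).

    % ?- best([change_status(577, audit),
    %          change_status(577, incomplete)], Best).
    %    Best = change_status(577, incomplete)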
</Section> <Section position="7" start_page="219" end_page="219" type="metho"> <SectionTitle> 6. Future Work and Conclusion </SectionTitle> <Paragraph position="0"> Future work should include incorporation of existing methods for inferring the user's goals from an utterance and also should include a component for mapping between the Horn clause representation used by the program and the English surface form.</Paragraph> <Paragraph position="1"> An interesting next step would be to investigate combining the present work with methods for varying an explanation from an expert system according to the user's knowledge of the domain. In some domains it is desirable for an expert system to support explanations for users with widely diverse backgrounds. To provide this support, an expert system should also tailor the content of its explanations according to the user's knowledge of the domain. An expert system currently being developed for the diagnosis of a child's learning disabilities and the recommendation of a remedial program provides a good example (Jones and Poole 1985).</Paragraph> <Paragraph position="2"> Psychologists, administrators, teachers, and parents are all potential audiences for explanations. As well, members within each of these groups will have varying levels of expertise in educational diagnosis. Cohen and Jones (1986; see also van Beek and Cohen) suggest that the user model begin with default assumptions based on the user's group and be updated as information is exchanged in the dialogue. In formulating a response, the system determines the information relevant to answering the query and includes that portion of the information believed to be outside of the user's knowledge.</Paragraph> <Paragraph position="3"> We have argued that, in generating explanations, we can and should consider the user's goals, plans for achieving goals, and preferences among these goals and plans. Our implementation has supported the claim that this approach is useful in an expert advice-giving environment where the user and the system work cooperatively towards common goals through the dialogue, and the user's utterances may be viewed as actions in plans for achieving those goals. We believe the present work is a small but nevertheless worthwhile step towards better and user-specific explanations from expert systems.</Paragraph> </Section> </Paper>