<?xml version="1.0" standalone="yes"?>
<Paper uid="C02-1107">
<Title>Example-based Speech Intention Understanding and Its Application to In-Car Spoken Dialogue System</Title>
<Section position="8" start_page="1" end_page="1" type="evalu">
<SectionTitle>1. Morphological and dependency analysis</SectionTitle>
<Paragraph position="0"> For the purpose of example-based speech understanding, morphological and dependency analyses are applied to each user utterance by referring to the dictionary and the parsing rules. Morphological analysis is performed by ChaSen (Matsumoto et al., 1999). Dependency parsing is based on a statistical approach (Matsubara et al., 2002).</Paragraph>
<Paragraph position="1"> 2. Intention inference: As Sections 3 and 4 explain, the intention of the user's utterance is inferred according to its degree of similarity with each sentence in the corpus and the intention 2-gram probabilities.
3. Action: Transfer rules from the user's intentions to the system's actions have been created so that the system can act as the user intends. We have already created the rules for all 78 kinds of intentions. The system decides its actions based on the rules and executes them. After that, it revises the context stack. For example, if a user's utterance is "kono chikaku-ni washoku-no mise ari-masu-ka (Is there a Japanese restaurant near here?)", its intention is "search". Having inferred this intention, the system retrieves the shop information database using keywords such as "washoku (Japanese restaurant)" and "chikaku (near)".
4. Response generation: The system responds based on templates whose slots include the name of the shop, the number of shops, and so on.</Paragraph>
<Section position="1" start_page="1" end_page="1" type="sub_section">
<SectionTitle>6.2 Evaluation of the System</SectionTitle>
<Paragraph position="0"> In order to confirm that the system can behave appropriately by correctly understanding the user's intention, we conducted an experiment with the system. We used the 1,609 driver utterances of Section 5.2.1 as the learning data, together with the intention 2-gram probabilities learned from the 174 dialogues of Section 5.1. Furthermore, 60 driver utterances not included in the learning data were used for the test. We compared the actions based on the inferred intentions with those based on the given correct intentions. The results were classified into four groups: correct, partially correct, incorrect, and no action.</Paragraph>
<Paragraph position="1"> The experimental results are shown in Table 2. The correct rate, including partial correctness, is 76.7% for the given intentions and 60.0% for the inferred intentions. We have confirmed that the system can work appropriately if the correct intentions are inferred.</Paragraph>
<Paragraph position="2"> We investigated the causes of the 14 utterances for which the system did not behave appropriately even when given the correct intentions. Six utterances were due to failures in keyword processing, and eight were due to utterances outside the system's expectations. Improving the transfer rules is expected to be effective for the former. For the latter, returning responses such as "I cannot answer the question. If the questions are about ***, I can do that." is considered effective.</Paragraph>
</Section>
</Section>
</Paper>
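
Editor's sketch for step 2 (intention inference): the paper states only that the intention is chosen from the corpus examples using the utterance similarity of Sections 3 and 4 together with the intention 2-gram probabilities; how the two scores are combined is not specified in this excerpt. The linear interpolation with weight `alpha`, and all names below, are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of example-based intention inference (hypothetical names).
# Each corpus example pairs an analyzed sentence with its annotated intention.
from collections import namedtuple

Example = namedtuple("Example", ["sentence", "intention"])

def infer_intention(utterance, corpus, bigram_prob, prev_intention,
                    similarity, alpha=0.5):
    """Return the intention of the best-scoring corpus example.

    similarity(utterance, sentence) -> float in [0, 1] stands in for the
    dependency-based similarity measure of Sections 3 and 4.
    bigram_prob[(prev, cur)] approximates P(cur | prev) learned from dialogues.
    alpha is an illustrative interpolation weight, not taken from the paper.
    """
    best_intention, best_score = None, float("-inf")
    for ex in corpus:
        sim = similarity(utterance, ex.sentence)
        prob = bigram_prob.get((prev_intention, ex.intention), 0.0)
        score = alpha * sim + (1.0 - alpha) * prob
        if score > best_score:
            best_intention, best_score = ex.intention, score
    return best_intention
```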
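Editor's sketch for step 3 (action): the paper describes transfer rules from intentions to actions, a shop-database lookup keyed on extracted keywords for the "search" intention, and revision of a context stack after each action. The rule table, database layout, and shop name below are hypothetical.

```python
# Minimal sketch of intention-to-action transfer rules (hypothetical names).
def search_shops(keywords, shop_db):
    """Return shops whose record contains every extracted keyword."""
    return [shop for shop in shop_db
            if all(kw in shop["tags"] for kw in keywords)]

TRANSFER_RULES = {
    # intention -> rule(keywords, shop_db); the paper reports rules for
    # all 78 intentions, of which only "search" is sketched here.
    "search": search_shops,
}

def act(intention, keywords, shop_db, context_stack):
    """Execute the action for an inferred intention and revise the context stack."""
    rule = TRANSFER_RULES.get(intention)
    if rule is None:
        return None  # utterance outside the system's expectation
    result = rule(keywords, shop_db)
    context_stack.append({"intention": intention, "result": result})
    return result

# "kono chikaku-ni washoku-no mise ari-masu-ka" -> intention "search",
# keywords "washoku" and "chikaku" (shop entry is invented for illustration).
shops = act("search", ["washoku", "chikaku"],
            [{"name": "Washoku-tei", "tags": ["washoku", "chikaku"]}], [])
```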
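Editor's sketch for step 4 (response generation): the paper only says that responses are produced from templates whose slots include the shop name and the number of shops; the template wording below is invented for illustration.

```python
# Minimal sketch of template-based response generation (hypothetical templates).
RESPONSE_TEMPLATES = {
    "search": "{count} shops were found. The nearest one is {name}.",
    "search_empty": "No matching shops were found near here.",
}

def generate_response(shops):
    """Fill the slots of a response template with the retrieval result."""
    if not shops:
        return RESPONSE_TEMPLATES["search_empty"]
    return RESPONSE_TEMPLATES["search"].format(count=len(shops),
                                               name=shops[0]["name"])
```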
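Editor's sketch for the evaluation measure of Section 6.2: the reported correct rate counts the "correct" and "partially correct" groups together. The tally below is an assumed reading of that measure, not the authors' scoring script.

```python
# Minimal sketch of the correct-rate tally over the four judgement groups.
from collections import Counter

def correct_rate(judgements):
    """judgements: labels from {"correct", "partially correct",
    "incorrect", "no action"}; returns the rate including partial correctness."""
    counts = Counter(judgements)
    accepted = counts["correct"] + counts["partially correct"]
    return accepted / len(judgements)

# e.g. 46 of 60 test utterances judged correct or partially correct
# gives 46/60 = 0.767, matching the 76.7% reported for given intentions.
```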