<?xml version="1.0" standalone="yes"?>
<Paper uid="P06-1108">
<Title>Event Extraction in a Plot Advice Agent</Title>
<Section position="9" start_page="862" end_page="862" type="evalu">
<SectionTitle>
5.4 Results
</SectionTitle>
<Paragraph position="0"> The results are given in Table 5. The majority of the advice was rated overall as &quot;fair.&quot; Only one story received &quot;poor&quot; advice, and a few received &quot;good&quot; advice. However, most of the advice rated as &quot;good&quot; was generated for &quot;excellent&quot; stories, which produce less advice than other types of stories. &quot;Poor&quot; stories received almost entirely &quot;fair&quot; advice, although in one case &quot;poor&quot; advice was generated. In general, the teacher found the &quot;coarse-grained&quot; advice very useful, and was pleased that the agent could detect when a student needed to re-read the story and when a student did not need to write any more. In some cases the specific advice was shown to provide a &quot;crucial detail&quot; and to help &quot;elicit a fact.&quot; However, the advice was often &quot;repetitive&quot; and &quot;badly phrased,&quot; and the specific advice was criticized for often not &quot;being directed enough,&quot; for being &quot;too literal,&quot; and for not being &quot;inferential enough.&quot; The rater also noted that &quot;The program can not differentiate between an unfinished story...and one that is confused&quot; and that &quot;Some why, where and how questions could be used&quot; in the advice.</Paragraph>
</Section>
</Paper>