File Information
File: 05-lr/acl_arc_1_sum/cleansed_text/xml_by_section/concl/83/a83-1011_concl.xml
Size: 4,249 bytes
Last Modified: 2025-10-06 13:55:58
<?xml version="1.0" standalone="yes"?>
<Paper uid="A83-1011">
<Title>OIL ANALYSIS AVAILABLE DRILL STEM TESTS PERFORMED MECHANICAL LOG FILE WELL DRILLING PROBLEM WILDCAT WELL SHELL CURRENT OPERATOR WELL SHOW OF OIL > 2000&quot; TEXACO ORIGINAL OPERATOR</Title>
<Section position="6" start_page="70" end_page="71" type="concl">
<SectionTitle> [3] Conceptual Complexity: </SectionTitle>
<Paragraph position="0"> The number of lines generated in the target query language.</Paragraph>
<Paragraph position="1"> We realize that some users will try to maximize efficient communication by minimizing the number of complete interactions. At the same time, other users will find it easier to enter a minimal request and let the system ask for more information as needed. So while there is an apparent trade-off between the length of the initial request (surface complexity) and the number of interactions needed to fully interpret that request (interactive complexity), we cannot evaluate EXPLORER's effectiveness by trying to minimize one or the other.</Paragraph>
<Paragraph position="3"> We must also note that conceptual complexity as it is defined here can only give a very rough idea of the conceptual content and information processing involved. It might be tempting to treat conceptual complexity as a function of surface complexity and interactive complexity, but any simple decomposition along these lines will be misleading. If a user changes the scale of a map 10 times, we will see a large interactive complexity with no change in conceptual complexity. A more sensitive set of complexity measures will have to be designed before we can expect to see correlations across the various measures.</Paragraph>
<Paragraph position="4"> The results of our trial test period are summarized in Table 1. We see that the average surface complexity of all requests is 25 words, with requests ranging from 1 to 87 words in length. Each request averaged 7 complete interactions, with some taking as few as 3 and others requiring as many as 14 user interactions. The target query language requests averaged 11 lines of code, with a range between 9 and 22 lines.</Paragraph>
<Paragraph position="5"> In terms of performance categories, fully 67% of all requests were A2 requests. Only 10% qualified as A1 requests, with the remaining 23% falling into the A3 category.</Paragraph>
<Paragraph position="6"> A3 requests tended to be slightly more complicated on average than A2 requests, but it is important to note that the most complex requests in terms of all three measures were nevertheless A2 requests. The relatively small percentage of A1 requests may not be significant given the size of our sample, but it is likely that the failed A3 requests would have been A2 requests had they been processed successfully. As the system's hit rate improves, we expect to see the A2 rate rise while the A1 rate remains stable. It is interesting to note that the average surface complexity of the A1 requests is very close to the average surface complexity of the A2 requests.</Paragraph>
<Paragraph position="7"> Almost all of the errors underlying our A3 requests were programmer errors due to an imperfect understanding of user vocabulary or the target query language. This was expected and can only be rectified with continued testing by qualified users. We are extremely pleased to have a 77% success rate at this initial stage of program test-development: EXPLORER's error rate should decrease over time as changes are made to correct the errors we uncover.</Paragraph>
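(Illustrative sketch, not from the original paper: one way the three complexity measures and the Table 1 summary statistics could be tabulated from a session log. The record fields, helper names, and category labels below are assumptions made for illustration, not part of EXPLORER.)

    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class Request:
        text: str           # the user's English request as typed
        interactions: int   # interactive complexity: complete interactions needed
        query_lines: int    # conceptual complexity: lines of target query code generated
        category: str       # performance category: "A1", "A2", or "A3"

    def surface_complexity(request: Request) -> int:
        # Surface complexity: length of the initial request in words.
        return len(request.text.split())

    def summarize(log: list[Request]) -> dict:
        # Average and range for each complexity measure, plus the share of
        # requests falling into each performance category (cf. Table 1).
        surface = [surface_complexity(r) for r in log]
        interactive = [r.interactions for r in log]
        conceptual = [r.query_lines for r in log]
        return {
            "surface": (mean(surface), min(surface), max(surface)),
            "interactive": (mean(interactive), min(interactive), max(interactive)),
            "conceptual": (mean(conceptual), min(conceptual), max(conceptual)),
            "categories": {c: 100.0 * sum(r.category == c for r in log) / len(log)
                           for c in ("A1", "A2", "A3")},
        }

For example, summarize(log)["categories"]["A2"] would give the percentage of requests in the log that fell into the A2 category.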
<Paragraph position="8"> Our experience with EXPLORER suggests that it is impossible to complete a system of this complexity without some such testing phase for feedback purposes. A high degree of cooperation between program designers and intended users is therefore critical in these final stages of system development.</Paragraph>
<Paragraph position="9"> Our next step is to continue testing revised versions of EXPLORER, expanding our user population as the system becomes more competent. At the current rate of user feedback, we project a 3-6 month period of system revisions before we freeze the implementation for a final evaluation.</Paragraph>
</Section>
</Paper>