<?xml version="1.0" standalone="yes"?>
<Paper uid="M93-1012">
  <Title>LANGUAGE SYSTEMS INC: DESCRIPTION OF THE DBG SYSTEM AS USED FOR MUC-5</Title>
  <Section position="1" start_page="0" end_page="122" type="metho">
    <SectionTitle>
LANGUAGE SYSTEMS INC:
DESCRIPTION OF THE DBG SYSTEM
AS USED FOR MUC-5
</SectionTitle>
    <Paragraph position="0"> Our approach to the analysis of natural language text is an in-depth text understanding system that employs linguistic as well as other analytical techniques to interpret the text. Our DBG (Data Base Generation) natural language processing system performs full-scale linguistic analysis of text in order to produce a system-internal text-level representation of the content of the text. This representation is composed of a set of entity and event frame structures, interrelated to reflect the organization and content of the text. This representation can then be mapped into any data structure required by a downstream application, such as the templates specified for the MUC-5/Tipster applications. DBG has been designed as a single core system for handling texts of different types in different domains for a variety of applications. Application types for which DBG has provided the input include information extraction and database generation tasks such as MUC-5, message fusion (the combination of information derived from various kinds of sources, including text; see [1]), and the translation of text into another language using spoken input and output ([2]).</Paragraph>
    <Paragraph position="1"> It is clear to us that our DBG system, while achieving MUC-5 results comparable to EME scores attained by Tipster contractors in the MUC-5 tests, is still far from achieving the level of performance that we believe is possible for it. LSI's official MUC-5 P&amp;R score for English Microelectronics (EME) texts was 42.74, which represents a considerable improvement over our MUC-4 P&amp;R score of 18.87 on TST3. As we continue to incorporate improved versions of the various components of our natural language processing system (Figure 1) and to exploit more fully the capabilities of existing components, we expect that our scores will continue to improve.</Paragraph>
    <Paragraph position="2"> Founded upon research performed over the last twenty years, the DBG system has been under actual development for the last seven years. The basic architecture of the core system has remained the same over that time. Because the system is modular, the individual modules can be redesigned and updated without affecting the rest of the system. Most of the current modules have been redesigned or extended within the last three years.</Paragraph>
  </Section>
  <Section position="2" start_page="122" end_page="125" type="metho">
    <SectionTitle>
INNOVATIVE ASPECTS OF THE SYSTEM
</SectionTitle>
    <Paragraph position="0"> Among the innovative aspects of the DBG system as used for MUC-5 are a flexible frame-based concept hierarchy that is accessible at every stage of processing; a principle-based syntactic parser which enables the identification of sentence-level event-entity relations very early in processing; an integrated ability to handle incomplete information and structures; and the frame-based text-level representation mentioned above, which allows for intersentential reference resolution and for the explicit representation of the event-entity relations and implicit content of the text. These relations were then easily mapped into EME templates.</Paragraph>
    <Paragraph position="1"> LSI's main area of development for MUC-5 was the extension of the knowledge contained in the concept hierarchy and the use of this knowledge at every stage of processing. Although the frame-based hierarchy was a component of the DBG system as early as MUCK-2, it was not exploited significantly until MUC-5.</Paragraph>
    <Paragraph position="2"> This, more than any other factor, was responsible for the increase in our performance over MUC-4. Each item in the lexicon is associated with a concept frame, as illustrated in Figure 2. The frames or concept nodes bear &amp;quot;isa&amp;quot; (set membership) relations to other concepts in the hierarchy. During lexical analysis, the lexical items in the text are linked to concepts in the hierarchy. These concept links then provide the framework for producing the set of instantiated concept frames and links (the frame-based text-level representation) which is the final output of internal DBG processing. The frame-based concept hierarchy also allows for semantic checking at any point in processing and provides a mechanism for the inheritance of features and other information to direct descendants in the hierarchy.</Paragraph>
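The "isa" links and feature inheritance described above can be sketched in a few lines of Python. This is only an illustration under assumed names: the Concept class, the sample concepts, and the "concrete" feature are hypothetical, not LSI's actual hierarchy.

```python
# Minimal sketch of a frame-based concept hierarchy with "isa" links.
# Concept names and features are illustrative, not taken from the DBG system.

class Concept:
    def __init__(self, name, isa=None, features=None):
        self.name = name
        self.isa = isa                      # parent concept (the "isa" link), or None
        self.features = features or {}

    def lookup(self, feature):
        """Inherit a feature value from the nearest ancestor that defines it."""
        node = self
        while node is not None:
            if feature in node.features:
                return node.features[feature]
            node = node.isa
        return None

entity    = Concept("entity")
equipment = Concept("equipment", isa=entity, features={"concrete": True})
stepper   = Concept("stepper", isa=equipment)

print(stepper.lookup("concrete"))  # inherited from "equipment": True
```

Because lexical items are linked to concepts like these during lexical analysis, any later processing stage can ask the same `lookup` question, which is what makes semantic checking available "at any point in processing".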
    <Paragraph position="3"> A related innovative DBG component is the Word Acquisition Module (WAM), which uses morphological analysis to provide grammatical category information for words for which no lexical entry exists. Based on the category assignment, a lexical entry and an &amp;quot;isa&amp;quot; link to a concept frame are automatically generated for each unknown item to allow complete processing of all sentences containing unknown words.</Paragraph>
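A WAM-style category guess can be sketched as a suffix table. The suffix rules and the generated entry layout below are invented for illustration; the paper does not specify WAM's actual rules.

```python
# Illustrative sketch of morphology-based category guessing for unknown
# words. The suffix rules are examples, not LSI's actual WAM rule set.
SUFFIX_RULES = [
    ("ization", "noun"), ("ness", "noun"), ("tion", "noun"),
    ("ably", "adverb"), ("ly", "adverb"),
    ("ize", "verb"), ("ify", "verb"),
    ("ive", "adjective"), ("ous", "adjective"),
]

def guess_category(word, default="noun"):
    # longer, more specific suffixes are listed first and so win
    for suffix, category in SUFFIX_RULES:
        if word.lower().endswith(suffix):
            return category
    return default  # no recognizable morphology: fall back to noun

def make_entry(word):
    """Auto-generate a lexical entry and an 'isa' link for an unknown word."""
    category = guess_category(word)
    return {"form": word, "category": category, "isa": "unknown_" + category}

print(make_entry("planarization"))
```

The point of the automatic entry is the one made in the paragraph: every sentence can be processed to completion even when it contains words absent from the lexicon.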
    <Paragraph position="4"> The DBG syntactic parser as implemented for MUC-5 has a number of innovative aspects. The parser is to a great extent language-independent; it produces structures which reflect the partial isomorphism holding between syntactic and semantic structures in order to increase accuracy of processing. While the goal is to produce complete parses, the parser is also robust enough to produce usable partial parses in the absence of a complete parse.</Paragraph>
    <Paragraph position="5"> The design of the LSI parser is based on the Government-Binding theory of syntax. It is essentially a head-driven parser, and includes both bottom-up and expectation-based aspects. Argument structures associated with lexical items are projected into the syntax. Syntactic structure is determined from both item-specific lexical requirements and general requirements on syntactic structures (e.g., all sentences in languages like English require a subject). The use of empty categories and syntactic chains, combined with knowledge of event types contained in the concept hierarchy, enables the parser to associate thematic roles with entities expressed in noun phrases in a variety of construction types correctly and in a relatively straightforward manner. Constructions which are usually assumed to include empty categories (i.e., phonetically null syntactic elements) include passives, embedded infinitival sentences, questions, and relative clauses. The rationale for this assumption is that in constructions of this type, words (usually verbs) which typically require either an external argument (an argument in the specifier position) or an internal argument (an argument in complement position) appear with no appropriate argument in one of these positions. Since syntactic structures are characterized as &amp;quot;projections&amp;quot; of lexical items, these positions are assumed to be latently present, and linked to a phonetically realized argument in some other position via coindexing. The phonetically realized argument and the empty category thus form a syntactic &amp;quot;chain&amp;quot;. By using these chains, we can associate the usual thematic roles assigned to certain positions with a given overt noun phrase, even if the overt phrase is not in the usual position. 
So adherence to certain grammatical principles in conjunction with a well-engineered event knowledge base has enabled us to get something &amp;quot;for free&amp;quot;, as it were, in the MUC-5 task.</Paragraph>
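The chain mechanism for passives can be sketched as follows. This is a deliberately toy rendering of the idea, not the DBG parser: the dictionaries, the fixed index, and the function name are all hypothetical.

```python
# Sketch of coindexing a passive surface subject with an empty category in
# object position, so that the verb's internal (PATIENT) role is recovered.
# Simplified for illustration; not the actual DBG implementation.
def analyze_passive(subject, verb):
    """For 'An x-ray lithography system was sold', the surface subject
    forms a chain with a phonetically null trace in object position, and
    the internal-argument role transmits along that chain."""
    trace = {"type": "trace", "index": 1}            # the empty category
    overt = {"np": subject, "index": 1}              # coindexed overt NP
    chain = [overt, trace]
    # The verb's internal argument (PATIENT) is assigned via the chain,
    # even though the NP is not in the canonical object position.
    return {"verb": verb, "PATIENT": subject, "chain": chain}

result = analyze_passive("an x-ray lithography system", "sell")
print(result["PATIENT"])
```

The shared index is what the prose calls coindexing: role assignment consults the chain, not the surface position of the noun phrase.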
    <Paragraph position="6"> Several of the innovative aspects of the DBG frame-based text-level knowledge representation, which we will here call the LSI templates (as distinct from the MUC-5 application-oriented EME templates), were not used extensively for MUC-5. These include the ability to combine contextual information - e.g., information derived from the header of a military message - with information extracted from the text; the capability of interpreting degree-of-belief information and relating it to meta-levels of event structure; and the incorporation of text grammar expectations as to the form and location of information within the text.</Paragraph>
    <Paragraph position="8"> [Figure fragment: concept frame for &amp;quot;lithography&amp;quot; (a process following layering, in which the pattern of a circuit is rendered onto the layered surface by exposing the surface), with its slot order: name, definiteness, quantity, type, description, process_product, end_product, process_material, process_actions, process_conditions, process_equipment, process_granularity, device, fp_name, head_fp_id.]</Paragraph>
    <Paragraph position="15"> Although these aspects of DBG are important in processing written and even transcribed voice messages in the Air Force and Army messages to which the system has been applied, the MUC-5 application does not depend heavily on outside information, is not concerned with evaluation of the information received, and is characterized by more variability in information structure across texts.</Paragraph>
    <Paragraph position="16"> The relational organization of the DBG event and entity templates, however, was extremely useful in MUC-5 processing. Because it resembles the object-oriented organizational structure of the EME templates, the generation of EME template relations was relatively straightforward. Also, the thematic roles of the entity templates in relation to the events are explicit in the LSI templates and so can be easily associated with the role-specific slots (e.g., manufacturer, distributor) in the EME templates. In addition, the ability to establish co-reference at the text level in the LSI templates prevents overgeneration of EME templates and facilitates correct template linkage.</Paragraph>
    <Paragraph position="17"> In the next section, the basic processing is described and illustrated by an example sentence.</Paragraph>
    <Paragraph position="18"> SYSTEM MODULES AND PROCESSING STAGES
Because the DBG system modules and the processing of text have been previously described in detail ([3], [4]), we will here present only a brief summary of processing and then show a short example sentence to illustrate the innovative aspects of our system discussed in the previous section.</Paragraph>
    <Paragraph position="19"> The basic components of the DBG system are shown in the system diagram in Figure 1. They are the preprocessing module, the lexical analysis module, the syntactic parse module, the semantic parse module, and the knowledge representation module. An additional module, the application interface, maps the extracted knowledge into appropriate data structures according to the requirements of a given downstream application. Processing is sequential: the output of each module is a data structure that serves as input to the succeeding module and is then available to all later modules. Each module contains a processing mechanism and a knowledge base that includes general as well as domain-sensitive knowledge. The respective knowledge bases, indicated in ovals, are the lexicon and morphological rules, the set of grammatical principles used to construct the syntactic parse trees, the concept hierarchy, the discourse rules, and the rules for mapping into external data structures. As described in the previous section, at the stage of lexical analysis the lexical items in the text are linked to nodes in the concept hierarchy, which enables information derived from the concept hierarchy to be used at any point in processing. Sample lexical items are shown in Figure 2.</Paragraph>
    <Paragraph position="20"> In addition to the basic components, the DBG system has an Unexpected Inputs (UX) Subsystem that handles new or erroneous data and evaluates and records system performance. This subsystem consists of modules that are integrated into the system modules; they are shown in the system diagram as small boxes inside the larger modules to which they apply. The two UX modules of the DBG system that were used for MUC-5 are the Lexical Unexpected Inputs (LUX) module and the Word Acquisition Module (WAM), both of which apply at the Lexical Analysis stage. The LUX module corrects errors by attempting partial matches between unmatched words in the text and items in the lexicon, using rules based on certain error hypotheses. As mentioned previously, new or unidentified words are passed on to WAM, wherein word class information is assigned based on morphological analysis.</Paragraph>
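LUX-style partial matching can be approximated with an edit-distance search against the lexicon. The paper does not state which error hypotheses LUX uses, so the sketch below simply allows one character-level error (deletion, insertion, or substitution); the function names and the sample lexicon are hypothetical.

```python
# Sketch of LUX-style error correction: match an unmatched token against
# lexicon entries, accepting a small number of hypothesized errors.
def edit_distance(a, b):
    """Standard dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost))
        prev = cur
    return prev[-1]

def lux_correct(token, lexicon, max_errors=1):
    """Return the closest lexicon entry within max_errors, else None."""
    best, best_d = None, max_errors + 1
    for word in lexicon:
        d = edit_distance(token, word)
        if best_d > d:
            best, best_d = word, d
    return best if max_errors >= best_d else None

lexicon = ["lithography", "stepper", "laser", "wafer"]
print(lux_correct("lithograpy", lexicon))  # one deletion away: lithography
```

A real error model would weight hypotheses (e.g. keyboard adjacency, OCR confusions) rather than treat all edits equally, but the shape of the search is the same.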
    <Paragraph position="21"> To illustrate the processing, the lexical analysis, syntactic parse, and semantic parse for an example sentence are shown in Figure 3, and the LSI internal templates for the same sentence are shown in Figure 4.</Paragraph>
    <Paragraph position="22"> The sentence used is the first sentence of the example text 2606871, i.e., &amp;quot;Hampshire Instruments has sold an x-ray lithography system to AT&amp;T Bell Laboratories.&amp;quot; The syntactic parse is created by projecting lexical items into elementary phrasal trees, which are then linked according to subcategorization and selectional requirements, as well as general principles of grammar.
lxi(aux, has, has, [perf], [p(3),n(s)], [], [xp('-agr','+past')], ['+agr','-past'], [], [has])
lxi(third_pres, has, have, [], [], [], [], [strict(np)], ['+agr','-past'], [], [have])
lxi(past, sold, sell, [], [], [], [], [strict(np, bi_np(to), np_prpt)], ['+agr','+past'], [], [sell])
lxi(pastpart, sold, sell, [], [], [], [], [strict(np, bi_np(to), np_prpt)], ['-agr','+past'], [], [sell])
lxi(det, an, an, [], [], [], [], [strict(gp,ap,np)], [], [], [an])
lxi(noun, 'x-ray', 'x-ray', [], [], [], [], [], [], [], [x_ray])
lxi(noun, 'lithography system', 'lithography system', [], [], [], [], [], [], [], [lithography_system])
lxi(prep, to, to, [], [strict(argp)], [], [], [to])
lxi(to, to, to, [], [], [], [], [], [], [], [to])
lxi(noun, 'AT&amp;T Bell Laboratories', 'AT&amp;T Bell Laboratories', [], [], [], [], [], [], [], [a_t_and_t_bell_labs])</Paragraph>
  </Section>
  <Section position="3" start_page="125" end_page="128" type="metho">
    <SectionTitle>
SYNTACTIC PARSE OF EXAMPLE SENTENCE
</SectionTitle>
    <Paragraph position="0"/>
    <Paragraph position="2"> Among the principles which are most important in the current version of the parser are the projection principle, which constrains which positions in the tree can be assigned thematic roles; the extended projection principle, which attempts to link every clause to a subject; trace theory and the theta-criterion, which ensure that every argument position has a one-to-one correspondence with a theta-role; and X-bar theory, which limits branching to binary structures following the X-bar schema. Structural characteristics of the tree are then matched with the thematic specifications of the lexical items heading the phrases in the tree.</Paragraph>
    <Paragraph position="3"> In the semantic parse, shown in Figure 3, the thematic roles are explicitly labeled and related by indices to the verb `sell'. The AGENT is `Hampshire Instruments', the PATIENT is `an x-ray lithography system', and the RECIPIENT is `AT&amp;T Bell Laboratories'. Other information, including the tense and voice of the verb, is also given. Before a noun phrase can be assigned a given thematic role, it has to qualify both syntactically and by meeting semantic categorial requirements. This is established by checking the link into the concept hierarchy of the head noun of the noun phrase and matching it with the selectional information for the verb in the lexicon. All of this is done at the sentence level. Information from the semantic parses of the text is then used to generate and instantiate the LSI templates.</Paragraph>
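The semantic half of this check — matching the head noun's concept link against the verb's selectional requirement — can be sketched as an "isa"-chain walk. The concept table and selectional restrictions below are invented examples, not the actual DBG knowledge base.

```python
# Sketch of selectional checking for role assignment: an NP may fill a
# role only if its head noun's concept is (transitively) "isa"-related to
# the class the verb requires for that role. All entries are illustrative.
ISA = {"company": "organization", "organization": "entity",
       "stepper": "equipment", "equipment": "entity"}

def isa_chain(concept):
    """Yield the concept and all its ancestors along 'isa' links."""
    while concept is not None:
        yield concept
        concept = ISA.get(concept)

# selectional restrictions per (verb, role), as might come from the lexicon
SELECTION = {("sell", "AGENT"): "organization",
             ("sell", "PATIENT"): "entity"}

def role_ok(verb, role, head_concept):
    required = SELECTION.get((verb, role))
    return required in set(isa_chain(head_concept))

print(role_ok("sell", "AGENT", "company"))   # a company can sell
print(role_ok("sell", "AGENT", "stepper"))   # equipment cannot
```

This mirrors the prose: the syntactic position proposes a role, and the hierarchy link of the head noun either confirms or rejects it.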
    <Paragraph position="4"> The frame-based text-level data structures (LSI templates) for a text are generated on the basis of the semantic parse of that text and the frame information associated with the lexical items in the semantic parse.</Paragraph>
    <Paragraph position="5"> There are three major steps in LSI template generation: 1) the first pass, in which templates for specified events and the related entity templates are generated; 2) the second pass, in which entity templates are generated for MUC-5 relevant words that are not treated in the first pass; and 3) template linking, in which co-reference relations are determined.</Paragraph>
    <Paragraph position="6"> An event template includes a set of empty slots which represent the various thematic roles associated with the event. The processing goal is to fill the thematic role slots of each of the event templates with a reference to an entity template. The slots for event and entity templates are pre-defined in our concept hierarchy.</Paragraph>
    <Paragraph position="7"> For MUC-5, the following event-related thematic roles were handled: agent, patient, experiencer, recipient, beneficiary, source, and location. For example, while attempting to fill the slots of a manufacture event, as in &amp;quot;Sony manufactured a new DRAM&amp;quot;, &amp;quot;Sony&amp;quot; will trigger an agent template, and &amp;quot;a new DRAM&amp;quot; will trigger a patient template. The entity template has a pointer to the semantic parse node which triggered it. The slots of an entity template are determined by the entity type. For MUC-5, all the entity templates have at least the following slots: name, type, quantity, definiteness. Different types of entities have different additional slots representing the relevant attributes for the given type of entity. For example, company entities will also have slots for location and nationality; equipment entities have slots for model, manufacturer, and wafer size; and so on. A few attributes such as location and granularity are themselves represented by templates.</Paragraph>
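The first-pass template shapes described above can be sketched as plain dictionaries. The slot inventory for events is the one listed in the text; the entity slots are the four minimum slots named above. The function names and the "Sony" example wiring are illustrative, not DBG code.

```python
# Sketch of first-pass LSI template generation: an event verb opens an
# event template with empty theta-role slots, and role-labeled NPs trigger
# entity templates that fill those slots. Simplified for illustration.
THETA_SLOTS = ["agent", "patient", "experiencer", "recipient",
               "beneficiary", "source", "location"]

def make_event_template(verb):
    template = {"event": verb}
    for slot in THETA_SLOTS:
        template[slot] = None              # to be filled with entity refs
    return template

def make_entity_template(name, etype):
    # every entity template has at least these four slots
    return {"name": name, "type": etype, "quantity": None,
            "definiteness": None}

# "Sony manufactured a new DRAM"
event = make_event_template("manufacture")
event["agent"] = make_entity_template("Sony", "company")
event["patient"] = make_entity_template("a new DRAM", "device")
print(event["agent"]["name"])
```

Type-specific slots (location and nationality for companies, model and wafer size for equipment) would be added on top of the four shared slots, keyed off the entity type.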
    <Paragraph position="8"> In the rule set for filling template slots, a rule is associated with a given slot. The rules make use of the indexed relational structure of the semantic parse, as well as the frame information associated with the relevant lexical item. For example, to fill the agent slot of an event template, a semantic parse node which is co-indexed with the event verb and which is labeled &amp;quot;AGENT&amp;quot; is required. Similarly for patient, recipient, and others.</Paragraph>
    <Paragraph position="9"> During the second pass, all the important MUC-5 words (specific processes, company names, equipment, devices) which were not handled in the first pass are processed, and each triggers an entity template. The filling of templates is carried out in the same way as in the first pass.</Paragraph>
    <Paragraph position="10"> After all the templates are filled, co-reference links among the entity templates are established. The rules used for MUC-5 were extremely simple and applied only to definite NPs, using the precedence of the entity templates and compatibility of semantic features to determine co-reference. In the next version of this component, a focus list will track each of the entities in the discourse to facilitate reference resolution.</Paragraph>
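The MUC-5-era rule — definite NPs only, most recent antecedent first, semantic compatibility — is simple enough to state directly in code. The dictionary fields and the single-class compatibility test below are illustrative stand-ins for DBG's semantic feature checking.

```python
# Sketch of the simple MUC-5 co-reference rule: only definite NPs are
# considered, and precedence (recency) plus semantic compatibility pick
# the antecedent. Entity fields are illustrative.
def corefer(entity, previous_entities):
    if entity["definiteness"] != "definite":
        return None                               # rule applies to definite NPs only
    for candidate in reversed(previous_entities): # precedence: most recent first
        if candidate["class"] == entity["class"]: # semantic compatibility
            return candidate
    return None

seen = [{"name": "Hampshire Instruments", "class": "company"},
        {"name": "an x-ray lithography system", "class": "equipment"}]
mention = {"name": "the company", "class": "company",
           "definiteness": "definite"}
print(corefer(mention, seen)["name"])
```

The walkthrough section shows exactly where this rule breaks: two distinct entities in different sentences with the same semantic features get linked, which is what the later focus-list version addresses.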
    <Paragraph position="11"> In the templates for the example sentence, shown in Figure 4, all the relevant entities are handled during the first pass. &amp;quot;Sell&amp;quot; is identified as a critical EME domain verb and an event template is generated for it, with a set of theta role slots (agent, patient, recipient) to be filled. Template rules identified &amp;quot;Hampshire Instruments&amp;quot; as the agent, since it has the AGENT theta role label in the semantic parse (Figure 3).</Paragraph>
    <Paragraph position="13"> An entity template is generated for it with slots to be filled. The template rules identified &amp;quot;x-ray lithography system&amp;quot; as the patient, since it has the PATIENT theta role label in the semantic parse. The PATIENT theta role label was derived from the fact that this noun phrase was identified as the direct object of a verb whose internal argument is PATIENT. An entity template is also generated for it, with slots to be filled.</Paragraph>
    <Paragraph position="14"> &amp;quot;AT&amp;T Bell Laboratories&amp;quot; is determined as the RECIPIENT since it is the object of the preposition &amp;quot;to &amp;quot; (indirect object marker), and its semantic features qualify it as a recipient for the given verb . An entity template is generated for it, also with slots to be filled . The fills for the slots or attributes are derived by rules which utilize syntactic and semantic information about the noun phrase constituents .</Paragraph>
    <Paragraph position="15"> Co-reference resolution occurs next. &amp;quot;Lithography system&amp;quot; cannot be co-referent with &amp;quot;Hampshire Instruments&amp;quot; since they belong to different semantic classes, as we know from the frames; &amp;quot;AT&amp;T Bell Laboratories&amp;quot; cannot co-refer with &amp;quot;Hampshire Instruments&amp;quot; since they refer to different specific entities. (If the third entity were &amp;quot;the company&amp;quot;, however, instead of &amp;quot;AT&amp;T Bell Laboratories&amp;quot;, it would be considered as a possible co-referent of &amp;quot;Hampshire Instruments&amp;quot;.)</Paragraph>
  </Section>
  <Section position="4" start_page="128" end_page="134" type="metho">
    <SectionTitle>
WALKTHROUGH TEXT
</SectionTitle>
    <Paragraph position="0"> The sample walkthrough text shows both strengths and weaknesses of our approach and of the DBG system as it has been implemented so far. As the last row of scores in Figure 5 shows, DBG scored quite well on this text, with a 79.25% P&amp;R score. On closer examination, however, the performance of some of the individual modules is somewhat disappointing. For the walkthrough sample, the semantic parse of the first sentence is shown in Figure 6 and the DBG templates are shown in Figure 7. The LSI EME output templates are shown in Figure 8.</Paragraph>
    <Paragraph position="1"> In the LSI template generation for the walkthrough text, two critical EME domain verbs are identified during the first pass: &amp;quot;use&amp;quot; and &amp;quot;sell&amp;quot;. Event templates are generated for them with a set of theta role slots (agent, patient, etc.) to be filled. For &amp;quot;use&amp;quot;, for example, &amp;quot;the stepper&amp;quot; was identified as the agent of the predicate, since it has a co-indexed AGENT theta role label in the semantic parse. An entity template is generated for it with slots to be filled. Template rules for determining the patient identified &amp;quot;excimer laser&amp;quot; as the patient, since it has the PATIENT theta role label in the semantic parse. An entity template is also generated for it. To determine whether an attribute template should be generated for granularity, the sentence in which &amp;quot;excimer laser&amp;quot; occurred is searched for words indicating units of granularity such as &amp;quot;micron&amp;quot; and &amp;quot;nm&amp;quot;, and if the search is successful, the previous co-indexed word indicating size is selected to fill the gran-size slot.</Paragraph>
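The granularity search — find a unit word and take the preceding size expression — can be sketched with a regular expression. The pattern and function name are illustrative; DBG presumably works over the co-indexed parse rather than raw strings.

```python
# Sketch of the granularity search: scan a sentence for a size unit
# ("micron" or "nm") and capture the numeral immediately before it.
# A simplification of the co-indexed search described in the text.
import re

UNIT_PATTERN = re.compile(r"([\d.]+)\s*-?\s*(micron|nm)\b")

def find_granularity(sentence):
    match = UNIT_PATTERN.search(sentence)
    if match:
        return match.group(1) + " " + match.group(2)
    return None

print(find_granularity("The stepper attains 0.45 micron resolution."))
```

Working over flat strings is exactly what makes the walkthrough's "by accident" outcome possible: when two sizes occur in one sentence, whichever has not yet been consumed gets picked up.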
    <Paragraph position="2"> MUC-5 relevant entities not identified during the first pass are handled during the second pass. These are: &amp;quot;Nikon Corp&amp;quot;, &amp;quot;NSR-1755EX8A&amp;quot;, &amp;quot;a new stepper&amp;quot;, &amp;quot;64-Mbit DRAMs&amp;quot;, &amp;quot;a light source&amp;quot;, &amp;quot;the company&amp;quot;, &amp;quot;latest stepper&amp;quot;, &amp;quot;Nikon&amp;quot;, &amp;quot;the excimer laser&amp;quot;, &amp;quot;stepper&amp;quot;, &amp;quot;system&amp;quot;. An entity template is produced for each of these entities, with a set of attribute slots to be filled for the entity. The size slot of &amp;quot;DRAMs&amp;quot; is derived from the co-indexed size unit word and the previous numeral (&amp;quot;64-Mbit&amp;quot;), and the granularity slot of &amp;quot;latest stepper&amp;quot; is filled with &amp;quot;0.5 micron&amp;quot; by the above rules. In a more recently updated version of DBG, the module slot for equipment can be filled. So for &amp;quot;excimer laser stepper&amp;quot;, the module slot for &amp;quot;stepper&amp;quot; has a pointer to the excimer laser template, based on the fact that a) &amp;quot;excimer laser&amp;quot; modifies &amp;quot;stepper&amp;quot;, and b) the two have a part-whole relationship in the concept hierarchy. During the template linking phase, all of the templates referring to Nikon Corp (&amp;quot;Nikon Corp&amp;quot;, &amp;quot;the company&amp;quot;, &amp;quot;Nikon&amp;quot;, &amp;quot;the company&amp;quot;) are linked correctly. This is done by searching through the entities mentioned in the previous discourse and checking for semantic compatibility. The &amp;quot;NSR-1755EX8A&amp;quot; template did not get properly linked because appositives were not handled in this version of the system; however, &amp;quot;a new stepper&amp;quot; and &amp;quot;the stepper&amp;quot; are linked together correctly. 
Moreover, &amp;quot;the latest stepper&amp;quot; did not get linked to the previous stepper templates, which is correct since they do not co-refer. On the other hand, &amp;quot;(the excimer laser) stepper&amp;quot; in the third sentence of the text was incorrectly linked to &amp;quot;the latest stepper&amp;quot; in the second sentence, because the co-reference rules used for MUC-5 were too simple to distinguish between two entities in different sentences having the same semantic features.</Paragraph>
    <Paragraph position="4"> The two occurrences of &amp;quot;excimer laser&amp;quot; were linked together correctly; however, the entity &amp;quot;light source&amp;quot; did not get properly linked in, since our analysis was not complete.</Paragraph>
    <Paragraph position="5"> Reference resolution is now performed using a discourse focus list. To determine whether a definite noun phrase refers to an entity which has already been mentioned in the discourse, it is compared with the most recent entity in the focus list, and semantic feature checking is performed as before. Also, appositives such as &amp;quot;NSR-1755EX8A, a new stepper&amp;quot; and cases like &amp;quot;X as Y&amp;quot; are now handled during the first pass. Thus &amp;quot;light source&amp;quot; can now be linked to &amp;quot;excimer laser&amp;quot; because it occurs in an &amp;quot;as&amp;quot; prepositional phrase and the two entities are of the same type.</Paragraph>
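The focus-list mechanism can be sketched as a small class: entities are appended as the discourse proceeds, and resolution walks back from the most recent entry. The class shape and entity fields are hypothetical; semantic checking is reduced to a single class-equality test.

```python
# Sketch of focus-list reference resolution: entities are pushed onto the
# list in discourse order, and a definite NP resolves to the most recent
# semantically compatible entry. Illustrative, not the DBG implementation.
class FocusList:
    def __init__(self):
        self.entries = []

    def add(self, entity):
        self.entries.append(entity)

    def resolve(self, mention):
        for entity in reversed(self.entries):   # most recent entity first
            if entity["class"] == mention["class"]:
                return entity                    # semantic features compatible
        return None

focus = FocusList()
focus.add({"name": "Nikon Corp", "class": "company"})
focus.add({"name": "NSR-1755EX8A", "class": "equipment"})
print(focus.resolve({"name": "the company", "class": "company"})["name"])
```

Tracking each entity explicitly is what lets this version distinguish mentions that the earlier precedence-only rule conflated.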
    <Paragraph position="6"> Following reference resolution, the mapping from the LSI internal templates to the MUC-5 output templates is relatively straightforward.</Paragraph>
    <Paragraph position="7"> Answers to the specific questions posed about the MUC-5 templates generated from the walkthrough text are given below.</Paragraph>
    <Paragraph position="8"> (1) What information triggers the instantiation of each of the two LITHOGRAPHY objects? &amp;quot;NSR-1755EX8A&amp;quot; triggered the first lithography object since it is defined as a lithography system in our concept hierarchy and there is a rule stating that equipment can trigger related process objects. In the second sentence, &amp;quot;the stepper&amp;quot; triggered the second lithography object by the rule given above. This is wrong because &amp;quot;the stepper&amp;quot; is co-referent with &amp;quot;NSR-1755EX8A&amp;quot;. The two templates were not linked properly because &amp;quot;the stepper&amp;quot; gets linked to &amp;quot;a new stepper&amp;quot;, and DBG was unaware that &amp;quot;a new stepper&amp;quot; is &amp;quot;NSR-1755EX8A&amp;quot; since appositives were not handled in that version of the system. This is corrected in a more recent version of the system, where &amp;quot;the company's latest stepper&amp;quot; triggers the second lithography object.</Paragraph>
    <Paragraph position="9"> (2) What information indicates the role of the Nikon Corp. for each Microelectronics Capability? The concept hierarchy contains the information that Nikon is the manufacturer of &amp;quot;NSR-1755EX8A&amp;quot;, so the manufacturer slot of &amp;quot;NSR-1755EX8A&amp;quot; is filled with a pointer to the &amp;quot;Nikon&amp;quot; object. The MANUFACTURER role for the second Microelectronics Capability was not filled.</Paragraph>
    <Paragraph position="10"> (3) Explain how your system captured the GRANULARITY information for &amp;quot;The company's latest stepper.&amp;quot; We got &amp;quot;0.5 micron&amp;quot; for &amp;quot;The company's latest stepper&amp;quot; (which is correct) by pattern matching and by accident. When we were filling out the granularity slot for &amp;quot;The company's latest stepper&amp;quot;, &amp;quot;0.5 micron&amp;quot; was the only granularity attribute available in the sentence, since &amp;quot;0.45 micron&amp;quot; was already applied to &amp;quot;the stepper&amp;quot; in the same sentence.</Paragraph>
    <Paragraph position="11"> (4) How does your system determine EQUIPMENT_TYPE for &amp;quot;the new stepper&amp;quot; and for &amp;quot;the company's latest stepper&amp;quot;? This knowledge is specified in our concept hierarchy (see above).</Paragraph>
  </Section>
</Paper>