<?xml version="1.0" standalone="yes"?>
<Paper uid="M95-1014">
  <Title>The NYU System for MUC-6 or Where's the Syntax?</Title>
  <Section position="2" start_page="167" end_page="170" type="metho">
    <SectionTitle>
THE SYSTEM
</SectionTitle>
    <Paragraph position="0"> We exaggerate, of course, the radicalness of our change since MUC-5 [4] (and since the MUC-6 dry run, which was conducted with our traditional syntactic system). Several components were direct descendants of earlier modules: the dictionary was Comlex Syntax [3]; the lexical analyzer (for names, etc.) had been gradually enhanced at least since MUC-3; the concept hierarchy code and reference resolution were essentially unchanged from earlier versions. In addition, our grammatical approach was not entirely abandoned; our noun group patterns were a direct adaptation of the corresponding portion of our grammar, just as Hobbs' patterns were an adaptation from his grammar.1 And, as we shall see, more of the grammar crept in as our effort progressed. In essence, one could say that our MUC-6 system was built (in late August and early September, 1995) by replacing the parser and semantic interpreter of our earlier system with additional sets of finite-state patterns.</Paragraph>
    <Paragraph position="1">  The same system was used for all four MUC tasks (NE, CO, TE, and ST); the only difference lies in the information which is generated when the processing of a document is complete.</Paragraph>
    <Paragraph position="2"> The text analysis operates in seven main stages: tokenization and dictionary look-up; four stages of pattern matching (basically, for names, noun groups, verb groups, and larger patterns); reference resolution; and output (template or SGML) generation.</Paragraph>
    <Paragraph position="3"> Tokenization and Dictionary Look-up
Processing begins with the reading of the document and the identification of the relevant SGML-marked passages. The body of the document is divided into sentences and then into tokens. Each token is looked up in our dictionaries. For general vocabulary, we use Comlex Syntax, a broad-coverage dictionary of English developed at NYU, which provides detailed syntactic information but does not include any proper names. This is supplemented by several specialized dictionaries, including:
* a small gazetteer, which contains the names of all countries and most "major" cities
* a company dictionary, derived from the Fortune 500
* a government agency dictionary
* a dictionary of common first names
* a small dictionary of scenario-specific terms
We also use the BBN POST tagger at this stage to determine the most likely part of speech of each word.2
1 And, since both these grammars can trace their origins in part to the NYU Linguistic String Grammar, the approaches here are very similar.</Paragraph>
    <Paragraph position="4"> 2 We wish to thank BBN Systems and Technologies for providing us with this tagger.</Paragraph>
    <Paragraph position="5">  The input stage is followed by several stages of pattern matching. Each of these stages uses one or more sets of finite-state patterns to perform some reductions on the input string. The patterns are translated to LISP procedures which are then compiled, so the pattern matching can proceed very efficiently. Each set of patterns involves one left-to-right scan of the sentence. Starting at each word, we identify the longest matching pattern (if any), use it to reduce the input sequence, and then continue with the next unmatched word.</Paragraph>
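The longest-match, left-to-right scan described above can be sketched as follows. This is an illustrative Python sketch under our own simplifying assumptions, not the system's compiled LISP procedures; `match_person` is a hypothetical toy pattern.

```python
# Sketch of one pattern-matching pass: at each position, try every
# pattern, take the longest match, reduce it to one unit, then resume
# at the next unmatched word.

def scan(tokens, patterns):
    """patterns: list of (match, reduce) pairs; match(tokens, i) returns
    the match length (0 if none), reduce(span) returns one reduced unit."""
    out, i = [], 0
    while i < len(tokens):
        length, reducer = 0, None
        for match, reduce in patterns:
            n = match(tokens, i)
            if n > length:                 # longest matching pattern wins
                length, reducer = n, reduce
        if length:
            out.append(reducer(tokens[i:i + length]))
            i += length
        else:                              # no pattern starts here
            out.append(tokens[i])
            i += 1
    return out

# Hypothetical toy pattern: "Mr." + capitalized word -> one PERSON unit.
def match_person(tokens, i):
    if tokens[i] == "Mr." and i + 1 < len(tokens) and tokens[i + 1][0].isupper():
        return 2
    return 0

tokens = ["Mr.", "James", "is", "stepping", "down"]
units = scan(tokens, [(match_person, lambda span: ("PERSON", " ".join(span)))])
```

Subsequent passes would run `scan` again over the reduced units, so each stage sees the output of the previous one.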
    <Paragraph position="6"> The first set of patterns corresponds essentially to Named Entity recognition: names of people; names of companies and other organizations; locations; dates; and numeric expressions, including money and percentages. A second, small set identifies possessive forms involving either common nouns or names as just identified. For the MUC-6 scenario, we added a third set for names of executive positions, such as "executive director for recall and precision".</Paragraph>
    <Paragraph position="7"> The name recognition stage records the initial mention and type of each name; subsequent mentions of a portion of that name will be recorded as aliases of the name. At this stage, a name will be recognized as being of a specific type (person, company, government organization, or other organization) if it is defined in the dictionary, if it has a distinctive form, or if it is an alias of a name of known type. (Recognition based on context is performed by subsequent stages.)
Noun Group Recognition
The second stage of pattern matching recognizes noun groups: nouns with their left modifiers. Once part-of-speech ambiguities have been resolved (using a tagger, as we noted above), most decisions regarding noun group boundaries and structure can be made deterministically using local syntactic information. In some cases, however, the attachment cannot be decided locally; in such cases, we leave the modifier unattached. For example, a present or past participle may mark the beginning of a noun group: He enjoys driving ranges more than any golfer I know.</Paragraph>
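The alias step described above (a later mention that is a portion of a known name) can be sketched as follows. This is a minimal illustration under our own assumptions, not the system's actual name tables.

```python
# Sketch: a mention whose tokens form a contiguous sub-span of an
# already-recorded name is treated as an alias of that name.

def is_alias(mention, full_name):
    """True if `mention` is a contiguous sub-span of `full_name`'s tokens."""
    m, f = mention.split(), full_name.split()
    return any(f[i:i + len(m)] == m for i in range(len(f) - len(m) + 1))

names = {}                                # initial mention -> type

def record(mention, name_type=None):
    """Record a mention; return the name it resolves to and its type."""
    for full, t in names.items():
        if is_alias(mention, full):
            return full, t                # alias of a known name
    names[mention] = name_type            # new initial mention
    return mention, name_type

record("F. W. Woolworth", "company")
```

A subsequent `record("Woolworth")` would resolve to "F. W. Woolworth" and inherit its type, mirroring how alias mentions pick up the type of the initial mention.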
    <Paragraph position="8"> or may be part of a verb phrase: He enjoys driving cars.</Paragraph>
    <Paragraph position="9"> The noun group patterns are essentially a direct transcription of that portion of our English grammar into our pattern language.</Paragraph>
    <Paragraph position="10"> Verb Group Recognition
The third stage of pattern matching recognizes verb groups: simple tensed verbs ("sleeps") and verbs with auxiliaries ("will sleep", "has slept", "was sleeping", etc.). Both active and passive verb forms are recognized.</Paragraph>
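A verb-group recognizer along these lines might be sketched as below. The word lists and the participle test are our own crude assumptions for exposition, not the system's lexicon.

```python
# Sketch: optional modal, then forms of have/be, then a head verb;
# a form of "be" directly governing a past participle marks passive voice.

MODALS = {"will", "would", "can", "could", "may", "might", "shall", "should"}
HAVE = {"has", "have", "had"}
BE = {"is", "are", "was", "were", "be", "been", "being"}

def verb_group(words):
    """Return (head verb, voice) for a simple verb group."""
    i = 0
    if i < len(words) and words[i] in MODALS:
        i += 1
    passive = False
    while i < len(words) - 1 and (words[i] in HAVE or words[i] in BE):
        if words[i] in BE:
            passive = True                # tentatively passive: be + participle
        i += 1
    head = words[i]
    # crude check: passive only if the head looks like a past participle
    passive = passive and (head.endswith("ed") or head.endswith("en"))
    return head, "passive" if passive else "active"
```

So "will be succeeded" yields a passive group headed by "succeeded", while "was sleeping" stays active because "sleeping" is not a past participle.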
    <Section position="1" start_page="168" end_page="170" type="sub_section">
      <SectionTitle>
Semantic Pattern Recognition
</SectionTitle>
      <Paragraph position="0"> The fourth and final stage of pattern recognition involves the scenario-specific patterns. These include patterns which recognize larger noun phrase structures than simple noun groups, and patterns which recognize clausal structures.</Paragraph>
      <Paragraph position="1"> The noun phrase patterns include noun phrase arguments, such as "president of General Motors"; apposition, such as "Fred Smith, president of General Motors"; age modifiers, such as "Fred Smith, 107 years old"; and relative clauses. Noun phrase conjunction is also handled at this stage.</Paragraph>
      <Paragraph position="2"> The clausal patterns play the main role in this scenario, recognizing the basic events of executive succession: having jobs, starting jobs, leaving jobs, succeeding other people in jobs. Each recognized pattern is translated into an event predication in the logical form, with one of the following forms:
* start-job(person, position)
* leave-job(person, position)
* succeeds(person1, person2)
For each subject-verb-object relationship, we create a separate pattern for each of the plausible syntactic forms, including the active clause, the passive clause, the relative clause (active or passive), the reduced relative clause, and the conjoined verb phrase.</Paragraph>
      <Paragraph position="3"> These patterns also serve to resolve some type ambiguities. If we have the sentence P. T. Barnum took the helm of F. W. Woolworth.</Paragraph>
      <Paragraph position="4"> the system will classify "P. T. Barnum" as a person and "F. W. Woolworth" as a company.
Reference Resolution
The various stages of pattern matching produce a logical form for the sentence, consisting of a set of entities (for this scenario, people, organizations, and positions) and a set of events which refer to these entities. These must now be integrated with the entities and events from the prior discourse (prior sentences in the article). Reference resolution examines each entity and event in the logical form and decides whether it is an anaphoric reference to a prior entity or event, or whether it is new and must be added to the discourse representation.3 If the noun phrase has an indefinite determiner or quantifier (e.g., "a", "some", "any", "most"), it is assumed to be new information. Otherwise a search is made through the prior discourse for possible antecedents. An antecedent will be accepted if the class of the anaphor (in our classification hierarchy) is equal to or more general than that of the antecedent, if the anaphor and antecedent match in number, and if the modifiers in the anaphor have corresponding arguments in the antecedent. Special tests are provided for names, since people and companies may be referred to by a subset of their full names; a match on names takes precedence over other criteria.</Paragraph>
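The acceptance test just described (class subsumption, number agreement, modifier correspondence) can be sketched as below. The data structures and the toy hierarchy are our own assumptions for illustration.

```python
# Sketch: an antecedent is accepted if the anaphor's class is equal to or
# more general than the antecedent's, numbers agree, and every modifier on
# the anaphor has a counterpart on the antecedent.

# Toy concept hierarchy: child -> parent.
HIERARCHY = {"company": "organization", "organization": "entity",
             "person": "entity"}

def subsumes(general, specific):
    """True if `general` is `specific` or an ancestor of it."""
    while specific is not None:
        if specific == general:
            return True
        specific = HIERARCHY.get(specific)
    return False

def accepts(anaphor, antecedent):
    return (subsumes(anaphor["class"], antecedent["class"])
            and anaphor["number"] == antecedent["number"]
            and anaphor["mods"] <= antecedent["mods"])   # modifier subset

ibm = {"class": "company", "number": "sing", "mods": {"computer"}}
the_company = {"class": "organization", "number": "sing", "mods": set()}
```

Here "the company" (class organization, no modifiers) can take "IBM" (class company) as antecedent, but not the other way around, since "company" does not subsume "organization".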
      <Paragraph position="5"> Reference resolution first seeks an antecedent in the current sentence, then in the preceding sentence, then in the one before that, etc. The current sentence is scanned from right to left (i.e., the most recent antecedent is preferred). Prior sentences are scanned from left to right; this implements, in a crude way, a preference for the subjects of prior sentences.</Paragraph>
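The search order can be sketched directly; this is an illustration under assumed data structures, with the compatibility test passed in as a parameter.

```python
# Sketch: scan the current sentence right to left, then each prior
# sentence (most recent first) left to right, so subjects of prior
# sentences are preferred.

def find_antecedent(anaphor, sentences, accepts):
    """sentences: list of entity lists, oldest sentence first; the
    anaphor occurs in the last (current) sentence."""
    current, prior = sentences[-1], sentences[:-1]
    for cand in reversed(current):        # right-to-left in current sentence
        if accepts(anaphor, cand):
            return cand
    for sent in reversed(prior):          # most recent prior sentence first
        for cand in sent:                 # left-to-right: prefer subjects
            if accepts(anaphor, cand):
                return cand
    return None

sents = [["Mr. Dooner", "Mr. James"], ["he"]]
found = find_antecedent("he", sents, lambda a, c: c.startswith("Mr."))
```

With this toy compatibility test, "he" resolves to the sentence-initial "Mr. Dooner" of the prior sentence, illustrating the subject preference.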
      <Paragraph position="6"> Response Generation
For all the tasks, we use Tipster-style annotations as an intermediate representation for the information to be reported. A Tipster annotation includes a type, a set of start/end byte offsets, and a set of attributes [2]. The name recognition stage generates ENAMEX, PNAMEX, TIMEX, and NUMEX annotations as a by-product of the recognition process, so Named Entity response generation only requires that the annotations be converted to SGML. For the Coreference task, the coreference links created by reference resolution are converted to annotations and thence to SGML. For the Template Element task, the set of discourse entities is scanned for entities of the appropriate type (people and organizations), plurals and some indefinite references are eliminated, and the remainder are converted to templates.</Paragraph>
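The annotation-to-SGML conversion can be sketched as below. This is a minimal illustration assuming a simplified annotation record with non-overlapping spans, not the system's actual Tipster data structures.

```python
# Sketch: insert inline SGML tags into the text from stand-off
# annotations of the form (type, start offset, end offset, attributes).

def to_sgml(text, annotations):
    out, pos = [], 0
    for atype, start, end, attrs in sorted(annotations, key=lambda a: a[1]):
        out.append(text[pos:start])
        attr_str = "".join(' %s="%s"' % kv for kv in sorted(attrs.items()))
        out.append("<%s%s>%s</%s>" % (atype, attr_str, text[start:end], atype))
        pos = end
    out.append(text[pos:])
    return "".join(out)

sgml = to_sgml("Mr. Dooner joined IBM",
               [("ENAMEX", 0, 10, {"TYPE": "PERSON"}),
                ("ENAMEX", 18, 21, {"TYPE": "ORGANIZATION"})])
```

The stand-off representation keeps byte offsets into the original text, so the same annotations can serve both SGML output and template filling.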
      <Paragraph position="7"> The only substantial processing for response generation occurs in the Scenario Template task. Here a certain amount of inferencing is needed to extract the actual events from those explicitly stated in the article. For example, if the article says "Fred, the president of Cuban Cigar Corp., was appointed vice president of Microsoft." we want to infer that Fred left the Cuban Cigar Corp. This is done using a set of inferences. add-job is used in situations where there is an explicit indication that the position being taken on is an additional position: "Fred was appointed to the additional post of executive vice president."
3 In some cases, such as apposition, the anaphoric relation is determined by the syntax. Such cases are detected and marked by the pattern-matching stages, and checked by reference resolution before other tests are made.</Paragraph>
      <Paragraph position="8">  "Fred was the president of Legal Beagle Inc. Fred was succeeded by Harry." we need to infer that Harry is becoming the president of Legal Beagle. If we have a predicate of the form succeeds(person1, person2), the system sees what other information it has about person1 or person2. If it has information about the job person2 has or is leaving, but no information about person1, it adds information about the job(s) person1 is starting. Similarly, if it has information about person1, it adds information about the job(s) person2 is leaving.</Paragraph>
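The succession inference just described can be sketched as below, using assumed predicate tuples rather than the system's logical form.

```python
# Sketch: for succeeds(person1, person2), copy job information across
# when only one of the two people has known jobs.

def apply_succession_inference(events):
    """events: list of (predicate, arg1, arg2) tuples."""
    jobs = {}                                    # person -> known positions
    for pred, person, position in events:
        if pred in ("leave-job", "start-job"):
            jobs.setdefault(person, []).append(position)
    inferred = []
    for pred, p1, p2 in events:
        if pred != "succeeds":
            continue
        if p2 in jobs and p1 not in jobs:        # p1 starts the jobs p2 leaves
            inferred += [("start-job", p1, pos) for pos in jobs[p2]]
        elif p1 in jobs and p2 not in jobs:      # p2 leaves the jobs p1 starts
            inferred += [("leave-job", p2, pos) for pos in jobs[p1]]
    return events + inferred

events = [("leave-job", "James", "chief executive officer"),
          ("leave-job", "James", "chairman"),
          ("succeeds", "Dooner", "James")]
result = apply_succession_inference(events)
```

Run on the walkthrough predicates, this adds start-job events asserting that Mr. Dooner takes over both positions Mr. James is vacating.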
    </Section>
  </Section>
  <Section position="3" start_page="170" end_page="173" type="metho">
    <SectionTitle>
EXAMPLE
</SectionTitle>
    <Paragraph position="0"> To see how these stages of processing work in concert to produce a template, consider the crucial sentences from the walkthrough article, which produce two of the three succession events: Mr. James, 57 years old, is stepping down as chief executive officer on July 1 and will retire as chairman at the end of the year. He will be succeeded by Mr. Dooner, 45.</Paragraph>
    <Paragraph position="1"> The individual tokens in these sentences are gradually aggregated into larger units by the stages of processing, as follows:
dictionary look-up Dictionary look-up, combined with part-of-speech tagging, determines the syntactic features of each word. In addition, the dictionary includes some multi-word items which appear in the walkthrough sentences; these are reduced to single lexical units.
semantic patterns The scenario patterns match three clause-level structures:
Mr. James, 57 years old, is stepping down as chief executive officer on July 1
and will retire as chairman
He will be succeeded by Mr. Dooner, 45
The first is an example of an active clause pattern; the second an example of a conjoined clause pattern (of the form "and" + verb phrase); and the third is an example of a passive pattern. Each is translated into an "event" predication in logical form (the first two with the predicate leave-job, the third with the predicate succeeds).</Paragraph>
    <Paragraph position="2"> reference resolution Reference resolution links the "He" in the second sentence to the most recent previously mentioned person, in this case Mr. James.</Paragraph>
    <Paragraph position="3"> response generation: inferencing At this point our discourse structure contains three predicates: Mr. James is leaving as chief executive officer; Mr. James is leaving as chairman; and Mr. Dooner is succeeding Mr. James. In processing the succeeds predicate, the inferencing component notes that we have explicit information on the positions that Mr. James is vacating, but not on the positions Mr. Dooner is taking on. It therefore adds event predicates asserting that Mr. Dooner is starting the jobs which Mr. James is vacating: chairman and chief executive officer.</Paragraph>
    <Paragraph position="4"> Once this has been done, the event predicates are organized by the company and position involved (since this is how the templates are structured) and then converted to templates.</Paragraph>
    <Paragraph position="5">  Our relative standing on these tasks for the most part accorded with the effort we invested in the tasks over the last few months.</Paragraph>
    <Paragraph position="6"> For Named Entity, our pattern set built on work done for previous MUCs. From mid-August to early September we spent several weeks tuning Named Entity annotation, using the Dry Run Test corpus for training, and pushed our performance to 90% recall, 94% precision on that corpus. Our results on the formal test, as could be expected, were a few points lower. There was no shortage of additional patterns to add in order to improve performance (a few are discussed in connection with our walkthrough message), but at that point our focus shifted entirely to the Scenario Template task.</Paragraph>
    <Paragraph position="7"> For the Scenario Template task, we spent the first week studying the corpus and writing some of the basic code needed for the pattern-matching approach, which we were trying for the first time. The remainder of the time was a steady effort of studying texts, adding patterns, and reviewing outputs. Our first run was made 10 days into the test period; we reached 29% recall one week after the first run and 48% two weeks after the first run; our final run on the training corpus reached 54% recall (curiously, precision hovered close to 70% throughout the development period).</Paragraph>
    <Paragraph position="8"> For the final system, we attempted to fill all the slots, but did not address some of the finer details of the task. We did not record "interim" occupants of positions, did not do the time analysis required for ON_THE_JOB (we just used NEW_STATUS), and did not distinguish related from entirely different organizations in the REL_OTHER_ORG slot. In general, it seemed to us that -- given the limited time -- adding more patterns yielded greater benefits than focusing on these details.</Paragraph>
    <Paragraph position="9"> NYU did relatively well on the Scenario Template task. We can hardly claim that this was the result of a new and innovative system design, since our goal was to gain experience and insight with a design which others had proven successful. Perhaps it was a result of including patterns beyond those found in the formal training. In particular, we
* added syntactic variants (relatives, reduced relatives, passives, etc.) of patterns even if the variants were not themselves observed in the training corpus
* studied some 1987 Wall Street Journal articles related to promotions (in particular, we searched for the phrase "as president"), and added the constructs found there
Perhaps we just stayed up late a few more nights than other sites.</Paragraph>
    <Paragraph position="10"> We did not do any work specifically for the Coreference and Template Element tasks, although our performance on both these tasks gradually improved as a result of work focussed on Scenario Templates.
Performance on the Walkthrough Message
Performance on the walkthrough message was not very different from that on the test corpus as a whole. In addition, the walkthrough article pointed out a bug in the propagation of type information from initial mentions to subsequent mentions of a name.</Paragraph>
    <Paragraph position="11"> Two errors accounted for most of our incorrect slots on the Scenario Template task. First, we did not have "hire" among our set of appoint verbs, which included "appoint", "name", "promote", and "elect"; this caused us to lose one entire succession event. (It also led to NE and TE errors, since we did not have the context pattern "hired from ..." which would have led us to tag "J. Walter Thompson" as a company in the phrase "hired from J. Walter Thompson".) Second, we generated duplicate (spurious) instances of the IN_AND_OUT templates for the "chief executive officer" position.</Paragraph>
  </Section>
  <Section position="4" start_page="173" end_page="173" type="metho">
    <SectionTitle>
THE ROLE OF SYNTAX
</SectionTitle>
    <Paragraph position="0"> The goal we had set for ourselves was to "do a MUC" using the pattern-matching approach, in order to better understand the relative strengths and weaknesses of the pattern-matching (partial-parsing) and full-parsing approaches. We consider ourselves successful in meeting this goal; we implemented the pattern-matching scheme quickly and did quite well in generating Scenario Templates. And the approach did indeed mitigate the shortcomings of the full-parsing approach which we outlined in the introduction.</Paragraph>
    <Paragraph position="1"> We also experienced first-hand some of the shortcomings of the partial-parsing, semantic-pattern approach.</Paragraph>
    <Paragraph position="2"> Syntax analysis provides two main benefits: it provides generalizations of linguistic structure across different semantic relations (for example, that the structure of a main clause is basically the same whether the verb is "to succeed" or "to fire"), and it captures paraphrastic relations between different syntactic structures (for example, between "X succeeds Y", "Y was succeeded by X", and "Y, who succeeded X"). These benefits are lost when we encode individual semantic structures. In particular, in our system, we had to separately encode the active, passive, relative, reduced relative, etc. patterns for each semantic structure. These issues are hardly new; they have been well known at least since the syntactic grammar vs. semantic grammar controversies of the 1970's.</Paragraph>
    <Paragraph position="3"> How, then, to gain the benefits of clause-level syntax within the context of a partial-parsing system? One approach, which we have implemented in the weeks since MUC, has been clause-level patterns which are expanded by metarules. As a simple example of a clause-level pattern, consider
(defclausepattern runs
  "np-sem(C-person) vg(C-run) np-sem(C-company) :
   person-at=1.attributes, verb-at=2.attributes, company-at=3.attributes"
  when-run)
This specifies a clause with a subject of class C-person, a verb of class C-run (which includes "run" and "head"), and an object of class C-company. This is expanded into patterns for the active clause ("Fred runs IBM"), the passive clause ("IBM is run by Fred."), relative clauses ("Fred, who runs IBM, ..." and "IBM, which is headed by Fred, ..."), reduced relative clauses ("IBM, headed by Fred, ...") and conjoined verb phrases ("... and runs IBM", "and is run by Fred").</Paragraph>
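The metarule expansion can be sketched in plain Python (this is an illustration of the idea, not the LISP defclausepattern machinery; the separate active/passive verb-group slots are our own simplification).

```python
# Sketch: expand one subject-verb-object clause pattern into the
# syntactic variants enumerated above.

def expand_clause(subj, verb_active, verb_passive, obj):
    """Return the syntactic variants generated for one S-V-O pattern."""
    return {
        "active":           [subj, verb_active, obj],
        "passive":          [obj, "is", verb_passive, "by", subj],
        "relative":         [subj, ", who", verb_active, obj],
        "passive-relative": [obj, ", which is", verb_passive, "by", subj],
        "reduced-relative": [obj, ",", verb_passive, "by", subj],
        "conjoined-vp":     ["and", verb_active, obj],
    }

variants = expand_clause("np-sem(C-person)", "vg-active(C-run)",
                         "vg-passive(C-run)", "np-sem(C-company)")
```

One base pattern thus yields six matchable variants, which is why the metarule both shrinks the pattern set and improves coverage over hand expansion.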
    <Paragraph position="4"> Using defclausepattern reduced the number of patterns required and, at the same time, slightly improved coverage because -- when we had been expanding patterns by hand -- we had not included all expansions in all cases.</Paragraph>
    <Paragraph position="5"> The use of clause-level syntax to generate syntactic variants of a semantic pattern is even more important if we look ahead to the time when such patterns will be entered by users rather than computational linguists. We can expect a computational linguist to consider all syntactic variants, although it may be a small burden; we cannot expect the same of a typical user.</Paragraph>
    <Paragraph position="6"> We expect that users would enter patterns by example, and would answer queries to create variants of the initial pattern. As a first step in this direction, we have coded a non-interactive pattern-by-example procedure which takes a sentence which is prepared as an exemplar of a pattern, analyzes it with the stages of pattern matching described above, and then converts the resulting units to elements of a pattern. Each of the four exemplars would be converted to a pattern; the system would recognize that this has the basic form of a clause and generate a corresponding defclausepattern which generates the predicate given by :event. In order for this to be a viable entry procedure for non-specialists, this will have to be made into an interactive interface, and difficult issues will have to be addressed about how sentence constituents should be appropriately generalized to create pattern elements. However, this begins to suggest how users without detailed system knowledge might be able to create suitable patterns.</Paragraph>
    <Paragraph position="7"> These most recent explorations also indicate how syntax can "creep back" into a system from which it was unceremoniously ejected. In the pattern-matching approach, we no longer have a monolithic grammar, but we are now able to take advantage of the syntactic regularities of both noun phrases and clauses. Noun group syntax remains explicit, as one phase of pattern matching. Clause syntax is now utilized in the metarules for defining patterns and in the rules which analyze example sentences to produce patterns.</Paragraph>
  </Section>
</Paper>