<?xml version="1.0" standalone="yes"?>
<Paper uid="P04-1028">
  <Title>Mining metalinguistic activity in corpora to create lexical resources using Information Extraction techniques: the MOP system</Title>
  <Section position="3" start_page="0" end_page="0" type="intro">
    <SectionTitle>1 Introduction</SectionTitle>
    <Paragraph position="0">Availability of large-scale corpora has made it possible to mine specific knowledge from free or semi-structured text, resulting in what many consider by now a reasonably mature NLP technology. Extensive research in Information Extraction (IE) techniques, especially through the series of Message Understanding Conferences of the nineties, has focused on tasks such as creating and updating databases of corporate joint ventures or terrorist and guerrilla attacks, while the ACQUILEX project used similar methods to create lexical databases from the highly structured environment of machine-readable dictionary entries and other resources. Gathering knowledge from unstructured text often requires manually crafting knowledge-engineering rules that are both complex and deeply dependent on the domain at hand, although some successful experiences using learning algorithms have been reported (Fisher et al., 1995; Chieu et al., 2003).</Paragraph>
    <Paragraph position="1">Although mining specific semantic relations and subcategorization information from free text has been carried out successfully in the past (Hearst, 1999; Manning, 1993), automatically extracting lexical resources (including terminological definitions) from text in special domains has been explored less. Recent experiences (Klavans et al., 2001; Rodriguez, 2001; Cartier, 1998) show, however, that compiling the extensive resources that modern scientific and technical disciplines need in order to manage the explosive growth of their knowledge is both feasible and practical. A good example of this need for NLP-based processing is the MedLine abstract database maintained by the National Library of Medicine (NLM), which incorporates around 40,000 Health Sciences papers each month. Researchers depend on these electronic resources to keep abreast of their rapidly changing field. In order to maintain and update vital indexing references such as the Unified Medical Language System (UMLS) resources and the MeSH and SPECIALIST vocabularies, the NLM staff needs to review 400,000 highly technical papers each year. Clearly, neology detection, terminological information updates and other tasks can benefit from applications that automatically search text for information, e.g., when a new term is introduced or an existing one is modified due to data- or theory-driven concerns, or, in general, when new information about sublanguage usage is put forward. But the usefulness of robust NLP applications for special-domain text goes beyond glossary updates. The kind of categorization information implicit in many definitions can help improve anaphora resolution, semantic typing or acronym identification in these corpora, as well as enhance &quot;semantic rerendering&quot; of special-domain ontologies and thesauri (Pustejovsky et al., 2002).</Paragraph>
    <Paragraph position="2">In this paper we describe and evaluate the MOP system.</Paragraph>
  </Section>
</Paper>