<?xml version="1.0" standalone="yes"?>
<Paper uid="P98-2195">
  <Title>Natural Language Access to Software Applications</Title>
  <Section position="2" start_page="1193" end_page="1193" type="metho">
    <SectionTitle>
1 The Speech Recognition Module
</SectionTitle>
    <Paragraph position="0"> Speech is the most natural form of communication for people and is felt to greatly extend the range of potential applications suitable for an NL interface.</Paragraph>
    <Paragraph position="1"> MELISSA currently adopts a 'black-box' approach to speech recognition, viz., speech is just an alternative to a keyboard. The results of speech recognition are stored and can be retrieved by sending a request to the component. The speech component itself can be controlled by voice commands. Before using the SRM, speakers have to 'train' it in order to adjust the general voice model to the specific speaker's voice characteristics.</Paragraph>
    <Paragraph position="2"> The speech interface sends recognized utterances as strings to other MELISSA components, but is not able to interact on a higher level with those components. In a subsequent phase the feedback and co-operation between the MELISSA core components and the SRM will be addressed.</Paragraph>
  </Section>
  <Section position="3" start_page="1193" end_page="1194" type="metho">
    <SectionTitle>
2 The Linguistic Processing Module
</SectionTitle>
    <Paragraph position="0"> The core of the LPM is based on the Advanced Language Engineering Platform (ALEP), the EU Commission's standard NLP development platform \[Simpkins 94\]. ALEP provides the functionality for efficient NLP: a 'lean' linguistic formalism (with term unification) providing typed feature structures (TFSs), an efficient head scheme based parser, rule indexation mechanisms, a number of devices supporting modularization and configuration of linguistic resources, e.g. an interface format supporting information flow from SGML-encoded data structures to TFSs (thus enabling straightforward integration of 'low-level' processing with deep linguistic analysis), the refinement facility allowing for separating parsing and 'semantic decoration', and the specifier mechanism allowing for multi-dimensional partitioning of linguistic resources into specialized sub-modules.</Paragraph>
    <Paragraph position="1"> For the first time ALEP is used in an industrial context. In the first place, core components of ALEP (parser, feature interpreter, linguistic formalism) are used as the basis of the MELISSA LPM. In the second place, ALEP is used as the development platform for the MELISSA lingware.</Paragraph>
    <Paragraph position="2"> The coverage of the linguistic resources for the first MELISSA prototype was determined by a thorough user needs analysis. The application dealt with was an administrative purchase and acquisition handling system at the Spanish organization of blind people, ONCE.</Paragraph>
    <Paragraph position="3"> The following is an outline of solutions realized in the LPM for text handling, linguistic analysis and semantic representation.</Paragraph>
    <Section position="1" start_page="1193" end_page="1194" type="sub_section">
      <SectionTitle>
2.1 Text Handling
</SectionTitle>
      <Paragraph position="0"> The TH modules for MELISSA (treating phenomena like dates, measures, codes (pro-nr. 123/98-al-T4), abbreviations, but also multiple word units and fixed phrases come as independent Perl pre-processors for pattern recognition, resulting in a drastic improvement of efficiency and a dramatic expansion of coverage.</Paragraph>
      <Paragraph position="1"> Within the general mark up strategy for words a module has been added which allows the treatment of specific sequences of words building units.</Paragraph>
      <Paragraph position="2"> Once those patterns have been recognized and concatenated into one single unit, it is easy to convert them to some code required by the application. Precisely this latter information is then delivered to the grammar for further processing. For one application in MELISSA it is, for example, required to recognize distinct types of proposals and to convert them into numeric codes (e.g.</Paragraph>
      <Paragraph position="3"> '6rdenes de viaje' into the number '2019'.) The TH components allow for an expansion of the coverage of the NLP components. Experiments have already been made in integrating simple POS-tagging components and in passing this information to the ALEP system \[Declerck &amp; Maas 97\]. Unknown words predictable for their syntactic behaviour can be identified, marked and represented by a single default lexical entry in the ALEP lexicon. In one practical experiment, this meant the deletion of thousands of lexieal entries.</Paragraph>
      <Paragraph position="4"> The default mechanism in ALEP works as follows, during parsing ALEP applies the result of lexieal look-up to each of the terminal nodes; if this fails then ALEP will look at lexical entries which contain a default specifier to see whether any of them matches (typically these are underspecifed for string value, but fully specified for syntactic category etc.). Clearly without valency information such an approach is limited (but nevertheless useful). Future work will focus on the (semi) null automatic identification of this information in the pre-processing.</Paragraph>
      <Paragraph position="5"> The modular design of the TH components (distinction of application specific TH phenomena and general ones) allows for a controlled extension to other languages and other applications.</Paragraph>
    </Section>
    <Section position="2" start_page="1194" end_page="1194" type="sub_section">
      <SectionTitle>
2.2 Linguistic Analysis
</SectionTitle>
      <Paragraph position="0"> Based on experiences from previous projects \[Schmidt et al. 96\], mainstream linguistic concepts such as HPSG are adopted and combined with strategies from the &amp;quot;lean formalism paradigm'.</Paragraph>
      <Paragraph position="1"> For MELISSA, a major issue is to design linguistic resources which are transparent, flexible and easily adaptable to specific applications. In order to minimize configuration and extension costs, lingware for different languages is designed according to the same strategies, guaranteeing maximal uniformity. This is realized in semantics. All language modules use the same type and feature system.</Paragraph>
      <Paragraph position="2"> Macros provide an important means of supporting modularity and transparency. They are extensively used for encoding lexieal entries as well as structural rules. Structural macros mostly encode HPSG-like ID schemes spelled out in category-specific grammar rules. Structural macros are largely language-independent, but also lexical macros will be 'standardized' in order to support transparency and easy maintenance.</Paragraph>
      <Paragraph position="3"> The second major issue in linguistic analysis is efficiency of linguistic processing. Efficiency is achieved e.g. by exploiting the lingware partitioning mechanisms of ALEP. Specifier feature structures encode which subpart of the lingware a rule belongs to. Thus for each processing step, only the appropriate subset of rules is activated.</Paragraph>
      <Paragraph position="4"> Efficient processing of NL input is also supported by separation of the 'analysis' stage and one or several 'refinement&amp;quot; stages. During the analysis stage, a structural representation of the NL input is built by a el. grammar, while the refinement stage(s) enriches the representation with additional information. Currently, this is implemented as a two-step approach, where the analysis stage computes purely syntactic information, and the refinement adds semantic information (keeping syntactic and semantic ambiguities separate). In the future we will use further refinement steps for adding application-specific linguistic information.</Paragraph>
    </Section>
    <Section position="3" start_page="1194" end_page="1194" type="sub_section">
      <SectionTitle>
2.3 Semantic Representation
</SectionTitle>
      <Paragraph position="0"> During linguistic analysis, compositional semantic representations are simultaneously encoded by reeursive embedding of semantic feature structures as well as by a number of features encoding distinct types of semantic facts (e.g. predications, argument relations) in terms of a unique wrapper data type, so called 'sf-terms' (SFs). Links be- null tween semantic facts arc established through variable sharings as (2) shows: (i) Elaborate new proposal (2) t sem: {</Paragraph>
      <Paragraph position="2"> The flat list of all SFs representing the meaning of an NL input expression is the input data structure for the SAM.</Paragraph>
      <Paragraph position="3"> Besides predicate argument structure and modification, the semantic model includes functional semantic information (negation, determination, quantification, tense and aspect) and lexical semantics. The SF-encoding scheme carries over to these facets of semantic information as well.</Paragraph>
      <Paragraph position="4"> Special data types which are re, cognized and marked up during TH and which typically correspond to basic data types in the application functionality model, are diacritically encoded by the special wrapper-type 'type', as illustrated in (4) for an instance of a code expression:  (3) proposal of type 2019 (4) t sem:{</Paragraph>
      <Paragraph position="6"/>
    </Section>
  </Section>
  <Section position="4" start_page="1194" end_page="1195" type="metho">
    <SectionTitle>
3 Modelling of Application Knowledge
</SectionTitle>
    <Paragraph position="0"> Two distinct but related models of the host application are required within MELISSA. On the one hand, MELISSA has to understand which (if any) function the user is trying to execute. On the other hand, MELISSA needs to know whether such a functional request can be executed at that instant.</Paragraph>
    <Paragraph position="1"> The basic ontological assumption underpinning each model is that any application comprises a number of functions, each of which requires zero or more parameters.</Paragraph>
    <Section position="1" start_page="1194" end_page="1195" type="sub_section">
      <SectionTitle>
3.1 The SAM Model
</SectionTitle>
      <Paragraph position="0"> The output of the LPM is basically application independent. The SAM has to interpret the semantic output of the LPM in terms of a specific application. Fragments of NL are inherently ambiguous.</Paragraph>
      <Paragraph position="1"> Thus, in general, this LPM output will consist of a number of possible interpretations. The goal of the SAM is to identify a unique function call for the specific application. This is achieved by a (do null main-independent) matching process, which attempts to unify each of the LPM results with one or more so-called mapping rules. Heuristic criteria, embodied within the SAM algorithm, enable the best interpretation to be identified. An example criterion is the principle of 'Maximal Consumption', by which rules matching a greater proportion of the SFs in an LPM result are preferred.</Paragraph>
      <Paragraph position="2"> Analysis of the multiple, application-independent semantic interpretations depends on the matching procedure performed by the SAM, and on the mapping rules. (5) is a mapping rule:  (5) rule(elaborate(3), -- (a) \[elaborate, elaboration, make, create, creation, introduce\], -- (b) \[arg (agent, elaborate, ), arg(theme, elaborate, proposal), mod(concern, proposal, type (proptype ( PropType ) ) ) \] , -- (c)</Paragraph>
      <Paragraph position="4"> Each mapping rule consists of an identifier (a), a list of normalised function-word synonyms (b), a list of SFs (e), and finally, a simple term representing the application function to be called, together with its parameters (d).</Paragraph>
      <Paragraph position="5"> The SAM receives a list of SF lists from the LPM.</Paragraph>
      <Paragraph position="6"> Each list is considered in turn, and the best interpretation sought for each. All of the individual 'best results' are assessed, and the overall best result returned. This overall best is passed on to the FGM, which can either execute, or start a dialogue.</Paragraph>
      <Paragraph position="7"> The SFs embody structural semantic information, but also very important constraint information, derived from the text-handling. Thus in the example rule above, it can clearly be seen that the value of 'PropType&amp;quot; must already have been identified (i.e. during text handling) as being of the type 'proptype'. In particular cases this allows for disambiguation. null</Paragraph>
    </Section>
    <Section position="2" start_page="1195" end_page="1195" type="sub_section">
      <SectionTitle>
3.2 The Application State Model
</SectionTitle>
      <Paragraph position="0"> It is obvious that NL interfaces have to respond in a manner as intelligent as possible. Clearly, certain functions can only be called if the application is in a certain state (e.g. it is a precondition of the function call 'print_file' that the relevant file exists and is printable). These 'application states' provide a means for assessing whether or not a function call is currently permitted.</Paragraph>
      <Paragraph position="1"> A standard application can reasonably be described as a deterministic finite state automaton. A state can only be changed by the execution of one of the functions of the application. This allows for modelling an application in a monotonic fashion and thus calls for a representation in terms of the predicate calculus. From amongst a number of alternatives, the New Event Calculus (NEC) was chosen \[Sadri &amp; Kowalski 95\] as an appropriately powerful formalism for supporting this state modelling. NEC allows for the representation of events, preconditions, postcondifions and time intervals between events. NEC is appropriate for modelling concurrent, event-driven transitions between states. However, for single-user applications, without concurrent functionality, a much simpler formalism, such as, for example, STRIPSlike operators, will be perfectly adequate.</Paragraph>
      <Paragraph position="2"> In terms of implementation methodology, the work to be done is to specify the application specific predicates. The state model of the application contains as components a set of functions which comprise the application, a set of precondmons that must be fulfilled in order to allow the execution of each function, and a set of consequences that results from the execution of a function.</Paragraph>
      <Paragraph position="3"> Both preconditions and consequences are composed of a subset of the set of propositions which comprise the current application state. There exists a set of relations between the components: A function must fulfil preconditions and produces a set of consequences. The set of preconditions iscomposed-of facts. The same holds for the set of consequences and the application state. (6) gives a summary for a simple text editor. ('F' = some file).</Paragraph>
      <Paragraph position="4">  (6) Preconditions: create(F), \[not (exists(F)) \] ).</Paragraph>
      <Paragraph position="5"> open (F) , \[exists (F) , not (open (F)) \] ) .</Paragraph>
      <Paragraph position="6">  close(F), \[exists(F),open(F)\] ).</Paragraph>
      <Paragraph position="7"> delete(F), \[exists(F)\] ).</Paragraph>
      <Paragraph position="8"> edit(F), \[exists (F) ,open(F) \] ).</Paragraph>
      <Paragraph position="9"> save(F), \[exists(F),open(F),modified(F)\] ).</Paragraph>
      <Paragraph position="10"> spell_check(F), \[exists(F) ,open(F) \] ) .</Paragraph>
      <Paragraph position="11"> a) Postconditions: Facts to be added add(create(F), \[exists(F)\] ) .</Paragraph>
      <Paragraph position="12"> add (open (F) , \[open (F) \] ) .</Paragraph>
      <Paragraph position="13"> add(close(F), \[\] ).</Paragraph>
      <Paragraph position="14"> add(delete(F), \[\] ).</Paragraph>
      <Paragraph position="15"> add (edit (F), \[modified (F) \] ) .</Paragraph>
      <Paragraph position="16"> add(save(F), \[saved(F)\] ) .</Paragraph>
      <Paragraph position="17"> add(spell_check(F), \[modified(F)\] ).</Paragraph>
      <Paragraph position="18"> b) Postconditions: Facts to be deleted del(create(F), \[\] ).</Paragraph>
      <Paragraph position="19"> del(open(F), \[\] ).</Paragraph>
      <Paragraph position="20"> del (close (F), \[open (F) \] ) .</Paragraph>
      <Paragraph position="21"> del (delete (F) , \[exists (F) \] ) .</Paragraph>
      <Paragraph position="22"> del (edit (F), \[\] ).</Paragraph>
      <Paragraph position="23"> del (save (F), \[modified (F) \] ) .</Paragraph>
      <Paragraph position="24"> del(spell_check(F), \[\] ) .</Paragraph>
      <Paragraph position="25"> A simple planner can be used to generate remedial suggestion to the user, in eases where the desired function is currently disabled.</Paragraph>
    </Section>
  </Section>
  <Section position="5" start_page="1195" end_page="1196" type="metho">
    <SectionTitle>
4 Adopted Solutions
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="1195" end_page="1196" type="sub_section">
      <SectionTitle>
4.1 Standardisation and Methodologies
</SectionTitle>
      <Paragraph position="0"> Throughout the design phase of the project an object oriented approach has been followed using  the Unified Modelling Language \[Beech et al. 97\] as a suitable notation. It is equally foreseen to actually propose an extension to this standard notation with linguistic and knowledge related aspects. This activity covers part of the 'Methodology and Standards' aspects of the project. Other activities related to this aspect are concerned with 'knowledge engineering', 'knowledge modelling', and 'language engineering' (e.g. linguistic coverage analysis). Methodologies are being developed that define the steps (and how to carry them out) from a systematic application analysis (a kind of reverse-engineering) to the implementation of a usable (logical and physical) model of the application. This model can be directly exploited by the MELISSA software components.</Paragraph>
    </Section>
    <Section position="2" start_page="1196" end_page="1196" type="sub_section">
      <SectionTitle>
4.2 Interoperability
</SectionTitle>
      <Paragraph position="0"> As stated in the introduction, CORBA \[Ben-Natan 1995\] is used as the interoperability standard in order for the different components to co-operate.</Paragraph>
      <Paragraph position="1"> The component approach, together with CORBA, allows a very flexible (e.g. distributed) deployment of the MELISSA system. CORBA allows software components to invoke methods (functionality) in remote objects (applications) regardless of the machine and architecture the called objects reside on. This is particularly relevant for calling functious in the 'hosting' application. The NL input processing by the MELISSA core components (themselves communicating through CORBA) must eventually lead to the invoking of some function in the targeted application. In many cases this can be achieved through CORBA interoperability techniques (e.g. object wrapping).</Paragraph>
      <Paragraph position="2"> This approach will enable developers to provide existing (legacy) applications with an NL interface without having to re-implement or reverse engineer such applications. New applications, developed with components and distributed processing in mind, can integrate MELISSA components with little development effort.</Paragraph>
    </Section>
    <Section position="3" start_page="1196" end_page="1196" type="sub_section">
      <SectionTitle>
4.3 Design and Implementation
</SectionTitle>
      <Paragraph position="0"> The software design of all components has followed the object-oriented paradigm. The SRM for example is implemented based on a hierarchical collection of classes. These classes cover for instances software structures focused on speech recognition and distributed computing using CORBA. In particular the speech recognition classes were implemented to be independent of various speech recognition programming interfaces, and are expandable. Vocabularies, dictionaries and user specific settings are handled by specific classes to support the main speech application class. Commands can easily be mapped to the desired functionality. Speech recognition resuits are stored in conjunction with scores, confumed words and their alternatives. Other MELISSA components can access these results through CORBA calls.</Paragraph>
    </Section>
  </Section>
class="xml-element"></Paper>