<?xml version="1.0" standalone="yes"?> <Paper uid="W91-0116"> <Title>A SEMANTIC INTERPRETER FOR SYSTEMIC GRAMMARS</Title> <Section position="1" start_page="0" end_page="0" type="metho"> <SectionTitle> A SEMANTIC INTERPRETER FOR SYSTEMIC GRAMMARS </SectionTitle> <Paragraph position="0"> Tim F. O'Donoghue timmy@uk.ac.leeds.ai</Paragraph> </Section> <Section position="2" start_page="0" end_page="131" type="metho"> <SectionTitle> ABSTRACT </SectionTitle> <Paragraph position="0"> This paper describes a method for obtaining the semantic representation for a syntax tree in Systemic Grammar (SG). A prototype implementation of this method -- the REVELATION1 semantic interpreter -- has been developed. It is derived from a SG generator for a large subset of English -- GENESYS -- and is thus, in contrast with most reversible grammars, an interpreter based on a generator. A task decomposition approach is adopted for this reversal process which operates within the framework of SG, thus demonstrating that Systemic Grammars can be reversed and hence that a SG is a truly bi-directional formalism.</Paragraph> <Paragraph position="1"> Introduction. SG (see Butler \[4\] for a good introduction) is a useful model of language, having found many applications in the areas of stylistics, text analysis, educational linguistics and artificial intelligence. Some of these applications have been computational, the best known probably being Winograd's SHRDLU \[22\]. However, most computational applications have been designed from a text-generation viewpoint (such as Davey's PROTEUS \[6\], Mann and Matthiessen's NIGEL \[16, 17\] and Fawcett and Tucker's GENESYS \[10\]).</Paragraph> <Paragraph position="2"> Because of this text-generation viewpoint of systemic grammarians, the mechanism for sentence analysis within SG (the reverse of the sentence generation process) has received much less attention. This paper describes one stage of the sentence analysis process: semantic interpretation.1 (Footnote 1: This assumes that sentence analysis can be decomposed into two processes: syntactic analysis (parsing) plus semantic interpretation. These processes are not necessarily sequential, although it greatly simplifies things if they are treated as such. Here I assume a sequential scheme in which the parsing process passes a syntax tree to the interpreter.)</Paragraph> <Paragraph position="3"> In Fawcett's SG,2 a syntax tree (whose leaves define a sentence) is generated from a set of semantic choices. REVELATION1 reverses this process: it attempts to find the set of semantic choices needed to generate a given syntax tree. In sentence generation, a tree is generated, but only the leaves (the words) are 'output'; the rest of the tree is simply deleted. In the reverse process, when a sentence is input, a syntax tree must first be found before it can be interpreted. REVELATION1 assumes that a separate SG parser (not discussed here) is available; for an example of such a parser see O'Donoghue \[20\].</Paragraph> <Paragraph position="4"> Thus REVELATION1 directly mirrors the generator, while the parser mirrors the tree deletion process.</Paragraph> <Paragraph position="5"> REVELATION1 has been developed within the POPLOG environment.3 It is coded in a combination of POP-11 \[1\] and Prolog \[5\] and utilizes GENESYS. GENESYS is a very large SG generator for English, written in Prolog (Fawcett and Tucker \[10\]), and it is version PG1.5 that has been used for the development and testing of REVELATION1. GENESYS and REVELATION are part of a much larger project, the COMMUNAL4 Project, the aim of which is to build a natural language interface to a rich SG oriented IKBS in the domain of personnel management.</Paragraph>
<Paragraph position="6"> This is not the first time a semantic interpreter has been attempted for a large SG: Kasper \[13\] has developed a sentence analysis method (both parsing and interpretation together in my terminology) based on the NIGEL grammar. In his approach the SG is compiled into a Functional Unification Grammar (FUG; see Kay \[14\]), a representation language with some systemic links, and then existing (but extended) FUG parsing techniques are used to find a syntax tree plus an interpretation for a sentence.</Paragraph> <Paragraph position="7"> The REVELATION1 approach differs from this compilation method since the interpretation is achieved within a systemic framework. No other SG-based model (to my knowledge) has been used for both generation and interpretation in this way.</Paragraph> <Section position="1" start_page="129" end_page="131" type="sub_section"> <SectionTitle> Systemic Grammar </SectionTitle> <Paragraph position="0"> Fawcett's SG is a meaning-oriented model of language in which there is a large 'and/or' network of semantic features, defining the choices in meaning that are available. A syntax tree is generated by traversing multiple paths (see later) through this network, making choices in meaning, and so defining the meaning of the syntax tree to be generated. These choices fire realization rules which specify the structural implications of choosing certain features; they map the semantic choices onto syntactic structures and so transform the chosen meaning into a syntax tree (and hence a sentence) which expresses that meaning.</Paragraph> <Paragraph position="1"> As an example of the syntax trees which are generated by PG1.5, consider Figure 1. Systemic syntax trees consist of a number of levels of structure called units; the tree in Figure 1 has four: one clause (Cl), two nominal groups (ngp) and a quantity-quality group (qqgp). The components (immediate constituents) of each unit are elements of structure labelled to represent the functions fulfilled by that component with respect to its unit. For example: subject (S), main verb (M), second complement (C2) and ender (E) in the clause, superlative determiner (ds) and head (h) in the nominal groups, superlative deictic determiner (dds) and apex (a) in the quantity-quality group. Some items may expound more than one function in a unit, e.g. &quot;is&quot; functions both as operator (O) and period-marking auxiliary (Xpd) in the clause and is labelled with the conflated functional label O/Xpd. Some elements may be conflated with participant roles, e.g. &quot;who&quot; is a subject playing the role of agent (Ag) and so is labelled S/Ag. Similarly &quot;the big+est one&quot; is a complement playing the role of affected (Af) and hence is labelled C2/Af. The immediate constituent of an element of structure is either a lexical item (either a word or punctuation, in which case we say that the item expounds the element, e.g. the lexical item &quot;one&quot; expounds h), or a further unit (when we say a unit fills the element, e.g. the unit qqgp fills ds).</Paragraph> <Paragraph position="2"> Having introduced the type of syntax tree that is generated, let us now consider the actual process of generation. The key concept in SG is that of choice between small sets of meanings (systems of semantic features). For example, the NUMBER system contains the choices singular and plural. The choice systems in a systemic grammar are linked together by 'and' and 'or' relationships to form a complex system network, specifying the preconditions and consequences of choosing features. Consider the system network presented in Figure 2 (an excerpt from PG1.5, which contains ~450 systems, some containing many more features than the binary systems illustrated in this example). In the systemic notation curly braces represent conjunctions and vertical bars represent exclusive disjunctions, i.e. choice. The upper-case labels are the names of systems and the lower-case labels are the names of the features in those systems. Each system has an entry condition: a precondition which must be met in order to enter the system and make a choice. For example, to enter the RETRO (retrospectivity) and EXPECT (expectation) systems, information must have been chosen. To enter the PFE (past from expectation) system, both not-retrospective and expect must have been chosen. The sets of choices in meaning defined by this network fragment are listed in Figure 3. Each set of choices is a selection expression, i.e. a path (typically bifurcating) through the network.</Paragraph> <Paragraph position="3"> \[Figure 3: the selection expressions defined by the network fragment of Figure 2, each paired with its realization, e.g. retrospective, expect, immediate-expect: &quot;has been about to touch&quot;; retrospective, expect, unmarked-expect: &quot;has been going to touch&quot;; not-retrospective, expect, immediate-expect, past-from-expect: &quot;is about to have touched&quot;; not-retrospective, expect, immediate-expect, not-past-from-expect: &quot;is about to touch&quot;; not-retrospective, expect, unmarked-expect, past-from-expect: &quot;is going to have touched&quot;; not-retrospective, expect, unmarked-expect, not-past-from-expect: &quot;is going to touch&quot;.\]</Paragraph>
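<Paragraph> To make the traversal mechanism concrete, the following is a minimal sketch, in Prolog (the language GENESYS itself is written in), of how a network fragment such as Figure 2 might be encoded and walked. The predicate names, the chosen/and/or term encoding and the system/3 representation are assumptions introduced purely for illustration; they are not PG1.5's actual representation.

    :- use_module(library(lists)).   % member/2

    % system(Name, EntryCondition, Features): one choice system per clause,
    % following the entry conditions described for Figure 2.
    system(retro,  chosen(information), [retrospective, 'not-retrospective']).
    system(expect, chosen(information), [expect, 'not-expect']).
    system(pfe,    and(chosen('not-retrospective'), chosen(expect)),
                   ['past-from-expect', 'not-past-from-expect']).

    % entry_met(+Condition, +SelExpr): the entry condition holds for the
    % features already chosen on this pass.
    entry_met(chosen(F), SelExpr) :- member(F, SelExpr).
    entry_met(and(A, B), SelExpr) :- entry_met(A, SelExpr), entry_met(B, SelExpr).
    entry_met(or(A, B),  SelExpr) :- ( entry_met(A, SelExpr) ; entry_met(B, SelExpr) ).

    % extend(+SelExpr0, -SelExpr): enter one enterable system and choose one
    % of its features; backtracking enumerates the alternative paths, so
    % repeated calls build up sets of choices of the kind listed in Figure 3.
    extend(SelExpr0, [Feature|SelExpr0]) :-
        system(_Name, Entry, Features),
        entry_met(Entry, SelExpr0),
        member(Feature, Features),
        \+ member(Feature, SelExpr0).
</Paragraph>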
<Paragraph position="4"> Associated with certain features are realization rules. In Figure 2, the bracketed numbers are pointers to realization rules; thus realization rule 15.7 is triggered by the feature past-from-expect. A realization rule specifies the structural consequences of choosing a feature; they map the semantic choices onto syntactic structures. Often conditions are involved; consider the realization rules shown in Figure 4 (PG1.5 contains ~500 realization rules, many of which are far more complex than these examples).</Paragraph> <Paragraph position="5"> \[Figure 4 (excerpt), e.g.: if period-marked then Xpd <+ &quot;en&quot; else if unmarked-passive then Xp <+ &quot;en&quot;.\]</Paragraph> <Paragraph position="6"> The main types of rule are: * Components Rules: El@N, stating that the element El is at place N in the current unit (thus placing an ordering on the components of the unit currently being generated). These place numbers are relative rather than physical; an element at place N states that the element (if realized) will appear after elements whose places are less than N and before those elements whose places are greater than N.</Paragraph> <Paragraph position="7"> * Filling Rules: is_filled_by U, defining the unit U to be generated, e.g.
U=ngp.</Paragraph> <Paragraph position="8"> * Conflation Rules: F2 by F1, stating that the two functions F1 and F2 are conflated with one another in the current unit (e.g. a Subject which also functions as an Agent).</Paragraph> <Paragraph position="9"> * Exponence Rules: El<Word, stating that the element El is expounded by an item (e.g. M<&quot;open&quot;), i.e. exponence creates terminal constituents.</Paragraph> <Paragraph position="10"> * Re-entry Rules: for F re_enter_at f, stating that the function F is filled by a unit which is generated by re-entering the network at the feature f.</Paragraph> <Paragraph position="11"> * Preference Rules: for F prefer \[f...\], stating that when re-entering to generate a unit to fill the function F, the features \[f...\] should be preferred, either absolutely (i.e. 'pre-selection') or tentatively (expressed as a percentage).</Paragraph> <Paragraph position="12"> A sentence is generated by generating the syntax tree for the sentence, the leaves in this tree being the words of the sentence. The tree is generated by generating each of its units in a top-down fashion. Each unit is generated by a single pass through the system network. This pass is expressed as a selection expression which lists the features chosen on that pass. For example, suppose we wanted to generate the sentence &quot;the prisoner was going to have been killed&quot;. The selection expression for the clause would contain the features (plus many others, of course): information, not-retrospective, expect, unmarked-expect, past-from-expect.</Paragraph> <Paragraph position="13"> Any realization rules associated with features in the selection expression are then executed. Rules 15.5 and 15.7 in Figure 4 generate a structure whose leaves are &quot;... going to ... have ... (be)en ...&quot;. For example, to generate the clause structure in Figure 1, the following realization statements were executed: is_filled_by Cl; S@35, O@37, M@94, C2@106, E@200; Ag by S, Xpd by O, Af by C2; O<&quot;is&quot;, M<&quot;open&quot;, M<+&quot;+ing&quot;, E<&quot;?&quot;; for Ag re_enter_at stereotypical_thing; for Af re_enter_at thing.</Paragraph> <Paragraph position="14"> The re-entry realization statements show which functions are to be filled by re-entering the system network to generate further units. Re-entry can be thought of as a recursive function call which generates a lower layer of structure in the tree -- but typically with some of the choices 'preferred' via the preference realization statement. Thus the syntax tree in Figure 1 is generated with four passes: the clause is generated first, followed by the nominal group filling the subject agent, followed by the nominal group filling the affected complement, and finally the quantity-quality group filling the superlative determiner.</Paragraph>
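<Paragraph> As a rough illustration of how such a batch of realization statements can be consumed, here is a minimal Prolog sketch. The term encoding (at/2, conflate/2, expound/2, suffix/2, re_enter/2) and the predicate names are invented for illustration and do not reproduce PG1.5's actual statement syntax; preference statements are omitted.

    :- use_module(library(apply)).   % foldl/4

    % apply_stmt(+Statement, +Record0, -Record): fold one realization
    % statement into a growing record of the unit being generated.
    apply_stmt(is_filled_by(U),  R, [unit(U)|R]).         % filling:    is_filled_by U
    apply_stmt(at(El, N),        R, [place(El, N)|R]).    % components: El@N
    apply_stmt(conflate(F2, F1), R, [conf(F2, F1)|R]).    % conflation: F2 by F1
    apply_stmt(expound(El, W),   R, [stem(El, W)|R]).     % exponence:  El expounded by W
    apply_stmt(suffix(El, S),    R, [suffix(El, S)|R]).   % exponence:  suffix added to El
    apply_stmt(re_enter(F, X),   R, [re_enter(F, X)|R]).  % re-entry:   for F re_enter_at x

    apply_pass(Statements, Record) :-
        foldl(apply_stmt, Statements, [], Record).

    % The clause pass listed above, in this encoding:
    % ?- apply_pass([is_filled_by(cl), at('S',35), at('O',37), at('M',94),
    %                at('C2',106), at('E',200), conflate('Ag','S'),
    %                conflate('Xpd','O'), conflate('Af','C2'), expound('O',is),
    %                expound('M',open), suffix('M','+ing'), expound('E','?'),
    %                re_enter('Ag',stereotypical_thing), re_enter('Af',thing)],
    %               Record).
</Paragraph>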
<Paragraph position="15"> The information required to generate a syntax tree can be expressed as a tree of selection expressions; this is the semantic representation for the sentence. Each node in the semantic representation corresponds to a unit in the syntax tree and is labelled by the selection expression for that unit. For example, a semantic representation of the form shown in Figure 5 is needed to generate the syntax tree in Figure 1.</Paragraph> </Section> <Section position="2" start_page="131" end_page="131" type="sub_section"> <SectionTitle> Interpreting a Syntax Tree </SectionTitle> <Paragraph position="0"> Given a syntax tree, the aim of interpretation is to find the semantic representation which would generate that tree. This semantic representation includes all the features that are needed to generate the tree and so defines the 'meaning content' of the syntax tree.</Paragraph> <Paragraph position="1"> In the process of generation, a syntax tree is generated by generating its units; in the process of interpretation a syntax tree is understood by interpreting all of its units in a precisely analogous way. Thus a unit interpretation is the selection expression which generated that unit. In general there can be more than one selection expression for any given unit, since the same syntactic structure can have more than one meaning, just as a whole sentence can have more than one meaning. The potential unit interpretations are defined by constructing an AO tree5 whose goal (root) is to prove unit realization, i.e. prove that the unit can be generated. Each potential solution of this AO tree defines a potential selection expression for the unit.</Paragraph> </Section> </Section> <Section position="3" start_page="131" end_page="137" type="metho"> <SectionTitle> 5 AO (AND/OR) trees provide a means for describing task decomposition </SectionTitle> <Paragraph position="0"> These structures were first proposed by Slagle \[21\] and have since been used in a variety of applications including symbolic integration and theorem proving (Nilsson \[18\] lists a number of applications with references). The AO tree notation used throughout this paper is illustrated in Figure 6, with leaves -- terminal tasks which cannot be decomposed any further -- being represented as bold nodes.</Paragraph> <Paragraph position="1"> The semantic representation for the tree is given by some feasible combination of the potential selection expressions for each unit in the syntax tree. Not all combinations of selection expressions are feasible since the generation passes, and hence the selection expressions, are interdependent. Thus: * A pass is dependent upon previous, higher passes through the use of the unary boolean operators on_prev_pass and on_first_pass in conditions inside realization rules. These operators are used to test the values of features in passes for higher units in the syntax tree. For example, the condition on_first_pass written is only true if written is selected on the first pass.</Paragraph> <Paragraph position="2"> * Subsequent passes are dependent upon the pre-selections that are made with the preference rules. For example, when generating a question (such as that in Figure 1) it is necessary to pre-select the feature seeking-specification-of-thing for the nominal group which fills the subject agent. This ensures that a Wh-subject is generated.</Paragraph> <Paragraph position="3"> The first step in unit interpretation is the decomposition of the unit's structure into a set of descriptors,6 each describing a different aspect of the unit's structure. For example, the descriptors required to describe the clause in Figure 1 include: ... re_enter(S,Ag), re_enter(C2,Af). (Footnote 6: ...realization; it attempts to capture the effect of the realization statements without using any of their syntax.) The unit descriptor specifies the name of the current unit, the el descriptor specifies the ordering of the component elements, the conf descriptor specifies the conflation relationships, the stem and suffix descriptors specify the items expounding lexical elements, and the re_enter descriptor specifies the non-terminal elements requiring a subsequent pass to generate a unit to fill them.</Paragraph>
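<Paragraph> A minimal Prolog sketch of this decomposition step is given below. The constituent encoding (element/2 terms holding either word/1 or filled/1) is an assumption made purely for illustration, and the conf and suffix descriptors are omitted for brevity.

    % describe_unit(+Name, +Constituents, -Descriptors): map a unit's observed
    % constituents, in left-to-right order, onto a set of descriptors.
    describe_unit(Name, Constituents, [unit(Name)|Descriptors]) :-
        describe_els(Constituents, 1, Descriptors).

    describe_els([], _, []).
    describe_els([element(El, word(W))|Rest], I, [el(I, El), stem(El, W)|Ds]) :-
        I1 is I + 1,
        describe_els(Rest, I1, Ds).
    describe_els([element(El, filled(Role))|Rest], I, [el(I, El), re_enter(El, Role)|Ds]) :-
        I1 is I + 1,
        describe_els(Rest, I1, Ds).

    % ?- describe_unit(cl, [element('S', filled('Ag')), element('O', word(is)),
    %                       element('M', word(open)), element('C2', filled('Af')),
    %                       element('E', word('?'))], Ds).
    % yields unit(cl), el(1,'S'), re_enter('S','Ag'), el(2,'O'), stem('O',is), ...
</Paragraph>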
<Paragraph position="4"> Decomposition 1 \[Unit Interpretation\] Unit interpretation is achieved by proving that the unit can be realized. To do this, the unit's structure is described, using a set of descriptors, and hence unit realization is decomposed into a number of separate realizations, with one realization for each descriptor. Thus unit interpretation is achieved by realizing all descriptors (hence realizing the whole unit). This decomposition is illustrated in Figure 7.</Paragraph> <Paragraph position="5"> Each descriptor is realized by some realization statement (which will appear somewhere in the realization rules). There is a simple mapping between descriptors and suitable realization statements. For some descriptors there is only a single suitable realization statement: for example, unit(U) is only realized by the statement is_filled_by U. Other descriptors can be realized by a number of different realization statements: for example, el(i,Eli) is realized by statements of the form Eli@N, where the place N is greater than the places for Eli-1 and less than the places for Eli+1. This ensures correct ordering of the constituent elements.</Paragraph> <Paragraph position="6"> For each descriptor, a search of the realization rules is performed to find any statements which realize the descriptor. For example, for the descriptor unit(cl) we would search for all occurrences of is_filled_by Cl in the realization rules. After the search is complete there will be a set of potentially active rules for each descriptor, with each potentially active rule containing a suitable statement (or possibly more than one suitable statement) that would realize the descriptor. Associated with each potentially active rule is a set of preconditions, one precondition for each suitable statement. (If the statement has no precondition, then the set of preconditions will be empty.)</Paragraph> <Paragraph position="7"> Decomposition 2 \[Descriptor Realization\] Proving that a descriptor can be realized is decomposed into (i) activating any one of its potentially active rules, and (ii) proving that any one of the preconditions associated with that rule holds true (assuming there are any preconditions). This decomposition is illustrated in Figure 8.</Paragraph> <Paragraph position="8"> What is required for a rule to be active? A rule can only be active if there is a feasible path (feasible in the sense that all features on that path can be selected) through the system network to a feature associated with that rule. There will be at least one potential path to each rule; 'at least' is required since a rule can have more than one potential path if (i) it is associated with more than one feature or (ii) it is dependent upon a disjunctive entry condition. Thus the potential paths to a rule can be represented as a boolean expression, i.e. as an exclusive disjunction of conjunctions: path1 xor path2 xor ... xor pathN, where the path components are conjunctions composed from the features in each potential path and the exclusive disjunction represents choice between potential paths. This expression can be simplified into an expression of the form: common and (variant1 xor ... xor variantN), where common is a conjunction of features that is common to all potential paths and variant1 ... variantN are the conjunctions of features peculiar to each potential path. This expression can be considered as a precondition for rule activation and so must be true for the rule to be active.</Paragraph> <Paragraph position="9"> Decomposition 3 \[Rule Activation\] Rule activation is achieved by activating one of the potential paths to that rule. The potential paths can be decomposed into a common component and a number of variant components. One potential path is active if and only if the common component is active and its variant component is also active. This decomposition is illustrated in Figure 9.</Paragraph>
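<Paragraph> The simplification of the path expression into a common component and variants can be sketched as follows (a minimal Prolog illustration; each potential path is represented here simply as a list of features).

    :- use_module(library(lists)).   % intersection/3, subtract/3
    :- use_module(library(apply)).   % foldl/4, maplist/3

    % factor_paths(+Paths, -Common, -Variants): Common is the conjunction of
    % features shared by every potential path; each Variant is what remains of
    % the corresponding path once the common features are removed.
    factor_paths([First|Rest], Common, Variants) :-
        foldl(intersection, Rest, First, Common),
        maplist(strip_common(Common), [First|Rest], Variants).

    strip_common(Common, Path, Variant) :-
        subtract(Path, Common, Variant).

    % ?- factor_paths([[a, b, c], [a, b, d]], Common, Variants).
    % Common = [a, b], Variants = [[c], [d]],
    % i.e. (a and b and c) xor (a and b and d) = (a and b) and (c xor d).
</Paragraph>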
<Paragraph position="10"> At this stage in the decomposition all tip nodes are problems of the form 'boolean expression = T', i.e. satisfiability problems. In order to define how satisfiability problems are solved, the concept of a truth function must first be introduced. A truth function is a mapping between features and a three-valued logic (with truth values false, true and undefined) which defines the value of each feature; true indicating selected and false indicating not selected. The problem of satisfiability for a boolean expression involves finding a truth function such that the boolean expression evaluates7 to be true; such a truth function is called a satisfying truth function for the boolean expression. Unfortunately, this is an intractable problem since a disjunction with n disjuncts can have 3^n - 1 potential satisfying truth functions, i.e. the task is exponential in problem size (in fact, the problem of satisfiability has the honor of being the first NP-Complete problem; see Garey and Johnson \[11\]). As is the case with inherently intractable problems, it is not worth searching for an efficient, exact algorithm to perform the task; it is more appropriate to consider a less ambitious problem. (Footnote 7: Evaluation is performed with respect to Kleene's three-valued logic \[15\].)</Paragraph> <Paragraph position="11"> Consider the problem of partial satisfiability: it is essentially the same as satisfiability except that it does not attempt to satisfy disjunctive components (since it is these components which make the satisfiability problem 'hard'). It requires a redefinition of a truth function: in partial satisfiability a truth function is a mapping between boolean expressions (rather than simply features) and truth values. For example, the expression (a or b) and c and not d is partially satisfied by the function v: v(a or b)=T, v(c)=T, v(d)=F. Since disjunctions are effectively ignored, there is a single unique partially satisfying truth function for any boolean expression. Thus tip nodes labelled with satisfiability problems are (partially) decomposed into a number of and-nodes, each and-node being labelled by a feature (possibly negated) or a disjunction which must be true.</Paragraph>
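<Paragraph> A minimal Prolog sketch of partial satisfiability over an and/or/not term encoding (again an invented encoding, not REVELATION1's internal one): conjunctions are decomposed, a negated feature is assigned false, a plain feature is assigned true, and a disjunction is assigned true as a whole without being analysed.

    :- use_module(library(lists)).   % append/3

    partially_satisfy(and(A, B), Assignment) :-
        partially_satisfy(A, As1),
        partially_satisfy(B, As2),
        append(As1, As2, Assignment).
    partially_satisfy(or(A, B), [assign(or(A, B), t)]).   % disjunction kept whole
    partially_satisfy(not(F),   [assign(F, f)]) :- atom(F).
    partially_satisfy(F,        [assign(F, t)]) :- atom(F).

    % The example above:
    % ?- partially_satisfy(and(or(a, b), and(c, not(d))), V).
    % V = [assign(or(a, b), t), assign(c, t), assign(d, f)].
</Paragraph>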
<Paragraph position="12"> As an example of a fully decomposed AO tree for unit interpretation, consider the skeleton tree in Figure 10, which defines the potential interpretations of the clause in Figure 1. A potential solution of an AO tree is specified by a subset of the leaves that are necessary for the root task to be achieved. A backtracking search is used to generate potential solutions: since the AO tree is finite, it is possible to inspect the structure of the tree and order the search in such a way that the minimum amount of backtracking is performed.8 (Footnote 8: The search scheme involves incrementally evaluating and pruning the AO tree. Full details can be found in O'Donoghue \[19\], an in-depth report on the interpreter.) A feasible interpretation for a unit is one of the potential solutions in which the statements labelling the leaves are all consistent. There are two ways in which a leaf statement may be inconsistent: * The statements logically contradict, for example: leaves labelled a = T and a = F, or b or c = T and b = F, c = F.</Paragraph> <Paragraph position="13"> * The statements systemically contradict, for example: a = T and b = T when a and b are members of the same choice system (from which at most one feature can be chosen).</Paragraph> <Paragraph position="14"> The leaf statements in a feasible solution define which features must be true (i.e. selected) for the unit to be realized, i.e. they specify a selection expression for the unit.</Paragraph>
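<Paragraph> The search for feasible solutions can be sketched as follows (minimal Prolog; the and_node/or_node/leaf encoding is invented, and same_system/2 relies on the system/3 facts assumed in the earlier network sketch).

    :- use_module(library(lists)).   % member/2, append/2
    :- use_module(library(apply)).   % maplist/3

    % solve(+AOTree, -Solution): an and-node needs all of its children solved,
    % an or-node exactly one; a solution is the set of leaf assignments used.
    solve(leaf(Assign), [Assign]).
    solve(and_node(Children), Solution) :-
        maplist(solve, Children, Parts),
        append(Parts, Solution).
    solve(or_node(Children), Solution) :-
        member(Child, Children),
        solve(Child, Solution).

    % feasible(+Solution): no logical contradiction (something both true and
    % false) and no systemic contradiction (two features chosen from one system).
    feasible(Solution) :-
        \+ ( member(assign(X, t), Solution), member(assign(X, f), Solution) ),
        \+ ( member(assign(F1, t), Solution), member(assign(F2, t), Solution),
             F1 \== F2, same_system(F1, F2) ).

    same_system(F1, F2) :-
        system(_, _, Features),
        member(F1, Features),
        member(F2, Features).

    % A feasible unit interpretation is then any Solution for which
    % solve(Tree, Solution), feasible(Solution) succeeds; backtracking into
    % solve/2 enumerates the alternatives.
</Paragraph>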
<Section position="1" start_page="135" end_page="137" type="sub_section"> <SectionTitle> Results and Discussion </SectionTitle> <Paragraph position="0"> The semantic representation found by REVELATION1 for the syntax tree in Figure 1 is presented in Figure 11. \[Figure 11: a selection expression for the clause (including features such as situation, congruent-sit, information, seeker, period-marked, material, two-role-process, agent-centred-trp, changing-configurational-state, changing-positional-configuration, changing-state-of-openness, make-open), together with selection expressions for the ngp filling S and the ngp filling C2.\] We can get a flavor of the semantic representation by identifying key features: the sentence is a question (information, seeker) about an on-going event (period-marked) which involves opening something (make-open). It is a person that we are seeking (person-sst). The thing that is being opened is selected by superlativization (in fact it is the biggest) and it is recoverable from the previous discourse (the &quot;one&quot; referring to something that has previously been mentioned).</Paragraph> <Paragraph position="1"> Unfortunately this semantic representation is incomplete. One of the factors contributing to this incompleteness is that of 'unrealized' selections: these are features which have no associated realization (e.g. not-expect in Figure 2). Consider the way in which interpretation works: it tries to prove that observed realizations have taken place and in so doing infer the features that were selected. However, if a realization does not take place as a (not necessarily direct) result of a selection, then there is no way to infer that the unrealized feature was selected. Consider the POLARITY system, where there is a choice between positive and negative. The positive choice is unrealized, whereas negative is associated with the realization rule: 17: negative: do_support, O <+ &quot;n't&quot;. Suppose we have a positive clause, &quot;Who is opening the biggest one?&quot; (the example sentence from Figure 1). There is no way to tell from the structure of the clause that the sentence is positive. However, by inspecting the realization rule associated with negative we find that the (unconditional) statement O<+&quot;n't&quot; can never be active as this would realize the sentence &quot;Who isn't opening the biggest one?&quot;. Thus rule 17 can never be active and so negative can't be chosen, hence positive must be chosen. Thus by a process of eliminating features which cannot be true, it is possible to determine unrealized features. REVELATION2 (currently under development) will attempt to implement this process of elimination by moving forward through the system network (after a partial semantic representation has been obtained), systematically verifying any realization rules that it meets, eliminating features that cannot be chosen and so possibly inferring something about unrealized features which need to be chosen.</Paragraph> <Paragraph position="2"> The other factor contributing to incompleteness is the definition of partial satisfiability. Some features in the network do not have realization rules attached to them: typically these appear as conditions on statements in realization rules. However, if any of these features appear in disjunctive conditions or disjunctive entry conditions to systems, then nothing can be inferred about their values since the definition of partial satisfiability ignores all disjunctions. This problem can only be overcome by searching for (exact) satisfying truth functions from which all feature values can be inferred. REVELATION2 attempts to solve this problem by deferring the search for exact satisfying truth functions until after a partial interpretation has been obtained: the partial interpretation is obtained as for REVELATION1, and then any disjunctions that have been ignored while obtaining this interpretation are exactly satisfied to try and fill the gaps in the partial interpretation.</Paragraph> <Paragraph position="3"> In addition to the work on the next generation of interpreter, some work is being carried out on PG1.5 to make it more efficient; it is being tuned for interpretation by simplifying and normalizing conditions in the realization rules. This involves 'tightening up' the conditions by substituting xor for or wherever possible and reducing the scope of any valid disjunctions that remain (e.g. a and (b or c) rather than a and b or a and c).</Paragraph>
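<Paragraph> A minimal sketch of this 'tightening' rewrite, in the same invented term encoding used in the earlier sketches: a conjunct shared by both arms of a disjunction is pulled outside it, shrinking the scope of the disjunction.

    % tighten(+Expr, -Tightened): factor a shared conjunct out of a disjunction.
    tighten(or(and(A, B), and(A, C)), and(A, or(B, C))) :- !.
    tighten(Expr, Expr).

    % ?- tighten(or(and(a, b), and(a, c)), T).
    % T = and(a, or(b, c)),  i.e. a and (b or c) rather than (a and b) or (a and c).
</Paragraph>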
<Paragraph position="4"> Clearly efficiency is a problem, since the AO trees explode into or-nodes through the use of (i) disjunctive entry conditions in the system network and (ii) disjunctive conditions in the realization rules. It has been proved (Brew \[3\]) that systemic classification is NP-Hard and is thus inherently intractable. This led to the choice of partial satisfiability rather than exact satisfiability (which itself is NP-Hard) in REVELATION1, and the development of an efficient technique for searching the AO trees which utilizes incremental evaluation and pruning to reduce the number of backtracking points (full details in O'Donoghue \[19\]). The delayed use of exact satisfiability being investigated in REVELATION2 is similar to Brew's 'partial' algorithm for checking systemic descriptions. Brew's checking algorithm has two stages, the first being a simplification stage in which all disjunctive entry conditions are eliminated from the system network by replacing them with a uniquely generated feature. The resultant simplified network can then be searched efficiently. The second stage involves checking all of the features generated in the first stage, each generated feature referring to a disjunctive entry condition. Here also we find a delaying tactic in which disjunctions are satisfied as late as possible in the search.</Paragraph> <Paragraph position="5"> Although no theoretical calculations as to the complexity of the REVELATION1 method and PG1.5 have been undertaken, we can get a feel for the scale of the problem by considering the amount of CPU time that was needed to obtain the semantic representation of Figure 11. On a SPARCstation 1 with 16M this task required ~25 CPU seconds, with this time being halved on a Sun 4/490 with 32M. When reading these timings bear in mind that REVELATION1 is coded in POPLOG languages which are incrementally compiled into an interpreted virtual machine code.</Paragraph> <Paragraph position="6"> Conclusions. REVELATION1 has demonstrated that Fawcett's SG is a bi-directional formalism, although in the case of PG1.5, some reorganization was required to make it run in reverse. The main problem in reversing a SG seems to be that systemic grammars are written from a predominantly text-generation viewpoint. In developing their grammars systemic grammarians are concerned with how the grammar will generate rather than its suitability for interpretation. For instance, special care ought to be taken when expressing conditions in realization rules: writing a xor b rather than a or b can be a godsend to an interpreter. Similarly, simplifying and normalizing conditions so that they are as simple and specific as possible is a great aid to interpretation. It reduces the search space and hence speeds up interpretation -- even though, from a generative point of view, it may make more sense to express a condition in a 'long winded' fashion that captures the generalization the linguist is attempting to make.</Paragraph> <Paragraph position="7"> Fawcett states (private communication) that in developing his version of SG (which is different in a number of ways -- especially in the realization component -- from the NIGEL grammar of Mann and Matthiessen) he always had in mind its potential for reversibility. Perhaps the surprising thing is not that modifications in GENESYS are indicated by work on REVELATION, but how few modifications appear to be required. Clearly, the need now is for close collaboration between the builders of the successor versions of GENESYS and the successor versions of REVELATION. Current research has precisely this goal; and we shall report the results in due course.</Paragraph> <Paragraph position="8"> REVELATION1 combined with Fawcett's SG appears to be a step in the right direction towards a bi-directional systemic grammar.
REVELATION2 may take a step closer. But a bi-directional systemic grammar will only be achieved when interpretation-minded people and generation-minded people get together and collaborate in developing such a grammar.</Paragraph> </Section> </Section> </Paper>