File Information
File: 05-lr/acl_arc_1_sum/cleansed_text/xml_by_section/metho/79/p79-1010_metho.xml
Size: 30,616 bytes
Last Modified: 2025-10-06 14:11:17
<?xml version="1.0" standalone="yes"?> <Paper uid="P79-1010"> <Title>Semantics of Conceptual Graphs</Title>
<Section position="2" start_page="0" end_page="39" type="metho"> <SectionTitle> 2. Conceptual Graphs </SectionTitle>
<Paragraph position="0"> The following conceptual graph shows the concepts and relationships in the sentence &quot;Mary hit the piggy bank with a hammer.&quot; The boxes are concepts and the circles are conceptual relations. Inside each box or circle is a type label that designates the type of concept or relation. The conceptual relations labeled AGNT, INST, and PTNT represent the linguistic cases agent, instrument, and patient of case grammar.</Paragraph>
<Paragraph position="1"> (Diagram omitted: boxes for PERSON: Mary, HIT, HAMMER, and PIGGY-BANK: i22103 linked by circles for AGNT, INST, and PTNT.) Conceptual graphs are a kind of semantic network. See Findler (1979) for surveys of a variety of such networks that have been used in AI. The diagram above illustrates some features of the conceptual graph notation:
* Some concepts are generic. They have only a type label inside the box, e.g. HIT or HAMMER.
* Other concepts are individual. They have a colon after the type label, followed by a name (Mary) or a unique identifier called an individual marker (i22103).</Paragraph>
<Paragraph position="2"> To keep the diagram from looking overly busy, the hierarchy of types and subtypes is not drawn explicitly, but is determined by a separate partial ordering of type labels. The type labels are used by the formation rules to enforce selection constraints and to support the inheritance of properties from a supertype to a subtype.</Paragraph>
<Paragraph position="3"> For convenience, the diagram could be linearized by using square brackets for concepts and parentheses for conceptual relations:
\[PERSON:Mary\]<-(AGNT)<-\[HIT:c1\]->(INST)->\[HAMMER\]
\[HIT:c1\]->(PTNT)->\[PIGGY-BANK:i22103\]
Linearizing the diagram requires a coreference index, c1, on the generic concept HIT. The index shows that the two occurrences designate the same act of hitting. If HIT had been an individual concept, its name or individual marker would be sufficient to indicate the same act.</Paragraph>
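To make the notation above concrete, here is a minimal sketch in Python of how the concepts, conceptual relations, and the example graph might be represented. The class and field names (Concept, Relation, Graph, type_label, referent) are illustrative assumptions, not constructs from the paper; sharing one Concept object plays the role of the coreference index c1.

# Minimal sketch of conceptual-graph data structures; names are illustrative.
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Concept:
    type_label: str                  # e.g. "PERSON", "HIT"
    referent: Optional[str] = None   # name or individual marker; None means generic

@dataclass
class Relation:
    type_label: str                  # e.g. "AGNT", "INST", "PTNT"
    args: List[Concept] = field(default_factory=list)  # arcs 1..n in order

@dataclass
class Graph:
    relations: List[Relation] = field(default_factory=list)

# "Mary hit the piggy bank with a hammer."
mary   = Concept("PERSON", "Mary")
hit    = Concept("HIT")                  # generic; coreference handled by sharing this object
hammer = Concept("HAMMER")
bank   = Concept("PIGGY-BANK", "i22103")

g = Graph([
    Relation("AGNT", [hit, mary]),       # arc 1: the act, arc 2: the agent
    Relation("INST", [hit, hammer]),
    Relation("PTNT", [hit, bank]),
])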
<Paragraph position="4"> Besides the features illustrated in the diagram, the theory of conceptual graphs includes the following:
* For any particular domain of discourse, a specially designated set of conceptual graphs called the canon,
* Four canonical formation rules for deriving new canonical graphs from any given canon,
* A method for defining new concept types: some canonical graph is specified as the differentia and a concept in that graph is designated the genus of the new type,
* A method for defining new types of conceptual relations: some canonical graph is specified as the relator and one or more concepts in that graph are specified as parameters,
* A method for defining composite entities as structures having other entities as parts,
* Optional quantifiers on generic concepts,
* Scope of quantifiers specified either by embedding them inside type definitions or by linking them with functional dependency arcs,
* Procedural attachments associated with the functional dependency arcs,
* Control marks that determine when attached procedures should be invoked.</Paragraph>
<Paragraph position="5"> These features have been described in the earlier papers; for completeness, the appendix recapitulates the axioms and definitions that are explicitly used in this paper.</Paragraph>
<Paragraph position="6"> Heidorn's (1972, 1975) Natural Language Processor (NLP) is being used to implement the theory of conceptual graphs. The NLP system processes two kinds of Augmented Phrase Structure rules: decoding rules parse language inputs and create graphs that represent their meaning, and encoding rules scan the graphs to generate language output. Since the NLP structures are very similar to conceptual graphs, much of the implementation amounts to identifying some feature or combination of features in NLP for each construct in conceptual graphs. Constructs that would be difficult or inefficient to implement directly in NLP rules can be supported by LISP functions. The inference algorithms in this paper, however, have not yet been implemented.</Paragraph> </Section>
<Section position="3" start_page="39" end_page="41" type="metho"> <SectionTitle> 3. Logical Connectives </SectionTitle>
<Paragraph position="0"> Canonical formation rules enforce the selection constraints in linguistics: they do not guarantee that all derived graphs are true, but they rule out semantic anomalies. In terms of graph grammars, the canonical formation rules are context-free. This section defines logical operations that are context-sensitive. They enforce tighter constraints on graph derivations, but they require more complex pattern matching. Formation rules and logical operations are complementary mechanisms for building models of possible worlds and checking their consistency. Sowa (1976) discussed two ways of handling logical operators in conceptual graphs: the abstract approach, which treats them as functions of truth values, and the direct approach, which treats implications, conjunctions, disjunctions, and negations as operations for building, splitting, and discarding conceptual graphs. That paper, however, merely mentioned the approach; this paper develops a notation adapted from Gentzen's sequents (1934), but with an interpretation based on Belnap's conditional assertions (1973) and with computational techniques similar to Hendrix's partitioned semantic networks (1975, 1979). Deliyanni and Kowalski (1979) used a similar notation for logic in semantic networks, but with the arrows reversed.</Paragraph>
<Paragraph position="1"> Definition: A sequent is a collection of conceptual graphs divided into two sets, called the conditions u1,...,un and the assertions v1,...,vm. It is written u1,...,un -> v1,...,vm. Several special cases are distinguished:
* A simple assertion has no conditions and only one assertion: -> v.</Paragraph>
<Paragraph position="2"> * A disjunction has no conditions and two or more assertions: -> v1,...,vm.</Paragraph>
<Paragraph position="3"> * A simple denial has only one condition and no assertions: u ->.</Paragraph>
<Paragraph position="4"> * A compound denial has two or more conditions and no assertions: u1,...,un ->.</Paragraph>
<Paragraph position="5"> * A conditional assertion has one or more conditions and one or more assertions: u1,...,un -> v1,...,vm.
* An empty clause has no conditions or assertions: ->.</Paragraph>
<Paragraph position="6"> * A Horn clause has at most one assertion; i.e. it is either an empty clause, a denial, a simple assertion, or a conditional assertion of the form u1,...,un -> v.</Paragraph>
<Paragraph position="7"> For any concept a in an assertion vi, there may be a concept b in a condition uj that is declared to be coreferent with a.</Paragraph>
<Paragraph position="8"> Informally, a sequent states that if all of the conditions are true, then at least one of the assertions must be true. A sequent with no conditions is an unconditional assertion; if there are two or more assertions, it states that one must be true, but it doesn't say which. Multiple assertions are necessary for generality, but in deductions, they may cause a model to split into models of multiple alternative worlds. A sequent with no assertions denies that the combination of conditions can ever occur. The empty clause is an unconditional denial; it is self-contradictory. Horn clauses are special cases for which deductions are simplified: they have no disjunctions that cause models of the world to split into multiple alternatives.</Paragraph>
<Paragraph position="9"> Definition: Let C be a collection of canonical graphs, and let s be the sequent u1,...,un -> v1,...,vm.</Paragraph>
<Paragraph position="10"> * If every condition graph is covered by some graph in C, then the conditions are said to be satisfied.</Paragraph>
<Paragraph position="11"> * If some condition graph is not covered by any graph in C, then the sequent s is said to be inapplicable to C.</Paragraph>
<Paragraph position="12"> If n=0 (there are no conditions), then the conditions are trivially satisfied.</Paragraph>
<Paragraph position="13"> A sequent is like a conditional assertion in Belnap's sense: when its conditions are not satisfied, it asserts nothing. But when they are satisfied, the assertions must be added to the current context. The next axiom states how they are added. Axiom: Let C be a collection of canonical graphs, and let s be the sequent u1,...,un -> v1,...,vm. If the conditions of s are satisfied by C, then s may be applied to C as follows:
* If m=0 (a denial or the empty clause), the collection C is said to be blocked.</Paragraph>
<Paragraph position="14"> * If m=1 (a Horn clause), a copy of each graph ui is joined to some graph in C by a covering join. Then the assertion v is added to the resulting collection C'.</Paragraph>
<Paragraph position="15"> * If m>=2, a copy of each graph ui is joined to some graph in C by a covering join. Then all graphs in the resulting collection C' are copied to make m disjoint collections identical to C'. Finally, for each j from 1 to m, the assertion vj is added to the j-th copy of C'. After an assertion v is added to one of the collections C', each concept a in v that was declared to be coreferent with some concept b in one of the conditions ui is joined to the concept to which b was joined.</Paragraph>
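As a rough illustration of the axiom above, the following Python sketch shows how applying a sequent either leaves a context unchanged, blocks it, extends it, or splits it. All names here are hypothetical, and graph covering and joining are reduced to set membership and set union over whole graphs, which is a deliberate simplification of the covering joins defined in the appendix.

# Sketch of applying a sequent to a context.  A "context" is modeled as a
# frozenset of graphs and "covered by" as simple membership -- a crude
# stand-in for covering joins, just to show the control flow.
from dataclasses import dataclass
from typing import FrozenSet, List, Optional

Graph = str  # placeholder for a real conceptual-graph structure

@dataclass(frozen=True)
class Sequent:
    conditions: FrozenSet[Graph]
    assertions: FrozenSet[Graph]

def apply_sequent(context: FrozenSet[Graph], s: Sequent) -> Optional[List[FrozenSet[Graph]]]:
    """Return None if the context is blocked, otherwise the list of
    successor contexts (one per assertion, or the context itself if
    the sequent is inapplicable)."""
    if not s.conditions <= context:          # some condition not covered: inapplicable
        return [context]
    if not s.assertions:                      # denial or empty clause: context blocked
        return None
    # one successor context per assertion; a Horn clause yields exactly one
    return [context | {v} for v in s.assertions]

# Hypothetical law: "a piggy bank contains money or it is empty"
law = Sequent(frozenset({"\[PIGGY-BANK\]"}),
              frozenset({"\[PIGGY-BANK\]->(CONT)->\[MONEY\]", "\[PIGGY-BANK\]->(STAT)->\[EMPTY\]"}))
print(apply_sequent(frozenset({"\[PIGGY-BANK\]"}), law))   # splits into two successor contexts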
<Paragraph position="16"> When a collection of graphs is inconsistent with a sequent, they are blocked by it. If the sequent represents a fundamental law about the world, then the collection represents an impossible situation. When there is only one assertion in an applicable sequent, the collection is extended. But when there are two or more assertions, the collection splits into as many successors as there are assertions; this splitting is typical of algorithms for dealing with disjunctions. The rules for applying sequents are based on Beth's semantic tableaux (1955), but the computational techniques are similar to typical AI methods of production rules, demons, triggers, and monitors. Deliyanni and Kowalski (1979) relate their algorithms for logic in semantic networks to the resolution principle. This relationship is natural because a sequent whose conditions and assertions are all atoms is equivalent to the standard clause form for resolution. But since the sequents defined in this paper may be arbitrary conceptual graphs, they can package a much larger amount of information in each graph than the low-level atoms of ordinary resolution. As a result, many fewer steps may be needed to answer a question or do plausible inferences.</Paragraph>
<Paragraph position="17"> 4. Laws, Facts, and Possible Worlds
Infinite families of possible worlds are computationally intractable, but Dunn (1973) showed that they are not needed for the semantics of modal logic. He considered each possible world w to be characterized by two sets of propositions: laws L and facts F. Every law is also a fact, but some facts are merely contingently true and are not considered laws. A proposition p is necessarily true in w if it follows from the laws of w, and it is possible in w if it is consistent with the laws of w. Dunn proved that semantics in terms of laws and facts is equivalent to the possible worlds semantics.</Paragraph>
<Paragraph position="18"> Dunn's approach to modal logic can be combined with Hintikka's surface models and AI methods for handling defaults. Instead of dealing with an infinite set of possible worlds, the system can construct finite, but extendible surface models. The basis for the surface models is a canon that contains the blueprints for assembling models and a set of laws that must be true for each model. The laws impose obligatory constraints on the models, and the canon contains common background information that serves as a heuristic for extending the models.</Paragraph>
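Dunn's characterization of a world by laws and facts can be paraphrased directly in code. The sketch below is a Python illustration under stated assumptions: the predicate names and the trivially simple entails function are placeholders, not the paper's machinery, and a real system would use the sequent apparatus of Section 3 instead of membership tests.

# A world characterized by laws and facts (Dunn 1973), with necessity and
# possibility defined over the laws alone.  "entails" is a stand-in that only
# checks membership.
from dataclasses import dataclass
from typing import FrozenSet

Prop = str

@dataclass(frozen=True)
class World:
    laws: FrozenSet[Prop]
    facts: FrozenSet[Prop]   # every law is also a fact

def entails(premises: FrozenSet[Prop], p: Prop) -> bool:
    return p in premises     # placeholder for real deduction

def necessarily(w: World, p: Prop) -> bool:
    return entails(w.laws, p)

def possibly(w: World, p: Prop) -> bool:
    # consistent with the laws: the laws do not entail the negation of p
    return not entails(w.laws, "not " + p)

w = World(laws=frozenset({"piggy banks contain money"}),
          facts=frozenset({"piggy banks contain money", "Mary hit the piggy bank"}))
print(necessarily(w, "piggy banks contain money"), possibly(w, "Mary gets her allowance"))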
<Paragraph position="19"> An initial surface model would start as a canonical graph or collection of graphs that represent a given set of facts in a sentence or story. Consider the story: Mary hit the piggy bank with a hammer. She wanted to go to the movies with Janet, but she wouldn't get her allowance until Thursday. And today was only Tuesday.</Paragraph>
<Paragraph position="20"> The first sentence would be translated to a conceptual graph like the one in Section 2. Each of the following sentences would be translated into other conceptual graphs and joined to the original graph. But the story as stated is not understandable without a lot of background information: piggy banks normally contain money; piggy banks are usually made of pottery that is easily broken; going to the movies requires money; an allowance is money; and Tuesday precedes Thursday. Charniak (1972) handled such stories with demons that encapsulate knowledge: demons normally lie dormant, but when their associated patterns occur in a story, they wake up and apply their piece of knowledge to the process of understanding. Similar techniques are embodied in production systems, languages like PLANNER (Hewitt 1972), and knowledge representation systems like KRL (Bobrow & Winograd 1977). But the trouble with demons is that they are unconstrained: anything can happen when a demon wakes up, no theorems are possible about what a collection of demons can or cannot do, and there is no way of relating plausible reasoning with demons to any of the techniques of standard or non-standard logic.</Paragraph>
<Paragraph position="21"> With conceptual graphs, the computational overhead is about the same as with related AI techniques, but the advantage is that the methods can be analyzed by the vast body of techniques that have been developed in logic. The graph for &quot;Mary hit the piggy bank with a hammer&quot; is a nucleus around which an infinite number of possible worlds can be built. Two individuals, Mary and PIGGY-BANK:i22103, are fixed, but the particular act of hitting, the hammer Mary used, and all other circumstances are undetermined. As the story continues, some other individuals may be named, graphs from the canon may be joined to add default information, and laws of the world in the form of sequents may be triggered (like demons) to enforce constraints. The next definition introduces the notion of a world basis that provides the building material (a canon) and the laws (sequents) for such a family of possible worlds.</Paragraph>
<Paragraph position="22"> Definition: A world basis has three components: a canon C, a finite set of sequents L called laws, and one or more finite collections of canonical graphs {C1,...,Cn} called contexts. No context Ci may be blocked by any law in L.</Paragraph>
<Paragraph position="23"> A world basis is a collection of nuclei from which complete possible worlds may evolve. The contexts are like Hintikka's surface models: they are finite, but extendible. The graphs in the canon provide default or plausible information that can be joined to extend the contexts, and the laws are constraints on the kinds of extensions that are possible.</Paragraph>
<Paragraph position="24"> When a law is violated, it blocks a context as a candidate for a possible world. A default, however, is optional; if contradicted, a default must be undone, and the context restored to the state before the default was applied. In the sample story, the next sentence might continue: &quot;The piggy bank was made of bronze, and when Mary hit it, a genie appeared and gave her two tickets to Animal House.&quot; This continuation violates all the default assumptions; it would be unreasonable to assume it in advance, but once given, it forces the system to back up to a context before the defaults were applied and join the new information to it. Several practical issues arise: how much backtracking is necessary, how is the world basis used to develop possible worlds, and what criteria are used to decide when to stop the (possibly infinite) extensions. The next section suggests an answer.</Paragraph>
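The definition of a world basis and the undo-a-default behaviour described above suggest a simple representation. The Python sketch below is illustrative only: the class and method names are assumptions, laws are modeled as blocking predicates rather than sequents, and a snapshot stack stands in for whatever backtracking mechanism a real system would use.

# Sketch of a world basis: a canon, a set of laws, and a context.  Defaults from
# the canon are applied tentatively; a snapshot stack lets the system back up
# when later information contradicts a default.
from typing import Callable, FrozenSet, List

Graph = str
Context = FrozenSet[Graph]

class WorldBasis:
    def __init__(self, canon: List[Graph], laws: List[Callable[[Context], bool]],
                 context: Context) -> None:
        self.canon = canon
        self.laws = laws                     # each law returns True if it blocks the context
        self.context = context
        self.snapshots: List[Context] = []   # states saved before applying defaults

    def blocked(self, ctx: Context) -> bool:
        return any(law(ctx) for law in self.laws)

    def apply_default(self, graph: Graph) -> bool:
        """Tentatively join a canon graph; reject it immediately if a law blocks it."""
        self.snapshots.append(self.context)
        candidate = self.context | {graph}
        if self.blocked(candidate):
            self.snapshots.pop()             # default rejected, nothing changes
            return False
        self.context = candidate
        return True

    def backtrack(self) -> None:
        """Restore the context to the state before the most recent default."""
        if self.snapshots:
            self.context = self.snapshots.pop()

# e.g. assume pottery by default, retract when the story says the bank is bronze
wb = WorldBasis(canon=["\[PIGGY-BANK\]->(MATR)->\[POTTERY\]"],
                laws=[lambda ctx: "\[PIGGY-BANK\]->(MATR)->\[POTTERY\]" in ctx and
                                  "\[PIGGY-BANK\]->(MATR)->\[BRONZE\]" in ctx],
                context=frozenset({"\[PIGGY-BANK\]"}))
wb.apply_default(wb.canon[0])
wb.backtrack()                               # the bronze continuation forces the undo
wb.context = wb.context | {"\[PIGGY-BANK\]->(MATR)->\[BRONZE\]"}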
</Section> <Section position="4" start_page="41" end_page="43" type="metho"> <SectionTitle> 5. Game Theoretic Semantics </SectionTitle>
<Paragraph position="0"> The distinction between optional defaults and obligatory laws is reminiscent of the AND-OR trees that often arise in AI, especially in game playing programs. In fact, Hintikka (1973, 1974) proposed a game theoretic semantics for testing the truth of a formula in terms of a model and for elaborating a surface model in which that formula is true. Hintikka's approach can be adapted to elaborating a world basis in much the same way that a chess playing program explores the game tree:
* Each context represents a position in the game.</Paragraph>
<Paragraph position="1"> * The canon defines possible moves by the current player.
* Conditional assertions are moves by the opponent.</Paragraph>
<Paragraph position="2"> * Denials are checkmating moves by the opponent.</Paragraph>
<Paragraph position="3"> * A given context is consistent with the laws if there exists a strategy for avoiding checkmate.</Paragraph>
<Paragraph position="4"> By following this suggestion, one can adapt the techniques developed for game playing programs to other kinds of reasoning in AI.</Paragraph>
<Paragraph position="5"> Definition: A game over a world basis W is defined by the following rules:
* There are two participants named Player and Opponent.
* For each context in W, Player has the first move.
* Player moves in context C either by joining two graphs in C or by selecting any graph in the canon of W that is joinable to some graph u in C and joining it maximally to u. If no joins are possible, Player passes. Then Opponent has the right to move in context C.</Paragraph>
<Paragraph position="6"> * Opponent moves by checking whether any denials in W are satisfied by C. If so, context C is blocked and is deleted from W. If no denials are satisfied, Opponent may apply any other sequent that is satisfied in C. If no sequent is satisfied, Opponent passes. Then Player has the right to move in context C.</Paragraph>
<Paragraph position="7"> * If no contexts are left in W, Player loses.</Paragraph>
<Paragraph position="8"> * If both Player and Opponent pass in succession, Player wins.</Paragraph>
<Paragraph position="9"> Player wins this game by building a complete model that is consistent with the laws and with the initial information in the problem. But like playing a perfect game of chess, the cost of elaborating a complete model is prohibitive. Yet a computer can play chess as well as most people do by using heuristics to choose moves and terminating the search after a few levels. To develop systematic heuristics for choosing which graphs to join, Sowa (1976) stated rules similar to Wilks' preference semantics (1975).</Paragraph>
<Paragraph position="10"> The amount of computation required to play this game might be compared to chess: a typical middle game in chess has about 30 or 40 moves on each side, and chess playing programs can consistently beat beginners by searching only 3 levels deep; they can play good games by searching 5 levels. The number of moves in a world basis depends on the number of graphs in the canon, the number of laws in L, and the number of graphs in each context. But for many common applications, 30 or 40 moves is a reasonable estimate at any given level, and useful inferences are possible with just a shallow search. The scripts applied by Schank and Abelson (1977), for example, correspond to a game with only one level of look-ahead; a game with two levels would provide the plausible information of scripts together with a round of consistency checks to eliminate obvious blunders.</Paragraph>
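The game rules above map naturally onto a bounded AND-OR search. The following Python sketch is an assumption-laden illustration: the move generators player_moves and opponent_moves are hypothetical stand-ins for the joins and sequent applications defined earlier, and the depth bound plays the role of the chess program's look-ahead limit.

# Depth-bounded sketch of the game over a world basis.  player_moves(ctx) should
# yield contexts reachable by joins from the canon; opponent_moves(ctx) should
# yield, for each applicable sequent, either None (a denial blocks the context)
# or the list of successor contexts.  Both generators are assumed, not defined here.
from typing import Callable, FrozenSet, Iterable, List, Optional

Graph = str
Context = FrozenSet[Graph]
PlayerMoves = Callable[[Context], Iterable[Context]]
OpponentMoves = Callable[[Context], Iterable[Optional[List[Context]]]]

def player_survives(ctx: Context, depth: int,
                    player_moves: PlayerMoves, opponent_moves: OpponentMoves) -> bool:
    """True if Player has a strategy that avoids checkmate within the depth bound."""
    if depth == 0:
        return True                                  # shallow search: assume survivable
    candidates = list(player_moves(ctx)) or [ctx]    # passing keeps the same context
    for nxt in candidates:                           # Player picks any one move (OR node)
        if all(outcome is not None and
               any(player_survives(c, depth - 1, player_moves, opponent_moves)
                   for c in outcome)
               for outcome in opponent_moves(nxt)):  # every Opponent reply (AND node)
            return True
    return False

# trivially, with no canon joins and no applicable sequents, every context survives
print(player_survives(frozenset({"\[PIGGY-BANK\]"}), 2, lambda c: [], lambda c: []))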
<Paragraph position="11"> By deciding how far to search the game tree, one can derive algorithms for plausible inference with varying levels of confidence. Rigorous deduction similar to model elimination (Loveland 1972) can be performed by starting with laws and a context that correspond to the negation of what is to be proved and showing that Opponent has a winning strategy. By similar transformations, methods of plausible and exact inference can be related as variations on a general method of reasoning.
6. Appendix: Summary of the Formalism
This section summarizes axioms, definitions, and theorems about conceptual graphs that are used in this paper. For a more complete discussion and for other features of the theory that are not used here, see the earlier articles by Sowa (1976, 1978).</Paragraph>
<Paragraph position="12"> which must be attached to a concept. If the relation has n arcs, it is said to be n-adic, and its arcs are labeled 1, 2,...,n. The most common conceptual relations are dyadic (2-adic), but the definition mechanisms can create ones with any number of arcs. Although the formal definition says that the arcs are numbered, for dyadic relations arc 1 is drawn as an arrow pointing towards the circle, and arc 2 as an arrow pointing away from the circle.</Paragraph>
<Paragraph position="13"> Axiom 1: There is a set T of type labels and a function type, which maps concepts and conceptual relations into T.</Paragraph>
<Paragraph position="14"> * If type(a)=type(b), then a and b are said to be of the same type.
* Type labels are partially ordered: if type(a)<=type(b), then a is said to be a subtype of b.</Paragraph>
<Paragraph position="15"> * Type labels of concepts and conceptual relations are disjoint, noncomparable subsets of T: if a is a concept and r is a conceptual relation, then a and r may never be of the same type, nor may one be a subtype of the other.</Paragraph>
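A small sketch may help make Axiom 1 concrete. In the Python fragment below, the sample hierarchy and the helper name is_subtype are illustrative assumptions, not the paper's type lattice; the partial ordering of type labels is stored as direct supertype edges and the subtype relation is tested through the reflexive, transitive closure.

# Sketch of Axiom 1: a partially ordered set of type labels.
from typing import Dict, Set

SUPERTYPES: Dict[str, Set[str]] = {      # direct supertype edges (sample data)
    "BEAGLE": {"DOG"},
    "DOG": {"ANIMAL"},
    "CAT": {"ANIMAL"},
    "ANIMAL": {"ENTITY"},
}

def is_subtype(t: str, u: str) -> bool:
    """True if type t <= type u in the reflexive, transitive closure of SUPERTYPES."""
    if t == u:
        return True
    return any(is_subtype(parent, u) for parent in SUPERTYPES.get(t, set()))

print(is_subtype("BEAGLE", "ANIMAL"))    # True: structures for ANIMAL are inherited by BEAGLE
print(is_subtype("CAT", "DOG"))          # False: the ordering is partial, not total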
<Paragraph position="16"> Axiom 2: There is a set I={i1, i2, i3,...} whose elements are called individual markers. The function referent applies to concepts: if a is a concept, then referent(a) is either an individual marker in I or the symbol @, which may be read any.</Paragraph>
<Paragraph position="17"> * When referent(a) is in I, then a is said to be an individual concept.
* When referent(a)=@, then a is said to be a generic concept.
In diagrams, the referent is written after the type label, separated by a colon. A concept of a particular cat could be written as \[CAT:i4133\]. A generic concept, which would refer to any cat, could be written \[CAT:@\] or simply \[CAT\]. In data base systems, individual markers correspond to the surrogates (Codd 1979), which serve as unique internal identifiers for external entities. The symbol @ is Codd's notation for null or unknown values in a data base. Externally printable or speakable names are related to the internal surrogates by the next axiom.</Paragraph>
<Paragraph position="18"> Axiom 3: There is a dyadic conceptual relation with type label NAME. If a relation of type NAME occurs in a conceptual graph, then the concept attached to arc 1 must be a subtype of WORD, and the concept attached to arc 2 must be a subtype of ENTITY. If the second concept is individual, then the first concept is called a name of that individual. The following graph states that the word &quot;Mary&quot; is the name of a particular person: \[&quot;Mary&quot;\]->(NAME)->\[PERSON:i3074\]. If there is only one person named Mary in the context, the graph could be abbreviated to just \[PERSON:Mary\]. The conformity relation says that the individual for which the marker i is a surrogate is of type t. In previous papers, the terms permissible or applicable were used instead of conforms to, but the present term and the symbol :: have been adopted from ALGOL-68. Suppose the individual marker i273 is a surrogate for a beagle named Snoopy. Then BEAGLE::i273 is true. By extension, one may also write the name instead of the marker, as BEAGLE::Snoopy. By axiom 4, Snoopy also conforms to all supertypes of BEAGLE. Two concepts a and b are joinable if the following properties are true:
* They are of the same type: type(a)=type(b).</Paragraph>
<Paragraph position="19"> * Either referent(a)=referent(b), referent(a)=@, or referent(b)=@.
Two star graphs with conceptual relations r and s are said to be joinable if r and s have the same number of arcs, type(r)=type(s), and for each i, the concept attached to arc i of r is joinable to the concept attached to arc i of s.</Paragraph>
<Paragraph position="20"> Not all combinations of concepts and conceptual relations are meaningful. Yet to say that some graphs are meaningful and others are not is begging the question, because the purpose of conceptual graphs is to form the basis of a theory of meaning. To avoid prejudging the issue, the term canonical is used for those graphs derivable from a designated set called the canon. For any given domain of discourse, a canon is defined that rules out anomalous combinations.</Paragraph>
<Paragraph position="21"> conceptual relations in T and with referents either @ or markers in I.</Paragraph>
<Paragraph position="22"> The number of possible canonical graphs may be infinite, but the canon contains a finite number from which all the others can be derived. With an appropriate canon, many undesirable graphs are ruled out as noncanonical, but the canonical graphs are not necessarily true. To ensure that only true graphs are derived from true graphs, the laws discussed in Section 4 eliminate inconsistent combinations.</Paragraph>
<Paragraph position="23"> Axiom 5: A conceptual graph is called canonical either if it is in the canon or if it is derivable from canonical graphs by one of the following canonical formation rules. Let u and v be canonical graphs (u and v may be the same graph).</Paragraph>
<Paragraph position="24"> * Copy: An exact copy of u is canonical.</Paragraph>
<Paragraph position="25"> * Restrict: Let a be a concept in u, and let t be a type label where t<=type(a) and t::referent(a). Then the graph obtained by changing the type label of a to t and leaving referent(a) unchanged is canonical.
* Join on a concept: Let a be a concept in u, and b a concept in v. If a and b are joinable, then the graph derived by the following steps is canonical: First delete b from v; then attach to a all arcs of conceptual relations that had been attached to b. If referent(a) is in I, then referent(a) is unchanged; otherwise, referent(a) is replaced by referent(b).</Paragraph>
<Paragraph position="26"> * Join on a star: Let r be a conceptual relation in u, and s a conceptual relation in v. If the star graphs of r and s are joinable, then the graph derived by the following steps is canonical: First delete s and its arcs from v; then for each i, join the concept attached to arc i of r to the concept that had been attached to arc i of s.</Paragraph>
<Paragraph position="27"> Restriction replaces a type label in a graph by the label of a subtype: this rule lets subtypes inherit the structures that apply to more general types. Join on a concept combines graphs that have concepts of the same type: one graph is overlaid on the other so that two concepts of the same type merge into a single concept; as a result, all the arcs that had been connected to either concept are connected to the single merged concept. Join on a star merges a conceptual relation and all of its attached concepts in a single operation.</Paragraph>
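The restrict and join operations just described can be sketched over the simple data structures used earlier. The Python code below is a sketch under stated assumptions: the class names mirror the earlier illustration, is_subtype is a stand-in for the type lattice of Axiom 1, and conformity (::) checking is omitted, so it is not a faithful implementation of the formation rules.

# Sketch of two canonical formation rules: Restrict and Join on a concept.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Concept:
    type_label: str
    referent: Optional[str] = None        # None plays the role of the generic marker @

@dataclass
class Relation:
    type_label: str
    args: List[Concept] = field(default_factory=list)

@dataclass
class Graph:
    relations: List[Relation] = field(default_factory=list)

def is_subtype(t: str, u: str) -> bool:   # stand-in for the type lattice of Axiom 1
    return t == u or (t, u) in {("GIRL", "PERSON"), ("HAMMER", "TOOL")}

def restrict(a: Concept, t: str) -> None:
    """Replace a's type label by a subtype t (conformity check omitted)."""
    assert is_subtype(t, a.type_label)
    a.type_label = t

def join_on_concept(g: Graph, a: Concept, h: Graph, b: Concept) -> Graph:
    """Merge concept b of graph h into concept a of graph g and combine the graphs."""
    assert a.type_label == b.type_label and (
        a.referent is None or b.referent is None or a.referent == b.referent)
    if a.referent is None:
        a.referent = b.referent           # a generic concept absorbs the individual referent
    for rel in h.relations:               # re-attach b's arcs to a
        rel.args = [a if c is b else c for c in rel.args]
    return Graph(g.relations + h.relations)

# [PERSON:Mary]<-(AGNT)<-[HIT] joined with [HIT]->(INST)->[HAMMER] on the HIT concepts
hit1, hit2 = Concept("HIT"), Concept("HIT")
g1 = Graph([Relation("AGNT", [hit1, Concept("PERSON", "Mary")])])
g2 = Graph([Relation("INST", [hit2, Concept("HAMMER")])])
joined = join_on_concept(g1, hit1, g2, hit2)
restrict(joined.relations[0].args[1], "GIRL")   # PERSON:Mary restricted to the subtype GIRL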
<Paragraph position="28"> Definition 6: Let v be a conceptual graph, let v' be a subgraph of v in which every conceptual relation has exactly the same arcs as in v, and let u be a copy of v' in which zero or more concepts may be restricted to subtypes. Then u is called a projection of v, and v' is called a projective origin of u in v.</Paragraph>
<Paragraph position="29"> The main purpose of projections is to define the rule of join on a common projection, which is a generalization of the rules for joining on a concept or a star.</Paragraph>
<Paragraph position="30"> Definition 7: If a conceptual graph u is a projection of both v and w, it is called a common projection of v and w.
Theorem 1: If u is a common projection of canonical graphs v and w, then v and w may be joined on the common projection u to form a canonical graph by the following steps: The concepts and conceptual relations in the resulting graph consist of those in v-v', w-w', and a copy of u, where v' and w' are the projective origins of u in v and in w.</Paragraph>
<Paragraph position="31"> Definition 8: If v and w are joined on a common projection u, then all concepts and conceptual relations in the projective origin of u in v and the projective origin of u in w are said to be covered by the join. In particular, if the projective origin of u in v includes all of v, then the entire graph v is covered by the join, and the join is called a covering join of v by w.
Definition 9: Let v and w be joined on a common projection u. The join is called extendible if there exist some concepts a in v and b in w with the following properties:
* The concepts a and b were joined to each other.</Paragraph>
<Paragraph position="32"> * a is attached to a conceptual relation r that was not covered by the join.</Paragraph>
<Paragraph position="33"> * b is attached to a conceptual relation s that was not covered by the join.</Paragraph>
<Paragraph position="34"> * The star graphs of r and s are joinable.</Paragraph>
<Paragraph position="35"> If a join is not extendible, it is called maximal.</Paragraph>
<Paragraph position="36"> The definition of maximal join given here is simpler than the one given in Sowa (1976), but it has the same result. Maximal joins have the effect of Wilks' preference rules (1975) in forcing a maximum connectivity of the graphs. Covering joins are used in Section 3 in the rules for applying sequents.</Paragraph>
<Paragraph position="37"> Theorem 2: Every covering join is maximal.</Paragraph>
<Paragraph position="38"> Sowa (1976) continued with further material on quantifiers and procedural attachments, and Sowa (1978) continued with mechanisms for defining new types of concepts, conceptual relations, and composite entities that have other entities as parts. Note that the terms sort, subsort, and well-formed in Sowa (1976) have now been replaced by the terms type, subtype, and canonical.</Paragraph> </Section>
<Section position="5" start_page="43" end_page="43" type="metho"> <SectionTitle> 7. Acknowledgment </SectionTitle>
<Paragraph position="0"> I would like to thank Charles Bontempo, Jon Handel, and George Heidorn for helpful comments on earlier versions of this paper.</Paragraph> </Section> </Paper>