<?xml version="1.0" standalone="yes"?>
<Paper uid="J79-1013">
  <Title>Conceptual Memory and Inference</Title>
  <Section position="1" start_page="0" end_page="0" type="metho">
    <SectionTitle>
ABSTRACT
</SectionTitle>
    <Paragraph position="0"> Ally theory ot languaya must also be a theory of inference and memory. It does not appear to be possible to &amp;quot;understand&amp;quot; even t~le simplest of utterances in a contextually meaningful way in a system in which language fails to interact with a language-free memory and belief system, or in a system w!lich,lacks a spontaneous inference reflex.</Paragraph>
    <Paragraph position="1"> People apply a tremendous amount of cognitive effort to understanding the meaning content of language in context. Yost of this effort is of the form of spontaneous conceptual inferences wnic:~ occur in a language-independent meaning environment. I have developed a theory of how humans process the meaning content of utterances In context. '~'11~ theory is called Conceptual Llcmory, and has been implemented l~y a computer program whic:~ is designed to accept as input analyzed Cnnceptual Dependency (Schank et al.) meaning graphs, to generate many conceptual inferences as automatic responses, then to identify points qf contact among those inferences in &amp;quot;infe~ence space&amp;quot;. Points of contact establish new patnways tilrough existing memory structures, and hence &amp;quot;knit&amp;quot; each utterance in witn its surrounding context.</Paragraph>
    <Paragraph position="2"> Sixteen classes of conceptual inference have been identified and implemented, at least at the prototy2e level. Tnesc classes appear to be essential to all higher-level language corn7re;1ension processes. Among them are causative/resultative (those which predict cause and effect relations) , motivational (those which predict and describe actors' intentions), enablement (those which predict the surrounding context of actions), state-duration (those which predict the fuzzy duration of various states in the world) normative (those which assess .the &amp;quot;normality&amp;quot; of a piece of information - how unusual it is) , and specification (tilose which predict and filltin missing conceptual information in a language-communicated meaning graph).</Paragraph>
    <Paragraph position="3"> Interactions of conceptual inference witi~ the language processes of (1) word sense promotion in context, and (2) identification of referents to memory tokens are discussed. A theoretically important inference-reference &amp;quot;relaxation cycle&amp;quot; is idenk~fied, and its solution discussed.</Paragraph>
    <Paragraph position="4"> The theory provides the basis of a computationally effective model of language comprehension at a deep conceptual level, and should therefore be of interest to computational linguists, psychologists and computer scientists alike.</Paragraph>
  </Section>
  <Section position="2" start_page="0" end_page="0" type="metho">
    <SectionTitle>
TABLE OF CONTENTS
</SectionTitle>
    <Paragraph position="0"> 1 . The Need for a Theory ........ of Conceptual Memory and Inference 2 . A Simple Example .......................... 7 3 . Background .........m...................... 10 4 . A Brief Overview of the Conceptual ...... Memory's Inference Control Structure 15 5 . The Sixteen Theoretical Classes of Conceptual Inference ................... 20 5.1. CLASS 1: Specification Inferences ..... 21 5.2. CLASSES 2 and 3: Resultative ............. and Causative Inferences 25 5.3. CLASS 4: Motivational Inferences ..... 26 ......... 5.4. CLASS 5: Enabling Inferences 27 5.5. CLASS 6: Action Prediction Inferences . 28 5.6. CLASS 7: Enablement Prediction Inferences ........................... 30 ......... 5.7. CLASS 8: Function Inferences 31 5.8. CLASSES 9 and 10: Missing Enablement and Intervention Inferences .......... 33 5.9, CLASS 11: Knowledge Propagation Inferences ........................... 34 5.10. CLASS 12: Normative Inferences ...... 35 5.11. CLASS 13: State Duration Inferences .. 38 5.12. CIiASSES 14 and 15: Feature and Situation Inferences ................ 41 5.13. CLASS 16: Utterance Intent Inferences .......................... 42 6 . Summary of the Inference Component ........ 43 7 . The Inference-Reference Relaxation Cycle in Conceptual Memory ...................... 44 8 . Word Sense Promotion. and Implicit Concept Activation in the Conceptual Memory ....... 48 Conclusion APPENDIX A . Causal Chain Expansion ................. Computkr Example 52 APPENDIX E . Inference-Reference Relaxation Cycle ................... Computer Example 57 REFERENCES .................................... 62 1. The ~leed for a Theory of Conceptual Memory and Inference  aesearch in natural language over the past twenty years has been focussed primarily on processes relating to the analysis of individual sentences (parsing). Most of the early work was devoted to syntax. Recently, however, there has been a considerable thrust in the areas of semantic, and importantly, conceptual analysis (see (~2) , (Ml) , (Sl) and (C1) for example) . Whereas a syntactic analysis elucidates a sentence's surface syntactic structure, typically by producing some type of phrase-structure parse tree, conceptual analysis elucidates a sentence's meaning (the &amp;quot;~icture&amp;quot; it produces), typically via production of an interconnected network of concepts which specifies the interrelationships among the cohcepts referenced by the words of the sentence. On the one hand, syntactic sentence analysis can more often than not be performed &amp;quot;locally&amp;quot; that is, on single sentences, disregarding any sort of global context; and it is reasonably clear that syntax has generally very little to do with the meaning of the thoughts it expresses. Hence, although syntax is an impoatant link in the understanding chain, it is little more than an abstract system of encoding which does not for the most part relate in any meaningful way to the information it encodes. On the other hand, conceptual sentence analysis, by its very definition, is forced into, the realm of gen,e~d~ WULLU knowledge; a conceptual analyzer's &amp;quot;syntax&amp;quot; is the set of rules which can produce the range of all &amp;quot;reasonable&amp;quot; events that might occur in the real world. Hence, in order to parse corlceptually, the conceptual, analyzer must lnteract with a repository of world knowledge and world knowledge handlers (inferential processes). 
This need for such an analyzer-accessible korld knowledge repository has provided part sf the morivation for the development of the following theory of conceptual inference and memory however, the production of a conceptual network from an isolated sentence is only the first step in the understanding process. After this first step, the real question is: what happens to this co~ceptual network after it has been produced by the analyzer? That is, if we regard the conceptual analyzer as a specialized component of a larger memory, then the allocation of memory resources in reaction to each sentence follows the pattern: (phase 1) get the sentence into a form which is understandable, then (phase 2) understand it! It is a desire to characterize phase 2 which has served as the primary motivation fbr developing this theory of memory and inference. In this sense, the theory is intended to be a charting-out oef the kinds of processes which must surely occur each time a sentence's conceptual network enters the system. Although it is not intended to be an adequate or verifiable model of how these processes miqht actually occur in humans, the theory described in this paper has nevertheless been implemented hs a computer model under PDP-10 Stanford 1.6 LISP. While the implementation follows as best it can an intuitively correct approach to the various processes described, the main intent of the underlyinghheory is to propose a set of memory processes which, taken together, could behave in a manner similar to the way a human behaves when he &amp;quot;understands language&amp;quot; .</Paragraph>
  </Section>
  <Section position="3" start_page="0" end_page="0" type="metho">
    <SectionTitle>
2. A Simple Example
</SectionTitle>
    <Paragraph position="0"> The attentive human mind is a volatile processor. My conjecture is that information simply cannot be put into it in a passive way; there are very primitive inference reflexes in its logical architecture which each input meaning stimulus triggers. I will call these primitive inference reflexes &amp;quot;conceptual inferences&amp;quot;, and regard them as one class of subconscious memory process. I say &amp;quot;subconscious&amp;quot; because the concern is with a relatively low-level stratum of &amp;quot;higher-level cognition&amp;quot;, particularly insofar as a human applies it to the understanding of language-communicated information. The eventual goal is to synthesize in an artificial system the rbugh flow of information which occurs in any normal adult response to a meaningfully-connected sequence of natural language utterances. This of course is a rather ambitious project.</Paragraph>
    <Paragraph position="1"> In this paper I will discuss some important classes of conceptual inference and their relation to a specific formalism I have developed (Rl) .</Paragraph>
    <Paragraph position="2"> Let me first attem?t, by a fairly ludicrous example, to convince you (1) that your mind is more than a simple receptacle PSor data, and (2) that you often have little control over the thoughts that pop up in response to something you perceive. Read the following sentence, pretending you were in the midst of an absorbing novel'</Paragraph>
  </Section>
  <Section position="4" start_page="0" end_page="0" type="metho">
    <SectionTitle>
EARLIER THAT EVENING, MARY SAID SHE HAD KILLED HERSELF.
</SectionTitle>
    <Paragraph position="0"> One of two things probably occurred: either you chose as referent of &amp;quot;herself &amp;quot;- some person other than Mary (in which case everything works out fine), or (as many people seem to do) you first identified &amp;quot;herself&amp;quot; as a reference to Mary. In this case, something undoubtedly seemed awry: you ~ealized elther that your choice of referent was erroneous, that the. sentence was part of some unspecified &amp;quot;weird&amp;quot; context, or that there was simply an out-and-out contradiction. Of course, all three interpretations</Paragraph>
  </Section>
  <Section position="5" start_page="0" end_page="0" type="metho">
    <SectionTitle>
II
</SectionTitle>
    <Paragraph position="0"> are unusual in some sense because of a patzntly obvious&amp;quot; contradiction in the picture this utterance elicits. The sentence is syntactically aqd semantically impeccable; only when we &amp;quot;think about it&amp;quot; does the bis fog horn upstairs a1ert:us to the implicit contradiction:</Paragraph>
  </Section>
  <Section position="6" start_page="0" end_page="0" type="metho">
    <SectionTitle>
MARY SPEAK AT TIME T
enablement inference
MARY ALIVE AT TIME T
MARY NOT ALIVE AT TIME T
</SectionTitle>
    <Paragraph position="0"/>
  </Section>
  <Section position="7" start_page="0" end_page="0" type="metho">
    <SectionTitle>
MARY KILLS HERSELF AT TIME T-d
</SectionTitle>
    <Paragraph position="0"> Here is the argument: before reading the sentence, you probably had no suspicion that what you were about to read contalned an implicit contradictiun. Yet you probably discovered that contradiction effortlessly! Could there have been any a prior-i &amp;quot;goal direction&amp;quot; to the three simple inferences above? My conclusio~ is that there could not have been. If we view tne mind as a multi-dimensional &amp;quot;inference space&amp;quot;, then each incoming thought produces a spherical burst of activity about the point where it lands in this space (the place where the conceptual network representing it is stored). The horizon of this sphere consists of an advancing wavefront of inferences - spontaneous proDes Which are sent out from the point. Most will lose momentum and eventually atrophy; but a few will conjoin with inferences on the horizons of other points' spheres. The sum of these &amp;quot;points of contact&amp;quot; represents tne integration of the thought into the existing fabric of the memory in that each point of contact establishes a new pathway between the new thought and existing knowledge (or perhaps among several sxisting pieces of knowledge). This to me is a pleasing memory paradigm, and there is a tempting analogy to be drawn with neurons and actual physical wavefronts as proposed years ago by researchers such as John Eccles (El). The drawing of this analogy is, however, left for the pleasure of you, the reader.</Paragraph>
    <Paragraph position="1"> This killing example was of course more pedagogical than serious, since ic is a loaded ~tterance involving rather black and white, almost trivial lnterences. But it suggests a powerful low-level mechanics for general language comprehension. Later, I will refer you to an example which shows how the implemented model, called MEMORY and described in (Rl), reacts to the more interesting example MARY KISSED JOHN BECAUSE HE HIT BILL, which is,perceived in a particular context. It does so in a way that integrates the thought into the framework of that context and which results in a &amp;quot;causal chain expansion&amp;quot; involving six probabilistic inferences.</Paragraph>
  </Section>
  <Section position="8" start_page="0" end_page="20" type="metho">
    <SectionTitle>
3. Background
</SectionTitle>
    <Paragraph position="0"> Central to this theory are sixteen classes of spontaneous conceptual inferences. These classes are abstract enough to be divorced from any particular meaning representation formalism.</Paragraph>
    <Paragraph position="1"> However, since they were developed concurrently with a larger moiiel of conceptual memory (R1) which is functionally a part of a language comprehension system involving a conceptual analyzer and generator (MARGIE (S3)), it will help make the following presentation more concrete if we first have a brief look at the operation and goals of the conceptual memory, in the context of the com~lete language comprehension system.</Paragraph>
    <Paragraph position="2"> The memory adopts Schank et al.'s theory (Sl.S2) of Conceptual aependency (CD) as its basis for representation. CD is a theory of meaning representation which posits the existence of a small number of primitive actions (eleven are used by the conceptual memory), a number of primitive states, and a small set of connectives (links) which can join the actlons and states together into conceptual graphs (networks) . Typical -of the -links are : tne ACTOR-ACTION &amp;quot;main&amp;quot; link the ACTXON-OBJECT link 6 the CAUSAL link  mthe DIRECTIVE link % and the STATECHANGE link e=l -, Each primitive action has a case framework which defines conceptual slots which must be filled whenever the act appears in a conceptual graph. There are in addition TIME, Location and LNSTrumental llnks, and these, as are all conceptual cases, are obligatory, even if they must be irifsreiitially filled in by the conceptual memory (CM). Figure 1 illustrates the CD representation of the sentence MARY YISSED JOHN BECAUSE HE (JOHN) HIT BILL. That conceptual graph is read as follows: John propelled some unspecified object X from himself toward Bill, causing X to come into physical contact with Bill, and this entire event cause Mary to do something which resulted in her lips being in physical contact with John! Furthermore, the entire event occurred sometime in the past. Chapter 2 of (Rl) contains a fairly complete overview of the CD representation.</Paragraph>
    <Paragraph position="3"> Assuming the conceptual analyzer (see (R2) ) has constructed, in consultation with the CM, a conceptual graph of the sort typified by Tigure 1, the first step for the CM is to begin &amp;quot;integrating&amp;quot; it into some internal memory structure which is more amenable to the kinds of active infezence manipulations the CM wants to perform. ?his initial integration occurs in three stages. First is an initial attempt to replace the symbols (JOHN, MARY, BILL, X, etc.) by pointers to actual memory cance'pts and tokens of concepts. Each concept and token in the CM is represented by a unique L-ISP atom (such as C0347) which itself bears no intrinsic meaning. Instead, the essence of the concept or token is captured in a set of features associated with the symbol. Thus, for instance, an internal memory token with no features is simply &amp;quot;something&amp;quot; if it must be expressed by language, whereas the token illustrated in Figure 2 would represent part of our knowledge about Bill's friend Mary Smith,, a female human who owns a red Edsel, lives at 222 Avenue St., is 26 years old, and so forth. This set ofl features is called C0948's occurrence set, and is in'the implementation merely a set of pointers to all other memory structures in which C0948 occurs. The process of referent identification will attempt to isolate one token, or second best, a set of candidate tokens for each concept symbol in the incoming graph by meansof a feafure-intersecting algarithm described in (Rl).</Paragraph>
    <Paragraph position="4"> Reference identification is the first stage of the initial integration of the graph into internal memory structuxes. The</Paragraph>
    <Paragraph position="6"> (20718 is the token representing the Eclsel wilicil ilary owns, C0846 is tile token for Mary's place of residence;. C0654 is a time token with a numeric .value on the CM's time scale representing i.larP1s time of birth in 19481 va 1  form the beginning - inference,queue -- (input to the spontaneous inference component) , and (3) the storage of the graph dependency links themselves as pointers in the memory.8ust as for simple concepts and tokens, composite structures (actiqns and states) are stored under a unique internal syntbol, and this symbol may also have an occurrence set. In addition, there are several other properties associated with each composite structure S: the recency of S's activation by explicit reference (RECENCY), the recency of S's activation by implicit (inferential) reference (TOUCHED), the degree to which S is held to be true (STRENGTH), a list of other composite structures from which S arose in the memory - its inferential antecedants-(REASONS), and a liig of other composite structures in whose generation S'played a role as antecedant (OFFSPRING). RECENCY and TOUCBED are also pToperties of concepts and tokens, and are used in thebreferent identification process.</Paragraph>
    <Paragraph position="7"> Figure 3 shows the memory structures which result from the conceptual graph of Figure 1 after the initial integration. The net result of the initial integration is a set of starting memory structures, (actually, a list of pointers to their symbols, such as (C2496 C2301 (22207)). Each of these structures references memory concepts tokens and ~ther composite structures.</Paragraph>
    <Paragraph position="8"> Regarding the referent identificatian prosess, fur those concepts. and tokens which could not.be uhiquely identified, new temporary tokens will have beeA created, each having as its initial occurrence set a list of: what is khown about the entity so far.</Paragraph>
    <Paragraph position="9"> After the initial integration, the inference component is applied simultaneously to each memory structure (&amp;quot;point in inference space&amp;quot;) on the starting inference queue.</Paragraph>
    <Paragraph position="11"> (N is the numeric &amp;quot;now&amp;quot; on the CM's time scale) The internal memory structures resulting from the CD graph of Fig. 1.</Paragraph>
    <Paragraph position="12"> Figure 3  4. A Brief. Overview ot th.e Conceptual Memory's Inference Control Structure.</Paragraph>
    <Paragraph position="13">  The control structure which implen~ents the Cbl irlfcrcnce reflex is o breadth-first monitor whose queue at. any moment is a list of pointcrs to depencbhcy structures which have arisen by infcrencc from thc beginning sti-uctures isolated during the initial integration. It is the inference monitor's task to exmine each dependency structure on the queue in turn, isolate its predicate, prepare its arguments in a standard format, collect several ti11e: aspects from the structure's occurrence sct-, then call the inference*molecule associated with the predicate, passing along the arguments mil time info~mation.</Paragraph>
    <Paragraph position="14"> All inferential howledge in the (3rI is contained in inference molecules, which 1 ie in one-one correspondence with conceptual predicates. inference molecule is a structured LISP program which can perform arbitrary discrimination test's on a relevant dependency structure's features and features of all involved concepts and tokens, and ~d~ich &amp;an call on specialist programs to car1-y out standard test information retrieval functions. (&amp;quot;CAUSER&amp;quot; is an example of such 2 specialist. It will scan back causal sequences from structure S until it locates a volitional action, then it returns the actor of that action as the primary causing agent of S ) Inference molecules are hence multiple-response discrimination networks dlose responses are cenceptual inferences (of the various theoretical types to be described) which can be made from the clependency structure. Each potential inference within the inference nlolecule is called an inference atom.</Paragraph>
    <Paragraph position="15"> Tile contribution of an inference atom which has been found applicable to the dependency structure reports eight pieces of infonnation to a component of the monitor called the structure generator, whose job it is to embody each ne\i inference in a memory structure. These.eigh't pieces of information are the following: 1. a unique mnemnic which indicates to ~QIlich of the 16 theoretical clqsses the ne\c inference belongs (this mnemonic is associated liitT.1 the new st.ructure only tcmp~rarily on the inference queue for subsequent control purposes) 2. the &amp;quot;reference name&amp;quot; of the genernt-infi inf el-enGe atom (each atom has a unique name which is :issociatcd with the new memory structure for control purposes) 3. the dependency structure (a predicate which binds together several pointers to concepts, tokens and other structures), which is the substance of the new inference  a detault &amp;quot;significance factor&amp;quot; which is a rough, ad hoc measure of the inference's probable relative significance (this is used only if a more sophisticsrted process, to be described, fails) 5. a REASONS list, rihich is a list1 of all other stzuctures in the Chl which were tested by the discrimination net leading up to this inference atom. &amp;very dependency structure has a REASONS list recording how the structure arose, and the .REASONS list plays a vital role in the generation of certain tpes of inference  a &amp;quot;propagation strengrh factor&amp;quot; \\-hich, \chon multiplied by the SmY;TIls (degree of belief) of all structures on the WOES list, produces the SI'RENG'ITI of the nerc inference. (There is a need for better heuristics here incidentally -- see (Zl) for instance.) 7. a list of modifying structures (typically time aspects) which become the new inferred structure's initial occurrence set 8, propagation and strength factors for each mdifyihg struc, ture Fiyre 4 illustrates the small implemen-ted h%GCl&amp;WGE, (something undergoes n negative changc on some scale) inference molecule. It is inqludcd to communicate the gestalt rather than corrcct specifics dt this early stage of development.</Paragraph>
    <Paragraph position="16"> The two other main components of the inference monitor are the evaluaror and the structure merger. It is the function of the evaluator to detect exact ancl fuzzy contradirtions and confirmqtions (points of contact) bct~een eacn neli inference ds it arises and. e.ui.s t inp memor). depa~dcncy 5 true tures. Because &amp;quot;fuzziness&amp;quot; in the matching process implies ;lcccss to a vast number of heuristics (to ill&amp;tratc: uoulcl it be more like our friend. the  Lawyer or our friend the carpenter to own a radial arm saw?), the evaluator' delegates most of the matching responsibility to programs - again organized by conceptual predicates - called normality molecules (&amp;quot;N-molecules&amp;quot;) . N-molecules, which will1 be discussed more later, can apply detailed heuristics to ferret oqt fuzzy confirmations and contradictions. As I will describe, N-molecules also implement one class of conceptual inference Confirmations and contradictions discovered by the evaluator are noted on special lists which serve as sources for possible subsequent responses by the CM. In addition, confirmations lead. to invocation of the structure merger, which physically replaces the two matching structures by one new aggregate structure, and thereby knits together two lines of inference. As .events go, this is one of the most exciting in the CM.</Paragraph>
    <Paragraph position="17"> Inference cutoff occurs when the product of an inference's STRENGTH (likelihood) and its significance factor falls below a threshold (0.25) . This ultimately restricts the radlus of each sphere in the inference space, and in the current model, the threshold is set. low to allow considerable expansion.</Paragraph>
    <Paragraph position="18"> Figure 5 depicts the overall strategy of the inference monitor. (Rl) contains a fuller account of the inference control structure, whose description will be terminated at this point. Enter.. , sixteen- theoretical classes of conceptual inference which fuel this inference reflex.</Paragraph>
    <Paragraph position="20"/>
    <Paragraph position="22"> 5, The Sixteen Theoretical Classes of Cohceptual Inference.</Paragraph>
    <Paragraph position="23"> It is phenonienological that mbst of the human language experience focuses on actions, their intcndcd and/or resulting states, and tllc causality and enabling states which surround them. 'Tllore seems 'to be an inescapable core of notions related to actions. causation and cnnble~ent \t-l~ich alr~~ost anyone w11o introspects long enough 'wi 11 indcpcndently di scovcr . In his &amp;quot;Cold i(arriorU nlodel; ilbelson (Al) \?as pcrhq~s the first to attempt :I colilputationally formal systematization of this fundm~cntnl core of meaning relations. It is of tllc utmost primacy in his system, which models thc political ideologies and behavior patterns of a rabid right-wingcr, to discaver 'and relate the underlying i~u~osCSy enablemcnt and causality surrouncling events in some 1l)pothctical intcrnat-ional scenerio or crisis. Again, in Scha~k ~t alls CD theory, the sale emphasis arose more or less incle~~endcntly in a system of meaning represelltdtion for everyday utterances: causality, actions, state-changes and enablement were recurring.themcs. Not su~l-~risfngl\~, tile same notions have emerged as central in my analysis of the infcrencc reflex: over half of the-16 classes relate to this &amp;quot;nction-intcntion-cnusalityenablement - kno~qlecige&amp;quot; conp1c.x.</Paragraph>
    <Paragraph position="24"> In the following descriptions of these 16 classes, keep in mind that all types of-inference are apelicable to every subcomponent of every utterance, and that the 01 is essentially a parallel silwlation. Also bear in mind that the inference evaluator is constantly perf61-rning matching operations on each new inference in order to detect interesting' interactions between inference spheres. It sllould also be emphasized that conccl7tuni inferences are l~ohabilistic and predictive in naturc, and that . 1w . makine then~ in nl~parently iiasteful quantities, the 01 is not seeking one- r'esult or tnith. Rather, inferent fa1 expansion . is a,n endeavor which broadens each piece of informat ion into its surr~undillg spectrum to fill out the inCormation-rich situation to ~Ishich the information- lean utterancc ]night refer . The ~~1''s - gropings will resable more closely the solutiorl of il jigsaw puzzle than the more goal-directed solution of a cross\trord p..~zzlc.</Paragraph>
    <Paragraph position="25"> The following discussions can only sketch the main ideas behind each inference class. .See (~1) for a more cornprehensive~ treatment.</Paragraph>
    <Paragraph position="26">  5.1 CLASS 1 : SPECIFICATION KNFERENCES PRINCIPLE : The Bl must be able to identify and attempt-to fill in each missing slot of an incoming conceptual graph.</Paragraph>
    <Paragraph position="27"> ~IPLES : **~ohn~.was driving home from work.</Paragraph>
    <Paragraph position="28"> He hit Bill's cat.</Paragraph>
    <Paragraph position="29">  (inference) It was a car which John propelled into the cat.</Paragraph>
    <Paragraph position="30"> **Jolln bought a chalk line.</Paragraph>
    <Paragraph position="31"> (inference) I-t was probably from a hardware store that John bought the chalk Jine.</Paragraph>
  </Section>
  <Section position="9" start_page="20" end_page="20" type="metho">
    <SectionTitle>
DISCUSSION:
</SectionTitle>
    <Paragraph position="0"> Our use of language presupposes a tremendous underlying lufoiqledge about the rcorld. Because of this, even in, say, the most explicit teclu~ical ~rit-ing, certain assumptions are made by the ~i~itar (speakera about the comprehender ' s knowledge - - that he can fill in the plethora of .detail surrounding each thought. In the 01, this corresponds to filling in all the missing conceptual slots in a graph.</Paragraph>
    <Paragraph position="1"> The utility of such a process is t~iofold'. Firsr, Cbl failures to specify a missing concept can serve as a source of requests..for moreinfarmation (or goals to seek out tlla't information by CM actions if.01 is controlling a robot). Second, by predictively completing the graph by application of general pattern howledge of the modeled world, novel relations qong specific concepts and ~okens will arise, and these can lead to potentially significant discoveries by other inferences.</Paragraph>
    <Paragraph position="2"> To illustrate, a very common missing slot is the instrumental case.</Paragraph>
    <Paragraph position="3"> lie generally leave it to tile imaginative powers of the hearer to su-mise the probable instrmental action by which some action occurred: (husband to wife) I went to Sl-lARS to'day.</Paragraph>
    <Paragraph position="4"> (wife to husband) How? I had the car all day! Here, wife fills in the instrumental slot as: &amp;quot;Husbimrl drove a car to SINIS&amp;quot; (clearly relying-on some specific hcur~st~cs,,sucl~ as tllc distaqce from thcir home to SEARS, etc.), and this led to her discovery of a contradiction.</Paragraph>
    <Paragraph position="5"> That she wy have been premature in the specification (and:had later to undo it) is of secondary importance to the phenomenon that she did. so spontaneously .</Paragraph>
    <Paragraph position="6"> In the CM.specification inferences, as all inferences, are implemented in the form of structured progrryns which realize discrimination nets whose fzern~inal nodes are concepts and tokens rather- than inferences, as in generrjl inference molecu'les. These specification procedures are called specifier molecules (&amp;quot;S-molecules&amp;quot;), and are quite similar to inference molecules. Fig. 6 sllows a small prototype of the PROPEL specifier molecule Mlich can predictively fill in the missing object of a PROPEL. aktion, as in llJohn hit Pete.&amp;quot; That p.irticular &amp;quot;specifier atom&amp;quot; is sensitive to context along one simple dimemion if the actor is holm t~ be grasping an object (this prototype doesn't care wnerner it's P wet noodle or a bludgeon), at the time of the action, the molecule will infer that it was the grasped object lihich \$as propelled, as in ItJohn picked up the flolier pot. He hit Pete.&amp;quot; Otherwise, the molecule will assume '%and of the actor&amp;quot;. This is ridiculously  oversimplified, but it represents a certain philosophy 1 \&lt;ill digress a moment to reveal.</Paragraph>
    <Paragraph position="7"> I, as many other people (see W1, H1, C1, for anstance), have come to be1 ieve that passive data structures are fundamentally awkward for repre- null / senting knowledge in any detail, partitularly for the purposes typified by (SPROG *-PROPEL* (UN v AC 08 OF lw #(XI xz ~31 ( This is a simp1 i f ied speci'f'ier (COND ( (NULL (CADR V) molecule co~itaining 'just an3 ob'ect (COND ( [AND dSETQ XI -CC (6ISA @- d%HAND) specit-iw atom. (NULL (CAPR v)] is @PART o- ACII) I test for lack of object s~eci f icat io (ether specifier atonls go here)  the hand of the acto+, assighing it to X1. It then checks to sea if an$h,ing is located in XI.</Paragraph>
    <Paragraph position="8"> If sq:lsth~nc_l is found, it is bound to X2, and the LOC structure which expresses this infornlat i on is.</Paragraph>
    <Paragraph position="9"> bound to X3. If nothing 4s located in the actor's hand, hls hand itself (XI) is inferred. The (LIST X3) in the first SP cal l is the list of REASOPJS (just.one here) j.ustifying the specificatibn ~f the-object the actor uas'h~lding as th2 03joct of ths PROPEL, The PROPEL specifier molecule.</Paragraph>
    <Paragraph position="10"> this sinil?le P1lOPLL exanplc. The needs for &amp;quot;special. case heuristics&amp;quot; in cvcn such a hodest ppcrat ion as this quicltly overtaEc one s prol~~rss at dcvis ing fJeclarativef' Ilremorx strucrures . Programs, on the other hand; are quick ant1 to the point, quite flexible, and have-as much &amp;quot;aesthetic potelltiB1&amp;quot;a.s e-ven the most elegant deatarative structures. life-sire procedure for this very narrow process of specifying the lnissillg object -of a PROPEL action ivould obviously rec~fiirc-many more tests for related contexts (&amp;quot;Jolm was racing clown. the hill on lzis bike. He hit ill. ) But independ-cnt of the f idel ity with ~,lhicil any given S-molecule executes its task, there is a vcry important claim buried hoth here and in the other inferential procedures in tile (31. It is that there are certain ccqtral tasks in wIi.ic11 the ~lecision -- proccs,~ ~~lust seek out the contcxt , rather tlli11 context seeking out the qpropriate decision process. In other ~qords, much of* the inference cal2ability requircs specialists I~IO ino~\r a priori exactly rihat dimensions of context could yossibly affect thc generation of cvery potential inference, and tl-lcse specialists carry out active probes to search. for those dbnensions before ory inference is generated. I can imagine no &amp;quot;uniform contcxt mechanism&amp;quot; which accounts for the i~uman's diverse ab2lity to attend to the relevant and ignore the sul&gt;erfluous. yv conjecture is that tile mecl~anism for contextual guidance of inference is highly distributed tl~roughout the memory rather than. ccntral i zed as a corngoncnt of the. memory' s control structurc .</Paragraph>
    <Section position="1" start_page="20" end_page="20" type="sub_section">
      <SectionTitle>
5.2 CLASSES 2 and 3: RESULTATIVE and CAUSATIVE INFERENCES
</SectionTitle>
      <Paragraph position="0"> PRINCIPLE ? If an action is perceived, its probable resulting states should be inferred (RESULTATIVE).</Paragraph>
      <Paragraph position="1"> If -a state is perceived, the general nature of its probable causing action (or a specific action, if possible) should be inferred (CAUSATILT). EWPLES: **blary hit Pete with a rock.</Paragraph>
      <Paragraph position="2">  (inference) Peteprobably became hurt. (RESULrI'ATI\T.) **Bill was angry at Nary.</Paragraph>
      <Paragraph position="3"> (inference) Llary ' may have done something to Bi 11. (LlUSilTI\F)</Paragraph>
    </Section>
  </Section>
  <Section position="10" start_page="20" end_page="20" type="metho">
    <SectionTitle>
DISCUSSION:
</SectionTitle>
    <Paragraph position="0"> These t~o classes 6f inference embody the 01's ability to relate nqtions apd states 'in causaI sequences relative to the (S1I.s models of causality. In addition to serving as the basis for blOTIV:\TlOS2L in-ferences and contributing to the general e~~ansion process, ~lUS~~TI\l~ and RESULTATILT inferences ofbten achic~~c the rather exotic folm of undcrstanding I have termed &amp;quot;causal cllain expansion.&amp;quot; It is this process uhidl makes e~~licit the oft-abbreviated statements .of causality: language communicated predications or causality must aluays (if only subconscieusl!-) be explained in ternis of thc compre11ende~-'s models of causality, and failures to do so signal a lack of understanding and form another souKe 01 QI queries for mor-e information. Cabsal e.qansion successes on the other hand result in important intervening act ions and states \t?~ich draw out (&amp;quot;~oucI~&amp;quot;] surfounding context and serve as the basis far inferences in other categories. null Appendix A contains the computer printout from &gt;lE\X)R1,, tracing a causal expansion $or &amp;quot;Mary kissed John because he hit Bill&amp;quot; in a particular context \3hich makes the explanation plausible.</Paragraph>
    <Section position="1" start_page="20" end_page="20" type="sub_section">
      <SectionTitle>
5.3 CLASS 4: MOTIVATIONAL INFERENCES
</SectionTitle>
      <Paragraph position="0"> The desires (intentions) of an actor can frequently be inferred by analyzitlg the states (ESULTATIVE inferences) which result 'from an action he executes. These \VAN?- STATE patterns are essential to understanding and should be made in abundance.</Paragraph>
      <Paragraph position="1"> **John pointed out to Lkry that she hadn-. t done her chores.  ( inference) Mary may have felt guilty . (RESULTAT IVE) (inference) John may have wanted Mary to feel guilty. (MOTIVAT I ONAL ) **Andy blew on the hot meat.</Paragraph>
      <Paragraph position="2"> (inference) Andy may have wanted the meat to decrease in temperature.</Paragraph>
      <Paragraph position="3"> DISCUSSIOK:  -Language is a dual system of communication in that it usually cornhunicates both the actual, and, either explicitly or by inference, the  - null intentional-. ifliere the intentions of actors (the set of states .they desire) are not explicitly communicated, they must be inferred as the immediate causality of theI action. In the Ch! candidates for bDTIVATIONAL inferences are the WSULTATIVE inf er'ences f3 the oil can produce trorn an action A: for each RESULTATIVE inference Ri which the (34 could make from A , it conjectures that perhaps the actor of A des5,red I? i Since the generat ion of PRTIVATIONAL inference is dependent upon the results of another class of ihference (in general, the actor could have desired things causally removed by several inferences from the immediate resul tS of his act ion) , the bY3TIVATIOW inference process is implemented by a special proceuur* POSTSCAN 1~~11ich is invoked betweer &amp;quot;passes&amp;quot; of the nuin breadth-first monitor.</Paragraph>
      <Paragraph position="4"> Thesc passes will bc discussed nlorc later.</Paragraph>
      <Paragraph position="5"> Once generated, each &gt;DTIVATIOW w'i11 generally lead back\,lard, via CAUSATILT inferences, into an entirc causal chain ~dlich lead up to the action. This chain will f12quently connect in interesting ways with chains working fonuard from otllcr actions.</Paragraph>
    </Section>
  </Section>
  <Section position="11" start_page="20" end_page="20" type="metho">
    <SectionTitle>
5.4 CLASS 5: ENABLING INFERENCES
</SectionTitle>
    <Paragraph position="0"> Every action has a set of enabling conditions -- conditions which must be met for the action tc~ begin or proceed. The Of needs a rich h0\4ledge of these conditions (states), and should infer sutable ones to surrounJ. each perceived action.</Paragraph>
    <Paragraph position="1">  &amp;quot;~olm saw IrJary yesterday, (inference) Jahn and 'kry were in the same- general locjtion sometime 'yesterday.</Paragraph>
    <Paragraph position="2"> **Mary told Pete that John MS at the store(inference) Mary he~+r-tliat Jolm was at' the store. DJSCUSSION:  The example at the beginning df- the paper contained a co~~trndiction which could be discovered only ,making a very simple enabl ing inference about the action of speaking (any action for that mattcr) , namcly that tllc actor was alive at the time! Enabling inferences can fruitfully lend from the known action through the cnabl ing states to prdicnt ions about other actions the actor niight liave performed in order to set up bthe enabling states for the primary action.</Paragraph>
    <Paragraph position="3"> This idea is closely related to the next class of inference.</Paragraph>
    <Paragraph position="5"> Whenever some WANT STATE of a potential actor is knohn, predictions about .possible actions the actor might perform 'to achieve the state should be attempted.</Paragraph>
    <Paragraph position="6"> These predictions will provide potent potential points of contact for subsequently perceived actions.</Paragraph>
    <Paragraph position="7">  EXANPLES : **John '~vants some nails.</Paragraph>
    <Paragraph position="8"> (inference) John might attemp-t to acquire some na-ils. **Mary is furious at Rita.</Paragraph>
    <Paragraph position="9"> (inference) bhry might do something to hurt Rita.</Paragraph>
  </Section>
  <Section position="12" start_page="20" end_page="20" type="metho">
    <SectionTitle>
DISCUSSION:
</SectionTitle>
    <Paragraph position="0"> Action prediction inferences serve the' Inverse role of EfOT1VATIONA.L inierences, in that they work forward from a kno~m \VANT STATE pattern into predictions about future actions which could produce the desired state.</Paragraph>
    <Paragraph position="1"> Just as a bRTILT~lTIONAL inference relies upon RESUZTATIVE inferences, an ACTIOX PRLDICTION inference relies upon CI\USATIVE Wferences which can be generated from the state the potential actor desires. Because it is often impossible to anticipate the ef - ic causing action, ACTION PREDIDION inferences typically will be,'more general expectmies for a class of ~ossible actions. In the nails example abve, the general expentancy is, sim~ly that John may do something which normally causes a PTIWNS -(in CD terminology,, a change of location of some object) of some nails from some~\~here to himself. Often the nature of the desired state is such that some specific action can be predicted (&amp;quot;John is hungry. . . John will ingest food.&amp;quot;) By mahing specific actio~l predictions, a new crop of enabling inferences can be predicted (&amp;quot;John must be near food. &amp;quot;, etc. ) ,, and those conditions which cannot be assumed to be already satisfied can.serve as new \LAW-SpTEs of the actor.</Paragraph>
    <Paragraph position="2"> Thus it is through bDTIVATIOhlAL, ACTION PRFBICTION and ENABLING inferences that the CM can model [pr~dict) the pr-oblem-solving bel~avior of each actor.</Paragraph>
    <Paragraph position="3"> Predicted act i~ns ~lihich match up \ii th subsequently perceived conceptual input serve as a very real measure of the 8!'s success at piecing together connected discourse and stories. 1 suspect in addition that ACTION P~ICTION inferences will play a key role in the eventual solutions of the &amp;quot;co~ltextuhl guidance of inference&amp;quot; problem. Levy {Ll) ha5 some interesting beginning thoughts on this topic.</Paragraph>
    <Paragraph position="5"> infcrenccs, because the)* sttempt to predict an nction from an enabling state ~IO~II to be desired by a \iould-bc actor. I\llc-reas an ilCTIOS PRLDIC-TIOS infcrcncc- pi-cdic ts. 3. possible future act ion -to fulfill the dcsil-ed state, cnablemcnt prediction rlra\is out thr motivation oi the desire for the state 11)- identif'ying n probable action the statc \tould cnable. .Ilthougl~ (as \&lt;it11 :ILTIOS FIUiD1CI'IOS infercncc) it ill rrequcntly happen that no specific action ccul bc anticipated (since most states covld cnablc infinitely many , specific actions), it is ncvcrti~.clcss possible to' form general prcdictions about the nature of rrestrictions on) the enabled raction. If, for example, John mlks over to Slrll-y, then a RISU112'.-lTI\E inference is that he is near lary, and a ;\K)TI\-ATIOLU inference is .that he \cants to be. near \LIRY At this point an 1;UULDEZT PEDIC'l7OS infcrcnce can bc made to represent the gencral class of intcrnctions .Jolm might have in mind. This \&lt;ill be of'~pa~ticu1ar significance if, for instance, the 01 h~orit; already that Jolm had something t.o tcll her, since thcn the infcrr-cd act ion pattern 1,-oiould match quite ~cll the nction of verbal comu~ication in \ihichmthc state of spatial proximity plays a key enabling role.</Paragraph>
    <Paragraph position="6">  5.7 CLASS 8: FUNCTION INFERENCES PRINCIPLE: Control over some physical obJ-ect P is usually desired by a potential sctol: because )LC is algaged* in an alg.oritl?m in 1~11ich P plays a role. The 01 silould attempt to infer a probable action from its knowledge of Y's normal funct ion.</Paragraph>
    <Paragraph position="7"> EXQIPES: **Nary \cants the book.</Paragraph>
    <Paragraph position="8"> (inference) Mary probably wants to rend the book.</Paragraph>
    <Paragraph position="9"> *.*Jolm wants a knife..</Paragraph>
    <Paragraph position="10"> (inferenceJ J-oh probably \jWantsm to' cut something rri'th the knife,  **Bill IiBs to pour sundaes dam girls' dresses.</Paragraph>
    <Paragraph position="11"> Bill asked Pete to. had him the sundae. . .</Paragraph>
    <Paragraph position="12"> Function inferences E017:1 a very diverse, rather colorful subclass of LrZULE\KhTaPWICTIO?; infer~nce.</Paragraph>
    <Paragraph position="13"> The underlyiflg principle is that desire ofbinrmediate control over-an object is ~suall!~ tantamount ro a des-iresto use that objest in the norn:al function of objects of that t?~c, or in1 some function Wh 1s peculiar to the oLlj tct -and/or actor (third example abve). In the a!, normal functions of objects are stored as (hXT S Y) patterns, as in Eig. 8 for things that are printed matter. Before applying SFCT patterns, the Qi first checks for unu3ual relations involving fl~e specific actqr arid specific object (by escludifig paths iilhich include. tlle nodl ISA relations between, say sundae and food). Thus, that Bill is Imor~n to requn-e su~draes for slightly different algoritluns from m~st people vill be discovered and uscdin the prediction. The result of a</Paragraph>
  </Section>
  <Section position="13" start_page="20" end_page="20" type="metho">
    <SectionTitle>
5.8 CLASSES 9 and 10: MISSING ENABLEMENT and INTERVENTION INFERENCES
</SectionTitle>
    <Paragraph position="0"> PRINCIPLE: If a \tlould-be actor is ho\m to have bceil ~msuccessful in achieving some action, it is often possible to infer the absence of ane of the action's enabling states (31ISS null 1% LIZi3L131EEUT]. If a potential actor is horn to desire that some action cease, itm can be predicted that he ~cill attempt to remove one or more enabling states of the action (I~~'LR~l~hTION] .</Paragraph>
    <Paragraph position="1"> EXAMPLES: **Mary couldn't see the horses finish.</Paragraph>
    <Paragraph position="3"> She cursed the man in front of her...</Paragraph>
    <Paragraph position="4"> **Mary saw that Baby Billy was running out into the street. (inference) Mary will pick Billy off the ground. (INTERVENTION) She ran after him...</Paragraph>
    <Paragraph position="5"> DISCUSSION: Closely related ta the other enabling inferences, these forms attempt to apply horiledge about enilblement relations to infer the cause of an action's failure (in the case of MISSING LWLBlEEVT),, or to predict a \L'h\'T XOT- STATE lihich can lead b~- act ion predict ion inference to possible actions of intervention on the part of the 1\A\Ter. In the second example above, Pkry (and the 01' first niust realize (via RESULTATWE inferences) the potentially undesirable consequences- of Billy's running action (i.e., possible hZGCWUGE for Billy) From th.is, the CbI can retrace, lo~ate the running action ~chich could lead to such a hTGmtYGE, collect its enabling states., then conj-ecture that Mary might desire ,to annul one or more af them. Mong them tor instance would be that Billy's feet be in intermittent PkB'S-COhT with the ground. From the (\\'ANT (NOT (EIIYSCONI' FEET GROUbLD))) structure, a- subsequent ACTION PN3ICTION inference can arise, predicting thnt bhryjmight put an end to (PtIYSCOK FEET GROUND) . This jvill in turn ~C~UJ 1-c1 her to be located near Billy, and that prediction \&lt;ill match the RliSUIJT12TIi7~ inicrence made from her directed running (the ncxt utterance input), hitting the two thoughts together.</Paragraph>
    <Paragraph position="7"> infer othcr knoideledge whicll must also be available to the adtor. Since nost conceptual inferences involve the intentioris lof actors, this modeling of .howledge is crucial.</Paragraph>
    <Paragraph position="9"> **John sa\i Maly beating Pete with a baseball bat.</Paragraph>
    <Paragraph position="10"> (interence) Johj probably s knew that Pete was getting ' hurt.</Paragraph>
    <Paragraph position="11"> **Betty asked Bill ior .tKe aspif in.</Paragraph>
    <Paragraph position="12"> (inference) Bill probably surmised tjiab Betty uasn'! t feel -ing well.</Paragraph>
    <Paragraph position="13"> DISCUSSION: Nodeling the knowiedge of potential actors is fundamentally difficult . Yet it is essential, since most all intention/prediction-related inferences n~ust be based in part on guesses about what Mo~ilcdge each actor has available to him at various times. The ChI currently models others' howledge by &amp;quot;introspecting&amp;quot; on its orm: assuming another person P has access to'the same kinds of information ns the 01, P might be expected. to makc some of the sanlc inferences. the 81 does.</Paragraph>
    <Paragraph position="14"> Since the 01 preserks n logical connecti.~:ity mng all its inferred structures (by the PJL~SOXS and OFFSPRT?6 properties of each structure), after inferences of othca types ha,ve arisen fro111 so~ne mit of information U, the C?! can retbrn, determine \iho the original fact U, locate U's OFFSPI?I?PG (those other nemory structures which arose by inference from U),, then infer that P may also be asare of each of the offspring. hs with btDTIVATIOW inferences (~\~hi.ch rely on the WSULTATIPE inf crcnces from a structure) , mOlr;LT.I)GE .PROPACLATION inferences are implemented in the procedure POSTSCAS- whicl~ runs after the initial breadth- f ir'st inference expansion by the monitor.</Paragraph>
    <Paragraph position="15"> ?.lodeling o thcrs ' ho~cledgc denancls n rich klo\iledgc of .what is normal in Tile ~corld. (&amp;quot;does John Smith how that kissing is n sitgn .of gffcct ion?&amp;quot;) . In fact, all inferences must rely upon default ossunyti~np~ ahoutit nonylit?;, sincc most of the Q? s .ho~~~ledgc' (and presumal~ly a. laman' s) csists in the folm of general ~atterns, rather than specific relations among specific concepts and tokens. The next lass of .infcrcnce 111yl-anents my l~elief that patterns, just as inficrences, should bc rcalizcd in 'the 01 1,). - active rrog~nms rather tlla11 by rxissive dccltlrat ivc data structures.</Paragraph>
    <Paragraph position="16"> 5.10 CLASS 12 : NORMATIVE INFERENCES PPINCIPU: The 01 lnust mke heavy relia~lce.upon programs wnlnl, encode comonsense pattern inionnation about the modeled ~iorld.</Paragraph>
    <Paragraph position="17"> Wlcn the retrieval ~d a sought-after wit of infonilation fails, the relevant normality program should be cxccuted on (pattern applied to) fiat infohnation to asscss its likelihood in 'the absence of cxpl icit inf orhat ion ..</Paragraph>
    <Paragraph position="18">  There are several lo\(-level information retrieval p.roceclures in the 01 \;i~ich search for explicit i~~folmat ion units as Girccted by spccif ic ixferellce inolecules. Sucll sci+l.ches are on the i?nsl s of - forn~ alone, and successesresult in precise matches, while failures are total. If there were no recourse for such failures, the Cll would quickly ~rind to a halt, being unable to makc intelligent assumptions There must be so~s nnre positive and flexible n:echanism to -ameliorate &amp;quot;syntactiLt'-. Lookup failures. In the Of, Ellis ability $0 ,make intelligent assumptions is ip?,lementi ed by having the lo\\*-level 1-oolup procedures defer control to the appropriate normality molecule (N-molecule) which will perform systemmatic tests organized in single-responw discrimination nets, to the unlocatable in^ formation. The goal is to arxive at. a terminal node in the. net d~cre a real number between r and 1 is 1-ocated. 11 swc sequence of tests leads to such a number, the K-inolccule returns'. it as the asscsscd likeli hood (&amp;quot;compatibility&amp;quot; in fuzzy logic t.eiminolo~.)' (ZI),) sf X beins true. Nthough the test in the N-molecules are themselves ciiscrete, they result in the fuzzy compatibility. The point of course is that the tests can encode quite diverse ad very -specific heurist.ics peculiar to each small domain of patterns; For instance, based on hbwn (or N-molecule infcrrab1e.-one N-molecule can call upon otherq in its testing p.rocess!) features of either Jo)m or the hammer, we ~voulcl suspect athe compatiba?lity of each of thc follo\~ing four conj ectures. to form a decreasing sequence :  1. John Smith owy something. (very likely, but dependent on his age, society in which he lives, ctc .) 2 John Smith oms a hammer. (probably, ':ut potentially related to featurcsaof Jolin, such as his profession) 5. John Smith oms a claw hammer with a \iooclen handle.</Paragraph>
    <Paragraph position="19"> (maybe, but again dependent on featurcs of John and models o'f hmcrs in general -- i .c., IIOK likely is any given ,hanmer to have a claw and icoodcn l~andle?) 4. Jolm Smith owns a 16 0:. Stanley clau hmcr ~ith a steel-reinforced \cooden hnndle and a. tack puller- on the claw.. (likelihood is quite low unlcss the K-moleculc can locate some specific hints, such as that</Paragraph>
    <Paragraph position="21"> Jolln ~suall~~bui's~o~d equipment, etc . ) A successful N-molecule assessrn6nt results in the* creation of .the assessed information .as a per~~anent\l, eeyl ici t memory set ructure \cl:hose STRISGTI i is the assessed compatibility.. 'This structure is thc normatire inference.</Paragraph>
    <Paragraph position="22"> One is quickly arced by his om ability to rotc (usually quite nccuratc1)-1 comonsensc conjecture s~ic11 as these, and. thc- process sccms usually to hc quite sensitive to features of the entities invol~cd -n tllc conjecture. It is my feeling that importahT insights. can bc gni~lecl via a ~fi~rc thorough tt investigation of the normative inference&amp;quot; process -i 11 huntms .' A.not11cr role of K-molcculcs is' menr ioncd in (111) ~ith rcspcct to the infercncc-rcfercnce cycle I vill dcscril&gt;c shortly.</Paragraph>
    <Paragraph position="23"> 1. 9 sho~ the substance of a prototype N-molecule for assessihg dependent). structures of the form (0hX 1' )ob (person I' oms object X ).</Paragraph>
    <Paragraph position="24"> s P a rneniber. of a pure cobimunal society, or is it an infant? if so, very unlikely that P puns,X otherwise, does X have any drstinctihfe conceptual features? if so-,..assess each one, form the.product-of likelihoods, ahd call it N. fl will be used at the end to mitigate the li.kelihood whlch uould normal lgabe assigned.</Paragraph>
    <Paragraph position="25"> is~X l iv'ing? if so, is X B person? i s Pg a slave ouner: and does X possess character / $ t i cs of a slave? if so, likelihood is IOU hut non-zero otheruise likelihood is zero otherwise, is X a non-human animal Or a plant? if so, is X donrestic-in P's culture:! if so, does P have a fear of X's or is P al lergic to X's of .this type? if so, likelihood ys 101.1 otherwise, Iike'laihooci is-moderate otherwise, is X related to actions P. does in any special way? i f so, l i ke l i hwd i s bow, but non-zero o therw i se,- 1 i ke l i hood i s near-zero otherwise, does X have a normal function? if so, does P clo actions like this normal f.unct,ion? (Note here that we aould want to kook.&amp; P*s profession, and actions commonl~ as60ciated with that profession,) if so, Iik'elihood is nioclerateli high  otherclise, is X a conlrnon personal' item.if so, is ist s value ui thin P'niedns if so, likelihood is high if not, likelihood is low, but non zero otheruise, is X a common household. item? i f so, i s P a homeowner? i-f so, is X within P.'s means? if so, likelihood'is high otherwise, likelihood. is moderate  about themcpecterf (fuzzy] cluration of an arbitrary state so that informattion in the CM car1 be kept up to date. LXAhPLES: **~ohn handed Mary the orange peel.</Paragraph>
    <Paragraph position="26"> {tomorrow' IS Wry still holding the orang6 peel? (inference) Almost certainly not.</Paragraph>
    <Paragraph position="27"> **~ita ate lunch a half hour ago, Is she hungry yet? (inference) Unlikely.</Paragraph>
    <Paragraph position="28"> IIISCUSSION: Time features of states relate-in critical ways to the likelillood those states will be true at s'ome given time.</Paragraph>
    <Paragraph position="29"> The thought of a scenario ~cl~ereln the 01 is informed that hhry is holding an orange peel, then. 50 years &amp;quot;later, uses that information in the generation of some other inference is a bit unsettling! The M must simply possess a low-level function whose job it is to preciilct mornal durations of states based on the particulars of the states, and 'to use that informatim in marking as &amp;quot;terminated&amp;quot; those states whose 1 ikelihood has diminished below some thresl~olld . hly conjecture is that a human notices and updates the temporal truth of a ;tate only when he is about to use it in some cognitive activity -- ' that most of the transient howledge in our-heads is out .of date Wtil \ie again attempt to use it in,- say,.some inference. Accordingly, before using any state information, the CM first filters it. througlt the STATE DURATIOIt inference' proccss to arrive at an updated estimate of the state's likelihood as a .function of its known starting time (its TS feature,-in CD notation-) .</Paragraph>
    <Paragraph position="30"> The *lenlentation of this process in the Q! is as follows: an (NDLTR S ?) structure is constructed f6r the state S who% duration-is to be predicted, and this is passed to the NDUR specifier molecufe: The NDUR S-molecule applies discrimination tests on featur~s of the ~bjects involved in S. Terminal nodes m tne net are duration concepts (typically fuzzy ones), such as #ORDERliOUR, #ORDERY&amp;U. If a terminal node can be successfully reached, thus locat-ing such a concept D, the property MNU: (Cltnractcristic time-.function) is retrieved from D's property list. C1I.W is a step function of STRE;\GTH vs. the amount of time some state has been i~ existence (Fig. 10). From this function a STREhrm is computed for S and bpcomes S's. predicted likelillood. If the STREVGTl1 turns out to be sufficiently low, a (TI: 3 ?ow) structure is predictively generated to make S's 101~ likelihood eQlicit. The STATE.DURATIOX inference thus acts as a cleansing filter on state infomation rd~icll is fed to various other inferehce processes.</Paragraph>
    <Paragraph position="31">  PRINCIPLE: bhny inferences can bc bsed solely on commonly sbserved or learned associations, rather than upon &amp;quot;logical&amp;quot; re: lations such as causation., motivation, and so forth. In a rough way, we can compare these inferences to the phenomenon of visual imagery which constructs a &amp;quot;picture&amp;quot; of a thdught ' s surrounding environment. Such inferences sllould be made in abundance.</Paragraph>
    <Paragraph position="32"> EMlPLES,:, **Andy's diaper is wet.</Paragraph>
    <Paragraph position="33">  (inference) Andy is a youngster. (FEITL!) '** Joh was on his way to a masquerade.</Paragraph>
    <Paragraph position="34"> (inference) John \&lt;as probably wearing a costumeD. (SITUATIOS)</Paragraph>
  </Section>
  <Section position="14" start_page="20" end_page="20" type="metho">
    <SectionTitle>
DISCUSSION:
</SectionTitle>
    <Paragraph position="0"> Jbny llassociativel' inferences can be made to produce nel*. features of' an object (or aspects of s situation) from known features.</Paragraph>
    <Paragraph position="1"> If something wags-its tail, it is probably an animal o'f some sort, if it bitesethe mailman's leg, ir.is probably a dog, if it has a gray beard and speaks, it is probably an old man, if it honks in a distinctive way, it is probably some sort of vehicle, etc. These classes are -?nT~ereiltl-y unstructured, so I will say no.more about them here, except that they frequently contribute fea-* tures which help clear up reference ambiguities and initial reference fail-. ures.</Paragraph>
    <Paragraph position="2">  5.13 CLASS 16 ; UTTERANCE INTENT INFERENCES PRINCIPLE : Based on the way a thought is conununicated (especially the often telling presence or absence of information) , inferences cm be made about the speakerf s reasons for speaking. LWLES : . .</Paragraph>
    <Paragraph position="3"> **Don't eat green gronks.</Paragraph>
    <Paragraph position="4"> (inference) Other kinds of gronks are probably edible **Mary threw out the rotten part of the fig.</Paragraph>
    <Paragraph position="5"> (inference) She threw it out because it was rotten.</Paragraph>
    <Paragraph position="6"> **John was unable to get an aspirin.</Paragraph>
    <Paragraph position="7"> (inference) John wanted to get. an aspirin.</Paragraph>
    <Paragraph position="8"> **Rita like thexhair, but it was green.</Paragraph>
    <Paragraph position="9"> (inference) Tl~e cae'% color is. a negative deature to. Rita (or the speaker).</Paragraph>
  </Section>
  <Section position="15" start_page="20" end_page="20" type="metho">
    <SectionTitle>
DISCUSSION:
</SectionTitle>
    <Paragraph position="0"> I have included this class only to represent the largely unexplored domain ot interences &amp;am from the way a thought is phrased. The 01 will eventualiy need an explicit model of conversation,. and this model will, incorporate inferences from this class. Typical of such inferences are t-hose, which translate the inclusion of ~eferentially superfluous features of an obj ect into an implied causality relation (the fig example) , those which infer desire from failure (the aspirin examplej those which infer features of an 0,rdinary X from features of special kinds of X . (the gronk exq?le), and so .forth. These issues will lead to a more,goal directed-model than J am current1.y exploring.</Paragraph>
    <Paragraph position="1"> 6, Summary of the ~nr'erence -- Component -1 ilavc 110~ sketched 16 inference classes \\rllich, I conjccturc, lie at the corc or thc hrni~lm intcrcnce reflex.</Paragraph>
    <Paragraph position="2"> The central 11ypotl1esi.s is t 11at a lnmlcin lxlg~iagc comprchc~rJer pcrl'o~n~s 1110 1-c suhoo~~sc ious comlmtnt ion. on 1 ;c:un i ng st ruc turcs tlim ally other thcory of lan~,u~~c comjlr-clrcns i on has vet ;isLlovlc;lgccl. \\hen the currcnt 01 is turncd loose, it vill often gcnercltc u;.\,.ll.ds of 100 infcrcnccs EUroc! a l'nir l'y I~;ulcll stimulus swh as &amp;quot;. Jolm .a\-c '.&gt;11*&gt; t!lC hooL.&amp;quot; Id~ilc host ;II-C i ~.ref-~'ut;il)le, tllcy :!rc for the lqost 1l:~l-t  ..yi it~ m~J.inc. anJ &amp;quot;~lint crest in~&amp;quot; to a. cr i t iknl i~ul!:ln olwcr~cr, :mrl :I re, :i Strl-</Paragraph>
    <Paragraph position="0"> tiic fact, tc-1. But C!IRII~C th~ contest ;~cd tl?~ 11:;n;ll !~cCo:omc~ sal icnt - cl-cll iruc ial - - i 1 11 i 1 i? cc ! 1- Gl11 ScC IlO otl'lcl' ;.ycnallis::. fol ltlilling contc'xtl~:il.inte~ictiop of inro~m;ltion th:~~ this s; on t ;incouS, subsonsc i ous g-op i ng .</Paragraph>
<Paragraph position="2"> Doubtless there are other interesting inference classes I have ignored or overlooked, but I feel the number is not large, and that other classes will submit to the same sorts of systematization as described here.</Paragraph>
<Paragraph position="4"> ... volitional actors. It is a current challenge to find such a restricted, yet interesting, domain to which these ideas can be transplanted and applied in full.  7. The Inference-Reference Relaxation Cycle in Conceptual Memory  It is not always possible immediately to identify the referent (concept or token in memory) of a language construction (noun group, pronoun, etc.). Yet an attentive listener seldom fails eventually to identify the intended referent, and he will seldom lose information because of the reference delay.</Paragraph>
<Paragraph position="5"> Furthermore, incorrect reference decisions are empirically few and far between.</Paragraph>
<Paragraph position="6"> I believe that these phenomena are intimately related to the inference reflex.</Paragraph>
<Paragraph position="7"> In the CM, initial reference attempts are made for concepts and tokens from descriptive sets -- collections of conceptual features gleaned from an utterance by Riesbeck's conceptual analyzer (R2). Fig. 11 illustrates the descriptive set for "the big red dog who ate the bird." Potential memory concept and token referents are identified by an intersection search procedure which locates memory objects whose features satisfy all the features of the descriptive set. Such a search results in one of three outcomes: (a) the unique identification of some memory entity, (b) a failure to locate any satisfactory entities, or (c) a set of candidates, one of which is the probable referent. Case (a) requires no decision, but (b) and (c) do.</Paragraph>
<Paragraph position="9"> In both cases (b) and (c), the CM creates a temporary token T and proceeds. In case (b), T receives the descriptive set itself; in case (c), where a set of candidates can be located, T receives the set of features lying in the intersection of all candidates' occurrence sets (this will be at least the descriptive set). In either case, the CM then has an internal token to work with, allowing the conceptual graph in which references to it occur to be tentatively integrated into the memory. The inference reflex I have described then generates all the various inferences, and eventually returns to its quiescent state. One byproduct of the inferencing is that each memory object involved in the original structures will emerge with a possibly enhanced occurrence set which may contain inferred information sufficient either (1) to identify the temporary token of category (b) above, or (2) to narrow the set of candidates associated with the temporary token of category (c) (hopefully to exactly one).</Paragraph>
<Paragraph position="11"> Fig. 11 shows an example of a descriptive set.</Paragraph>
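A minimal Python sketch of the intersection search and its three outcomes, with each entity's occurrence set reduced to a flat set of features (the entities and features below are invented):

    # Sketch of the reference intersection search. Each memory entity carries
    # an occurrence set, modeled here simply as a set of features.
    def reference_attempt(descriptive_set, memory):
        """Return ('identified', entity), ('failure', temp),
        or ('candidates', temp, candidate list)."""
        cands = [e for e in memory if descriptive_set <= memory[e]]
        if len(cands) == 1:                    # case (a): unique identification
            return ("identified", cands[0])
        temp = set(descriptive_set)            # temporary token's occurrence set
        if not cands:                          # case (b): total lookup failure
            return ("failure", temp)
        common = set.intersection(*(memory[c] for c in cands))
        temp |= common                         # at least the descriptive set
        return ("candidates", temp, cands)     # case (c)

    memory = {"JENNY-JONES": {"person", "named-Jenny", "loved-by-Bill"},
              "JENNY-SMITH": {"person", "named-Jenny", "in-France"}}
    print(reference_attempt({"person", "named-Jenny"}, memory))
    # ('candidates', {'person', 'named-Jenny'}, ['JENNY-JONES', 'JENNY-SMITH'])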
<Paragraph position="13"> Thus, when the inference reflex has ceased, the CM re-applies the reference intersection algorithms to each unidentified token to seek out any inference-clarified references. Successful identifications at this point result in the merging (by the same structure merger mentioned earlier) of the temporary token's occurrence set with the identified token's occurrence set, thus preserving all information collected to that point about the temporary token. (Implicit in the merge operation is the substitution of all references to the temporary token by references to the identified one.) If, on the other hand, the results of inferencing serve only to narrow the candidate set of case (c), the occurrence sets of the remaining candidates are re-intersected, and (if this increases the size of the feature set) the set is reattached to the temporary token. In either case progress has been made.</Paragraph>
<Paragraph position="14"> Now comes a key point. If any referents were in fact identified on this second attempt (making their entire occurrence sets accessible), or if any candidate-set decrease caused new features to be associated with the temporary tokens, then there is the possibility that more inferences (which can make use of the newly accessible features) can be made. The CM thus re-applies the inference reflex to all memory structures which were produced on the first pass. (The monitor is conditioned not to duplicate work already done on the first pass.) But a potential byproduct of the second pass is further feature generation, which can again restrict candidate sets or produce positive identifications. This inference-reference interaction can proceed until no new narrowings or identifications occur; hence the term "relaxation cycle." Fig. 12 illustrates two examples of this phenomenon which are handled by the current CM, and Appendix B contains the computer trace of the second example.</Paragraph>
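The whole cycle amounts to a fixed-point loop. The toy Python sketch below loosely mirrors the Jenny example of Appendix B; the token, candidate, and rule representations are invented simplifications of the CM's structures:

    # Toy sketch of the inference-reference relaxation cycle. A temporary
    # token accumulates features; inference and reference re-attempts
    # alternate until no new identifications or narrowings occur.
    def relaxation_cycle(token, candidates, rules):
        """token: feature set of the temporary token.
        candidates: {entity: occurrence set}.  rules: [(premise set, new feature)]."""
        identified, progress = None, True
        while progress:
            progress = False
            for premises, conclusion in rules:        # inference pass
                if premises <= token and conclusion not in token:
                    token.add(conclusion)
                    progress = True
            if identified is None and candidates:     # reference re-attempt
                known = set().union(*candidates.values())
                live = {e: f for e, f in candidates.items()
                        if (token & known) <= f}
                if len(live) == 1:
                    identified, occurrence_set = next(iter(live.items()))
                    token |= occurrence_set           # merge occurrence sets
                    progress = True                   # new features now usable
                else:
                    candidates = live
        return identified, token

    cands = {"JENNY-JONES": {"person", "jenny", "in-Palo-Alto", "loved-by-Bill"},
             "JENNY-SMITH": {"person", "jenny", "in-France"}}
    rules = [({"kissed-by-John"}, "in-Palo-Alto"),    # location inference
             ({"loved-by-Bill", "kissed-by-John"}, "Bill-angry-at-John")]
    print(relaxation_cycle({"person", "jenny", "kissed-by-John"}, cands, rules))

On the first pass the inferred location narrows the candidates to one; identification then exposes "loved-by-Bill", which licenses the second-pass anger inference -- the same two-pass behavior traced in Appendix B.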
<Paragraph position="15"> [Fig. 12: two examples of the inference-reference relaxation cycle. Example 1 begins "Andy Rieger is a youngster"; in the second example, the second inference pass generates more new information, but this time about #PETE1.]</Paragraph>
<Paragraph position="17"> 8. Word Sense Promotion and Implicit Concept Activation in the Conceptual Memory  Another byproduct of the generation of an abundance of probabilistic conceptual patterns from each input is that many related concepts and tokens implicitly involved in the situation are activated, or "touched." This can be put to use in two ways.</Paragraph>
<Paragraph position="18"> First, implicitly touched concepts can clarify what might otherwise be an utterly opaque subsequent reference. If, for instance, someone says (outside of a particular context) "The nurses were nice", you will probably inquire "What nurses?" If, on the other hand, someone says "John was run over by a milk truck. When he woke up the nurses were nice", you will experience neither doubt about the referents of "the nurses", nor surprise at their mention. I presume that a subconscious filling-out of the situation "John was run over by a milk truck" implicitly activates an entire set of conceptually relevant concepts, "precharging" ideas of hospitals and their relation to patients.</Paragraph>
<Paragraph position="19"> Other theories founded more on concept associationism than conceptual inference have suggested that such activation occurs through word-word or concept-concept free associations (see (A2) and (91) for instance). While these more direct associations play an undoubted role in many language functions, it is my belief that these straight associative phenomena are not fundamentally powerful enough to explain the kind of language behavior underlying the nurse example.</Paragraph>
<Paragraph position="20"> It is more often than not the "gestalt" meaning context of an utterance which restricts the kinds of meaningful associations a human makes.</Paragraph>
<Paragraph position="21"> In contrast to the nurse example above, most people would agree that the reference to "the nurses" in the following situation is a bit peculiar: In the dark of the night, John had wallowed through the knee-deep mud to the north wall of the deserted animal hospital.</Paragraph>
    <Paragraph position="22"> The nurses were nice.</Paragraph>
<Paragraph position="23"> A simple hospital-nurses association model cannot account for this. On the other hand, those concepts touched by the more restrictive conceptual inference patterns would presumably be quite distant from the medical staff of a hospital in this example, thus explaining the incongruity. Related to this idea of concept activation through conceptual inference structures is another mechanism which, I presume, underlies a comprehender's ability to select (almost unerringly) the proper senses of words in context during the linguistic analysis of each utterance.</Paragraph>
<Paragraph position="24"> This mechanism is frequently called word sense promotion, and its exact nature is one of the major mysteries of language analysis. It underlies our ability to avoid -- almost totally -- backing up to reinterpret words. It is as though at each moment during our comprehension we possess a dynamically shifting predisposition toward a unique sense of just about any word we are likely to hear next. Fig. 13 contains some illustrations of this phenomenon.</Paragraph>
<Paragraph position="25"> I have only a thought (which I plan to develop) on this issue. At each instant in the CM, there is a powerful inference momentum which is the product of conceptual inferences. Obviously, the concepts which the inference patterns touch will correspond to senses of words. These senses can be "promoted" in the same way implicit activation promotes certain referents. This is a partial explanation of word sense promotion. Suppose, however, that in addition the CM had an independent parallel process which took each inference as it arose and mapped it back into a near-language "proto-sentence": a linear sequence of concepts which is almost a sentence of the language, except that the actual word realizates of each concept have not yet been chosen.</Paragraph>
<Paragraph position="26"> In other words, a generation process (see (G1) for example) would be applied to each inference, but would be stopped short of the final lexical substitutions of word senses.</Paragraph>
<Paragraph position="27"> By precharging all the senses of the various words which could be substituted into such a proto-sentence, the CM would have a word sense "set" which would be a function of the kind of restrictive inferential context which I feel is so vital to the process of analysis.</Paragraph>
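A very rough Python sketch of this precharging idea follows; the lexicon, the concept names, and the counting scheme are all invented, and the mechanism itself is proposed only speculatively above:

    # Speculative sketch: map each inference back into a proto-sentence (a
    # linear sequence of concepts, generation stopped short of lexical
    # choice), then precharge every word sense that could realize a concept.
    LEXICON = {                  # concept -> candidate word senses (invented)
        "#INGEST": ["eat.1", "drink.2", "take.4"],
        "#MEDICATION": ["aspirin.1", "pill.1", "drug.2"],
        "#RELIEF": ["relief.1", "help.3"],
    }

    def precharge(proto_sentences):
        """Raise a predisposition score for every sense realizing any concept."""
        charge = {}
        for proto in proto_sentences:        # one proto-sentence per inference
            for concept in proto:
                for sense in LEXICON.get(concept, ()):
                    charge[sense] = charge.get(sense, 0) + 1
        return charge

    # Inferences from "John wanted an aspirin", already mapped to concepts:
    print(precharge([["#INGEST", "#MEDICATION"], ["#MEDICATION", "#RELIEF"]]))
    # {'eat.1': 1, 'drink.2': 1, 'take.4': 1, 'aspirin.1': 2, 'pill.1': 2,
    #  'drug.2': 2, 'relief.1': 1, 'help.3': 1}

A lexical analyzer could then prefer the highest-charged sense of each incoming word, modeling the dynamically shifting predisposition described above.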
<Paragraph position="28"> This idea is obviously computationally exorbitant, but it might model a very real mechanism. We often catch ourselves subvocalizing what we expect to hear next (especially when listening to an annoyingly slow speaker), and this is tantalizing evidence that something like a proto-sentence generator is thrashing about upstairs.</Paragraph>
<Paragraph position="29"> [Fig. 13: Mapping inferences back into proto-sentences -- the various structures by which each inference might be expressed in language -- activating many alternative word senses.]</Paragraph>
<Paragraph position="30"> 9. Conclusion  Any theory of language must also be a theory of inference and memory. It does not appear to be possible to "understand" even the simplest of utterances in a contextually meaningful way in a system in which language fails to interact with a language-free memory and belief system, or in a system which lacks a spontaneous inference reflex.</Paragraph>
<Paragraph position="31"> One very important theoretical issue concerns exactly how much "inference energy" is expended before the fact (prediction, expectation) versus how much is expended after the fact to clear up specific problems of how the utterance fits the context. My belief is that there is a great deal of exploratory, essentially undirected inferencing which is frequently overlooked, and which cannot be repressed because it is the language-related manifestation of the much broader motivational structure of the brain. Rather than argue at an unsubstantiatable neurophysiological level, I have compiled evidence for this hypothesis within the domain of language. I believe, however, that spontaneity of inference pervades all other modes of perception as well, and that quantity -- as much as quality -- of spontaneous inference is a necessary requirement for general intelligence.</Paragraph>
  </Section>
  <Section position="17" start_page="20" end_page="20" type="metho">
    <SectionTitle>
APPENDIX A: CAUSAL CHAIN EXPANSION COMPUTER EXAMPLE
</SectionTitle>
<Paragraph position="0"> John propelled his hand toward Bill. John's hand came into physical contact with Bill. Because it was propelled, the physical contact was probably forceful. Bill probably suffered a negative change in physical state. Because Bill suffered a negative change, and Mary felt a negative emotion toward Bill at the time, Mary might have experienced a positive change in joy. Because Mary may have experienced this positive change, and because it was John whose action indirectly caused her positive change, she might feel a positive emotion toward John.  This is the partially integrated memory structure, after references have been established. No reference ambiguity is assumed to exist for this example.</Paragraph>
<Paragraph position="1"> C0035 is the resulting memory structure for this utterance.</Paragraph>
<Paragraph position="2"> -------- We suppress all but this structure on the starting inference queue. (We will be seeing about one fourth of the original trace output for this example.)</Paragraph>
<Paragraph position="3"> Here, the CAUSE inference molecule is injecting the two subconceptualizations, A and B in Fig. 1, into the inference stream.</Paragraph>
<Paragraph position="4"> The causal structure of this conceptualization indicated that a path should be found relating structure C to structure D in Fig. 1. This is noted. C0024 corresponds to C, C0032 to D.</Paragraph>
<Paragraph position="5"> -------- Here, the causative inference that Mary's kissing was probably caused by her feeling a positive emotion toward John is made.</Paragraph>
<Paragraph position="0"> Here, because Mary was feeling a negative emotion toward Bill at the time when Bill underwent a small NEGCHANGE, the prediction can be made that Mary may have experienced a degree of joy.</Paragraph>
<Paragraph position="1"> Looking back along the causal path which led to Mary's likely change in joy, the POSCHANGE inference molecule discovers that it was an action on John's part which was most directly responsible for her joy. The inference that Mary might have started feeling a positive emotion toward John is made.</Paragraph>
<Paragraph position="2"> -------- As this last inference is made, the inference evaluator notices that the same information exists elsewhere in the memory. This is a point of contact in inference space. It is furthermore noticed that the two MFEEL structures join a causal path between two structures which have been related causally by language. The two MFEEL structures are merged into one, and this event is noted as a causal chain expansion. To the left, C0068 and C0039 are the contact points; C0024 and C0032 are the two structures which have now been causally related.</Paragraph>
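The bookkeeping behind such a point of contact can be sketched in a few lines of Python; reducing structure identity to tuple equality and STRENGTH merging to a maximum are invented simplifications of what the CM actually does:

    # Toy sketch of point-of-contact detection during inferencing: a newly
    # inferred structure that already exists in memory is merged rather than
    # duplicated, joining the causal paths that produced each copy.
    def note_inference(structure, strength, memory, reasons):
        """memory: {structure: (strength, reason list)}; reasons: causal parents."""
        if structure in memory:                   # point of contact!
            old_strength, old_reasons = memory[structure]
            memory[structure] = (max(old_strength, strength),
                                 old_reasons + reasons)
            return "contact"                      # causal chain expansion possible
        memory[structure] = (strength, reasons)
        return "new"

    memory = {}
    mfeel = ("MFEEL", "#MARY1", "#POSEMOTION", "#JOHN1")
    note_inference(mfeel, 0.6, memory, ["causative: Mary kissed John"])
    print(note_inference(mfeel, 0.75, memory,
                         ["resultative: John's act pleased Mary"]))
    # 'contact' -- the two causal paths now meet at a single MFEEL structure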
<Paragraph position="3"> --------</Paragraph>
<Paragraph position="4"> Inference proceeds, and finally stops. At that point, we took a look at the structures lying along this explained causal path.</Paragraph>
<Paragraph position="5"> C0024 is the original PROPEL structure; C0032 is the PHYSCONT (kiss) structure. The service function CAUSALPATH will track down the causal linkage for us. The causal chain consists of the six structures to the left.  -------- This is the original PROPEL. During the process, but not shown, C0048 was detected as unspecified, and filled in as John's hand. Notice on the REASONS and OFFSPRING sets the results of other inferencing which was not discussed above.</Paragraph>
  </Section>
  <Section position="19" start_page="20" end_page="20" type="metho">
    <SectionTitle>
OFFSPRING:
</SectionTitle>
<Paragraph position="0"> Here is the FORCECONT which was inferred from the PROPEL.</Paragraph>
<Paragraph position="1"> This is Bill's likely (small) change in PSTATE which resulted from the FORCECONT. This is the important inference that Bill's NEGCHANGE may have caused a small degree of happiness in Mary. Notice that one of the REASONS was assumed to be the MFEEL structure recording Mary's negative emotion toward Bill at the time.  Here, Mary is feeling a positive emotion toward John, whose action indirectly caused her joy: C0068: (MFEEL #MARY1 #POSEMOTION #JOHN1). This structure is the point of contact, and is the structure which resulted from the merge; notice its STRENGTH. Its ASET includes C0055: (WANT #JOHN1 ...).</Paragraph>
<Paragraph position="2"> This WANT is a prediction that one reason Mary may have kissed John is so that he would know she felt a positive emotion toward him. This MLOC represents the inference that John probably now knows that Mary MFEELS a positive emotion toward him.</Paragraph>
  </Section>
  <Section position="20" start_page="20" end_page="20" type="metho">
    <SectionTitle>
APPENDIX B
INFERENCE-REFERENCE RELAXATION CYCLE, COMPUTER EXAMPLE
</SectionTitle>
<Paragraph position="0"> This computer example illustrates reference-inference interaction (two inference passes). Hearing the input "Bill saw John kiss Jenny.", MEMORY is unable to decide upon the referent of "Jenny": it could be Jenny Jones or Jenny Smith. MEMORY therefore creates a temporary token having as features all the common features of Jenny Jones and Jenny Smith. By inference, MEMORY is able to decide upon Jenny Jones. At that point, the temporary token is merged into the concept for Jenny Jones, and a second pass of inferencing is initiated.</Paragraph>
<Paragraph position="1"> However, on the second pass a new inference arises: because Bill loves Jenny Jones, and he saw John kiss her, he (probably) became angry at John. This inference was not triggered on the first inference pass because being loved by Bill was not a common feature of both Jennys, and hence not accessible then (i.e., it had not been copied to the temporary token's occurrence set).</Paragraph>
<Paragraph position="2"> The example begins with a few lines to set the scene for MEMORY. Inferencing on these setup lines (which is normally spontaneous) has been suppressed for the sake of simplicity in this example.</Paragraph>
<Paragraph position="3"> This example illustrates reference-inference interaction. That is, MEMORY is unable to establish a reference, so it creates a temporary token and proceeds with inference. Inferencing generates new information which solves the reference, so more inferencing can be undertaken. Moreover, because features of the referent are accessible on the second inference pass,</Paragraph>
<Paragraph position="4"> new inferences are possible. -------- To the left, MEMORY is reading in some information which is relevant to this demonstration. Each of these inputs would normally produce inferences as it is processed, but inferencing has been suppressed for the first four sentences of this example. The four sentences are shown with their partial integrations and final structures (C0002 through C0011).</Paragraph>
<Paragraph position="5"> The synopsis of this short plot is as follows: There are two Jennys, Jenny Jones and Jenny Smith. Bill loves Jenny Jones. John and Jenny Jones were in Palo Alto yesterday; Jenny Smith was in France yesterday. The climax comes when Bill sees John kiss Jenny. It is MEMORY's job to figure out which Jenny. MEMORY will decide upon Jenny Jones, then re-inference and infer that Bill probably got angry at John -- something which wouldn't have happened if Bill had seen John kiss Jenny Smith. To the left, the climax line is in the process of being read and internalized. Its final structure is C0031. Notice that C0015 was created to stand for some Jenny, and that all common features of the two Jennys have been attached to it.</Paragraph>
<Paragraph position="6"> This is the person named Jenny who Bill saw yesterday, and who John kissed. C0012</Paragraph>
  </Section>
  <Section position="21" start_page="20" end_page="20" type="metho">
    <SectionTitle>
ASET :
</SectionTitle>
<Paragraph position="0"> is the token representing John's lips, which were in *PHYSCONT* with this person named Jenny; C0023 is the *LOOKAT* structure for Bill's seeing. -------- MEMORY begins inferencing from this input. The starting inference queue consists of the main structure for the sentence, together with all other facts known about C0015: in particular, the facts that C0015 is a person, and that its name is Jenny. These will not be of use in this example. All other subpropositions have been suppressed from the starting inference queue for this example. One inference from Bill's seeing this event is that he knows that the event occurred.</Paragraph>
<Paragraph position="2"> That is, the event went from his eyes to his conscious processor, C0021.</Paragraph>
<Paragraph position="3"> To the left, the inference that Bill knows about John's kissing Jenny is being generated: information in Bill's CP (C0021) will also enter his LTM, C0040. This fact will be of use during the second pass of inferencing (after MEMORY decides that C0015 is Jenny Jones).</Paragraph>
<Paragraph position="4"> Another inference arises from John's lips being in PHYSCONT with C0015: that John feels a positive emotion toward C0015. The structure representing this inference is C0049. Another inference from John's kissing action is that C0015 knows that John feels a positive emotion toward C0015; C0051 is C0015's LTM. This inference will be of no direct consequence in this example.</Paragraph>
<Paragraph position="5"> MEMORY also infers from John's kissing C0015 that John and C0015 shared the same location at the event time, C0011 (yesterday). Since MEMORY knows that John was in Palo Alto, and has no information concerning C0015's location, MEMORY infers that C0015 was also in Palo Alto yesterday. This information will solve the reference ambiguity in the post-scan reference attempt. During the second pass of inferencing, the fact that Bill saw John kiss C0015 leads to the inference that Bill knows that John feels a positive emotion toward C0015. This inference type implements the principle that if a person knows X, he is also likely to know the inferences which can be drawn from X. That is, MEMORY assumes that other people possess the same inference powers as it does.</Paragraph>
  </Section>
</Paper>