<?xml version="1.0" standalone="yes"?>
<Paper uid="J79-1021">
  <Title>RECENT COMPUTER SCIENCE RESEARCH IN NATURAL LANGUAGE PROCESSING</Title>
  <Section position="4" start_page="0" end_page="0" type="metho">
    <SectionTitle>
2. LEVEL AND DOMAIN
</SectionTitle>
    <Paragraph position="0"> Current AI programs for language processing are organized by level and restricted to specified domains. This section presents those ideas and comments on the limitations that they entail.</Paragraph>
    <Paragraph position="1"> Three principal levels of language-processing software are: 1. &amp;quot;Lexical&amp;quot; (allowed vocabulary); 2. &amp;quot;Syntactic&amp;quot; (allowed phrases or sentences); 3. &amp;quot;Semantic&amp;quot; (allowed meanings). In practice all these levels must operate many times for the computer to interpret even a small portion, say two words, of restricted natural-language input. Programs that perform operations on each level ask, respectively: 1. Is the word in a table? 2. Is the word string acceptable grammatically? 3. Is the word string acceptable logically? A program to detect &amp;quot;meaning&amp;quot; (logical consequences of word interpretations) must also perform grammatical operations for certain words to determine a part of speech (noun, verb, adjective, etc.). One method makes a tentative assignment, parses, then tests for plausibility via consistency with known facts. To reduce the complexity of this task, the designer limits the subset of language allowed or the &amp;quot;world&amp;quot; (i.e. the subject) discussed. The word &amp;quot;domain&amp;quot; sums up this concept; other terms for &amp;quot;restricted domain&amp;quot; are &amp;quot;limited scope of discourse&amp;quot;, &amp;quot;narrow problem domain&amp;quot;, and &amp;quot;restricted English framework&amp;quot;. The limitation of vocabulary or context constrains the lexicon and semantics of the &amp;quot;language&amp;quot;.</Paragraph>
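The level-by-level organization just described can be sketched as three successive acceptance tests. The tiny vocabulary, grammar patterns, and fact table below are invented stand-ins for a real system's tables, not data from any program surveyed here:

```python
# A minimal sketch of level-by-level processing: each level filters the
# input further. All tables below are illustrative only.
LEXICON = {"blocks": "NOUN", "stack": "VERB", "red": "ADJ", "the": "DET"}
GRAMMAR = {("VERB", "DET", "ADJ", "NOUN"), ("VERB", "DET", "NOUN")}
KNOWN_FACTS = {("stack", "blocks")}   # (action, object) pairs the "world" allows

def lexical_ok(words):
    """Level 1: is every word in the table?"""
    return all(w in LEXICON for w in words)

def syntactic_ok(words):
    """Level 2: is the part-of-speech string an allowed pattern?"""
    return tuple(LEXICON[w] for w in words) in GRAMMAR

def semantic_ok(words):
    """Level 3: is the verb-object pair logically acceptable?"""
    verb = next(w for w in words if LEXICON[w] == "VERB")
    noun = next(w for w in words if LEXICON[w] == "NOUN")
    return (verb, noun) in KNOWN_FACTS

def understand(sentence):
    words = sentence.lower().split()
    return lexical_ok(words) and syntactic_ok(words) and semantic_ok(words)

print(understand("stack the red blocks"))   # passes all three levels: True
print(understand("stack the red stack"))    # fails the syntactic level: False
```

Note that the semantic test only runs after the lexical and syntactic tests pass, mirroring the repeated level-entry described in the text.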
    <Paragraph position="2"> The trend in the design of software for &amp;quot;natural-language understanding&amp;quot; is to deal with (a) a specialized vocabulary, and (b) a particular context or set of allowed interpretations in order to reduce processing time.</Paragraph>
    <Paragraph position="3"> Although computing results for several highly specialized problems [e.g. 7, 23] are impressive examples of language processing in restricted domains, they do not answer several key concerns.</Paragraph>
    <Paragraph position="4">  1. Do specialized vocabularies have sufficient complexity to warrant comparison with true natural language? 2. Are current &amp;quot;understanding&amp;quot; programs, organized by level and using domain restriction, extendable to true natural language? The realities are severe. Syntactic processing is interdependent with meaning and involves the allowed logical relationships among words in the lexicon. Most natural-language software is highly developed at the &amp;quot;syntactic&amp;quot; level. However, the number of times the &amp;quot;syntactic&amp;quot; level must be entered can grow explosively as the &amp;quot;naturalness&amp;quot; of the language to be processed increases. Success on artificial domains cannot imply a great deal about processing truly natural language.</Paragraph>
  </Section>
  <Section position="5" start_page="0" end_page="0" type="metho">
    <SectionTitle>
3. PROGRAM SYSTEMS
</SectionTitle>
    <Paragraph position="0"> The systems cited in this section answer questions, perform commands, or conduct dialogues.</Paragraph>
    <Paragraph position="1"> Programs that enable a user to execute a task via computer in an on-line mode are generally called &amp;quot;interactive&amp;quot;. Some systems are so rich in their language-processing capability that they are called &amp;quot;conversational&amp;quot;. Systems that have complicated capabilities and can reply with a sophisticated response to an inquiry are called &amp;quot;question answering&amp;quot;. The survey [1] discusses two &amp;quot;conversational&amp;quot; programs: ELIZA [2, 3] and STUDENT [4], which answers questions regarding algebraic word problems. SIR [5] answers questions about logic. Both [4] and [5] appear in [6]; the introduction there provides a general discussion of &amp;quot;semantic information&amp;quot; and computer programs involving &amp;quot;semantics&amp;quot;. The &amp;quot;question-answering&amp;quot; program systems described in [2-5] were sophisticated mainly in methods of solving a problem or determining a response to a statement. Other systems have emphasized the retrieval of facts encoded in English.</Paragraph>
    <Paragraph position="2"> The &amp;quot;blocks-world&amp;quot; system described in [71 contrasts with these in that it has sophisticated language-processing capability It infers antecedents of pronouns and resolves ambiguities in input word strings regarding blocks on a table.</Paragraph>
    <Paragraph position="3"> The distinction between &amp;quot;interactive&amp;quot;, &amp;quot;conversational&amp;quot;, and &amp;quot;question-answering&amp;quot; is less important when the blocks-world is the domain. The computer-science contribution is a program to interact with the domain as if it could &amp;quot;understand&amp;quot; the input, in the sense that it takes the proper action even when the input is somewhat ambiguous. To resolve ambiguities the program refers to existing relationships among the blocks.</Paragraph>
    <Paragraph position="4"> The effect of [7] was to provide a sophisticated example of computer &amp;quot;understanding&amp;quot; which led to attempts to apply similar principles to speech inputs.</Paragraph>
    <Paragraph position="5"> (More detail on parallel developments in speech processing is presented later.) The early &amp;quot;language-understanding&amp;quot; systems, BASEBALL [9], ELIZA, and STUDENT, were based on two special formats: one to represent the knowledge they store and one to find meaning in the English input.</Paragraph>
    <Paragraph position="6"> They discard all input information which cannot be transformed for internal storage. The comparison of ELIZA and STUDENT in [1] is with regard to the degree of &amp;quot;understanding&amp;quot;. ELIZA responds either by transforming the input sentence (essentially mimicry) following isolation of a key word or by using a prestored content-free remark. STUDENT translates natural-language &amp;quot;descriptions of algebraic equations, ... proceeds to identify the unknowns involved and the relationships which hold between them, and (obtains and solves) a set of equations&amp;quot; [1, p. 85]. Hence ELIZA &amp;quot;understands&amp;quot; only a few key words; it transforms these words via a sentence-reassembly rule, discards other parts of the sentence, and adds stock phrases to create the response. STUDENT solves the underlying algebraic problem--it &amp;quot;understands&amp;quot; in that it &amp;quot;answers questions based on information contained in the input&amp;quot; [4, p. 135]. ELIZA responds but does not &amp;quot;understand&amp;quot;, since the reply has little to do with the information in the input sentence, but rather serves to keep the person in a dialogue.</Paragraph>
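The keyword-and-reassembly mechanism attributed to ELIZA above can be illustrated with a toy script. The key words, templates, and stock remark below are invented for illustration and are far simpler than Weizenbaum's actual script:

```python
import re

# Toy illustration of ELIZA-style response generation: isolate a key word,
# apply a sentence-reassembly rule to the matched fragment, discard the
# rest, and fall back to a content-free stock remark. Rules are invented.
REASSEMBLY_RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Your {0} seems important to you."),
]
STOCK_REMARK = "Please go on."

def respond(sentence):
    for pattern, template in REASSEMBLY_RULES:
        match = pattern.search(sentence)
        if match:                         # key word found: reassemble
            return template.format(*match.groups())
    return STOCK_REMARK                   # no key word: content-free reply

print(respond("I am unhappy about my job"))   # first matching rule wins
print(respond("The weather is nice"))         # no key word: stock remark
```

The sketch makes the survey's point concrete: the response uses almost none of the information in the input, yet keeps the dialogue going.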
    <Paragraph position="7"> Programs with an ability to spout back similar to ELIZA's usually store a body of text and an indexing scheme into it.</Paragraph>
    <Paragraph position="8"> This approach has obvious limitations and was replaced by systems that use a formal representation to store limited logical concepts associated with the text. One of them is SIR, which can deduce set relationships among objects described by natural language. SIR is designed to meet the requirement that &amp;quot;in addition to echoing, upon request, the facts it has been given, a machine which 'understands' must be able to recognize the logical implications of those facts. It also must be able to identify (from a large data store) facts which are relevant to a particular question&amp;quot; [5].</Paragraph>
    <Paragraph position="9"> Limited-logic systems are important because they provide methods to represent complex facts encoded in English-language statements so that the facts can be used by computer programs or accessed by a person who did not input the original textual statement of the fact. Such a second user may employ a completely different form of language encoding. Programs of this sort include DEACON [10, 11] and the early version of CONVERSE [12]. The former could &amp;quot;handle time questions&amp;quot; and used a bottom-up analysis method which allowed questions to be nested. For example, the question &amp;quot;Who is the commander of the battalion at Fort Fubar?&amp;quot; was handled by first internally answering the question &amp;quot;What battalion is at Fort Fubar?&amp;quot; The answer was then substituted directly into the original question to make it &amp;quot;Who is the commander of the 69th battalion?&amp;quot; which the system then answered. [7, p. 37] CONVERSE contained provisions for allowing even more complex forms of input questions. (Recent versions are described in [13-15].) Deductive systems can be divided into general systems, which add a first-order predicate-calculus theorem-proving capability to limited-logic systems to improve the complexity of the facts they can &amp;quot;infer&amp;quot;, and procedural systems, which enable other computations to obtain complex information. The theorem-proving capability is designed to work from a group of logical statements given as input (or statements consistent with these input statements). However, facts INCONSISTENT with the original statements cannot always be detected, and deductive systems quickly become impractical as the number of input statements (elementary facts, axioms) becomes large [6, 7, 16], since the time to obtain a proof grows to an impractical length. Special programming languages (e.g.
QA4 [17, 18], PLANNER [20, 21]) have added strategy capabilities and better methods of problem representation to reduce computing time to practical values. QA4 (seeks) to develop natural, intuitive representations of problems and problem-solving programs.</Paragraph>
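The bottom-up nested-question handling attributed to DEACON above (answer the inner question first, substitute the answer into the outer question) can be sketched as follows. The fact table, phrase patterns, and the commander's name are all hypothetical:

```python
# Sketch of bottom-up nested-question answering in the style described for
# DEACON. The facts and string patterns are invented for illustration.
FACTS = {
    ("battalion-at", "Fort Fubar"): "the 69th battalion",
    ("commander-of", "the 69th battalion"): "Col. Smith",   # hypothetical name
}

def answer(question):
    # Step 1: answer the innermost question ("What battalion is at <place>?").
    if "the battalion at " in question:
        place = question.split("the battalion at ")[1].rstrip("?")
        inner_answer = FACTS[("battalion-at", place)]
        # Step 2: substitute the answer directly into the original question.
        question = question.replace("the battalion at " + place + "?",
                                    inner_answer + "?")
    # Step 3: answer the now-unnested outer question.
    if question.startswith("Who is the commander of "):
        unit = question[len("Who is the commander of "):].rstrip("?")
        return FACTS[("commander-of", unit)]
    raise ValueError("question form not recognized")

print(answer("Who is the commander of the battalion at Fort Fubar?"))
```

String matching stands in here for DEACON's actual bottom-up parse; only the substitution strategy is the point.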
    <Paragraph position="10"> (The user can) blend ... procedural and declarative information that includes explicit instructions, intuitive advice, and semantic definitions. [17] However, there is currently no body of evidence regarding the effectiveness of the programs written in this programming language or related ones on problem-solving tasks in general or &amp;quot;language understanding&amp;quot; in particular. There is a need for experimental evaluation of the strategies that the programming language permits for &amp;quot;language understanding&amp;quot; problems. Procedural deductive systems facilitate the augmentation of an existing store of complex information. Usually systems require a new set of subprograms to deal with new data: each change in a subprogram may affect more of the other subprograms. The structure grows more awkward and difficult to generalize. ... Finally, the system may become too unwieldy for further experimentation. [5, p. 91]</Paragraph>
    <Paragraph position="11"> In procedural systems the software is somewhat modular. In [19] &amp;quot;semantic primitives&amp;quot; were assumed to exist as LISP subroutines. PLANNER [20] allows complex information to be expressed as procedures without requiring user involvement with the details of interaction among procedures (but [21] reports some second thoughts).</Paragraph>
    <Paragraph position="12"> The work of many other groups could be added to this survey. Recent work on REL, building on [10, 11], is reported in [36, 37]; [24, 25] are relevant collections; and [26] is a survey paper.</Paragraph>
  </Section>
  <Section position="6" start_page="0" end_page="0" type="metho">
    <SectionTitle>
4. DEDUCTION
</SectionTitle>
    <Paragraph position="0"> In all of the program systems described thus far, &amp;quot;language understanding&amp;quot; depends on the &amp;quot;deductive capabilities&amp;quot; of the *Some experiments on problem-solving effectiveness of special programming languages in another context appear in [22].</Paragraph>
    <Paragraph position="1"> program, that is, its ability to &amp;quot;infer&amp;quot; facts and relationships from given statements. In some cases deduction involves discerning structure in a set of facts and relationships.</Paragraph>
    <Paragraph position="2"> This section describes how &amp;quot;understanding&amp;quot; programs themselves are structured and how that structure limits their capability for general deduction.</Paragraph>
    <Paragraph position="3"> Theorem-proving programs use an inference rule illustrated in [23, p. 61] to deduce new knowledge. A formal succession of logical steps called resolutions leads to the new fact. The example there begins with P1 - P4 given:</Paragraph>
    <Paragraph position="5"> P1 if x is part of z and z is part of y, then x is part of y; P2 a finger is part of a hand; P3 a hand is part of an arm; P4 an arm is part of a man. A proof that P9 a finger is part of a man is derived by steps, such as combining P1 and P2 to get P6 if a hand is part of y, then a finger is part of y. Unfortunately, it is easy to move outside the domain where the computer can make useful deductions, and the formal resolution process is extremely lengthy and thus prohibitively costly in computer time. In [31, 32] it is shown that some statements (&amp;quot;Who did not write ---?&amp;quot;) are unanswerable and that there is no algorithm which can detect whether a question stated in a zero-one logical form can be answered. Hence theorem proving is not essential to &amp;quot;deduction&amp;quot;, and &amp;quot;understanding&amp;quot; systems, natural or artificial, must rely on other techniques, e.g., outside information such as knowledge about the domain.</Paragraph>
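The deduction above amounts to computing the transitive closure of the part-of relation. The sketch below forward-chains over fact pairs rather than performing clause resolution proper, but it derives the same fact P9 from P2-P4 under the transitivity axiom P1:

```python
# Forward-chaining sketch of the part-of deduction above. This is not
# resolution itself: it simply applies P1 (if x is part of z and z is part
# of y, then x is part of y) until no new facts appear.
def deduce(facts):
    """Return the transitive closure of a set of (part, whole) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (x, z1) in list(derived):
            for (z2, y) in list(derived):
                if z1 == z2 and (x, y) not in derived:
                    derived.add((x, y))     # one application of axiom P1
                    changed = True
    return derived

given = {("finger", "hand"),   # P2
         ("hand", "arm"),      # P3
         ("arm", "man")}       # P4
print(("finger", "man") in deduce(given))   # P9 follows: True
```

Even this tiny closure computation is quadratic in the number of derived pairs per pass, which hints at why proof search becomes impractical as the axiom set grows.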
    <Paragraph position="6"> In most &amp;quot;understanding&amp;quot; programs, information on a primitive level of processing can be inaccurate; for example, the identification of a sound string &amp;quot;blew&amp;quot; can be inaccurately &amp;quot;blue&amp;quot;. Subsequent processing levels combine identified primitives. If parts of speech are concerned, the level is syntactic; if meaning is involved, &amp;quot;semantic&amp;quot;; if domain is involved, the level is that of the &amp;quot;world&amp;quot;. Each level can be an aid in a deductive process, leading to &amp;quot;understanding&amp;quot; an input segment of language. Programs NOW EXIST which operationally satisfy most of the following points concerning &amp;quot;understanding&amp;quot; in narrow domains (emphasis has been added): Perhaps the most important criterion for understanding a language is the ability TO RELATE THE INFORMA-</Paragraph>
  </Section>
  <Section position="7" start_page="0" end_page="0" type="metho">
    <SectionTitle>
TION CONTAINED in a sentence TO KNOWLEDGE PREVIOUSLY
ACQUIRED. This IMPLIES HAVING SOME KIND OF MEMORY
STRUCTURE IN WHICH THE INTERRELATIONSHIPS OF VARIOUS
PIECES OF KNOWLEDGE ARE STORED AND INTO WHICH NEW
INFORMATION MAY BE FITTED ... The memory structure
</SectionTitle>
    <Paragraph position="0"> in these programs may be regarded as semantic, cognitive, or conceptual structures ... these programs can make statements or answer questions based not only on the individual statements they were previously told, but also on THOSE INTERRELATIONSHIPS BETWEEN CONCEPTS that were built up from separate sentences as information was incorporated into the structure ...</Paragraph>
  </Section>
  <Section position="8" start_page="0" end_page="0" type="metho">
    <SectionTitle>
THE MEANINGS OF THE TERMS STORED IN MEMORY ARE PRE-
CISELY THE TOTALITY OF THE RELATIONSHIPS THEY HAVE
WITH OTHER TERMS IN THE MEMORY. [28, pp. 3-4]
</SectionTitle>
    <Paragraph position="0"> This has been accomplished through clever (and lengthy) computer programming, and by taking advantage of structure inherent in special problem domains such as stacking blocks on a table, moving chess pieces, and retrieving facts about a large naval organization.</Paragraph>
    <Paragraph position="1"> Program systems for understanding begin with a &amp;quot;front end&amp;quot;: a portion designed to transform language input into a computer representation. The representation may be as simple as a character-by-character encoding of alphabetic, space marker, and punctuation elements. However, a complex &amp;quot;front end&amp;quot; could involve word and phrase detection and encoding. The usual computer-science term for a computer representation is &amp;quot;data structure&amp;quot; [27], and there are many types. The language-processing program DEACON used ring structures [11], a representation frequently used to store queues. In principle a data structure can represent involved associations, but in practice simple order or ancestor relationships predominate. Completely different and far more complex types of structure are inherent in natural language. For example, from [28], &amp;quot;The professors signed a petition.&amp;quot; is not true.</Paragraph>
    <Paragraph position="2"> has four valid interpretations, among them: The professors didn't SIGN a petition. Iterative substitution of alternatives to deduce overall meaning yields cumbersome processing, especially when there are nested uncertainties. The recursive properties associated with the data-structure term &amp;quot;list&amp;quot; [27] are not easily adapted to multiple meanings. Hence, representing linguistic data for computation is an open and fundamental research problem. Nevertheless, the programs which deduce facts from language do so without a clear best technique for computer representation. To do this, restrictions on the language implicit in the input domain are used, and repeated processing by level (lexical, syntactic, semantic) is used in the absence of an efficient representation language. Data structures that facilitate following the language structure are needed. Existing programs provide special solutions to the problems of deductive processing in narrow language domains. While these programs are not a general breakthrough in representing language data for computation, they demonstrate that current programming techniques enable a useful &amp;quot;understanding&amp;quot; capability. Furthermore, there is a real potential for use of the &amp;quot;understanding&amp;quot; in an interactive mode to facilitate use of computers by nonspecialists and to tap the more sophisticated human understanding capabilities.</Paragraph>
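The cost of iterating over nested alternative interpretations can be made concrete: when each position in a sentence carries independent alternatives, the number of candidate readings a program must test multiplies. The alternative sets below are invented for illustration:

```python
from itertools import product

# Why iterative substitution of alternatives is cumbersome: every ambiguous
# element multiplies the number of candidate readings to test for
# plausibility. The stress-based alternatives here are invented.
alternatives = [
    ["The professors", "THE professors"],        # which phrase carries stress
    ["did not sign", "did NOT sign"],
    ["a petition", "A petition", "a PETITION"],
]

readings = list(product(*alternatives))
print(len(readings))   # 2 * 2 * 3 = 12 candidate readings to test
```

With nested uncertainties the factors themselves become products, so the count grows geometrically, as the text observes.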
  </Section>
  <Section position="9" start_page="0" end_page="0" type="metho">
    <SectionTitle>
5. INTERACTION
</SectionTitle>
    <Paragraph position="0"> Research and computer program development designed to store multitudes of facts so that they can be accessed [29] or combined [30] and &amp;quot;understood&amp;quot; (see pp. 3-10 in [30]) in linguistic form (see pp. 11-17 of [30]) is highly relevant to recent research programs in text and speech understanding.</Paragraph>
    <Paragraph position="1"> When such a system is used, a user might fail to get a fact or relationship because the natural-language subset chosen to represent his question was too rich--i.e., it includes a complex set of logical relationships not in the computer. Thus a block could result in a human-computer dialogue if the program has no logical connection between &amp;quot;garage&amp;quot; and &amp;quot;car&amp;quot; but only between &amp;quot;garage&amp;quot; and &amp;quot;house&amp;quot; (the program replies &amp;quot;OK&amp;quot; or &amp;quot;???&amp;quot; to user input sentences):</Paragraph>
  </Section>
  <Section position="10" start_page="0" end_page="0" type="metho">
    <SectionTitle>
I LIKE CHEVROLETS.
OK
CHEVROLETS ARE ECONOMICAL.
OK
MY HOUSE HAS A LARGE GARAGE.
OK
I CAN GET TWO IN
</SectionTitle>
    <Paragraph position="0"> ??? The computer failed to &amp;quot;understand&amp;quot; that there was no change of discourse subject. This is an example of a &amp;quot;semantic&amp;quot; failure which could be overcome by interaction. That is, the human user would need to input one more meaning or association of a valid word so that computer &amp;quot;understanding&amp;quot; may be achieved. Syntactic blocks may also occur. M. Denicoff pointed out that in [7] 172 different syntactic features were used for a situation where there are no statements with psychological content and no use of simile.</Paragraph>
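The &amp;quot;semantic block&amp;quot; in the dialogue above can be sketched as a missing link in an association graph: the program replies OK when it can connect the new sentence's concepts to what it already knows, and ??? when it cannot, until the user supplies the missing association. The association pairs below are invented:

```python
# Sketch of the "semantic block" above: reply OK when the two concepts are
# connected (directly or transitively) in the association graph, ??? when
# not. The association pairs are invented for illustration.
def connected(a, b, links):
    """Simple graph search (stack-based) over undirected association pairs."""
    frontier, seen = [a], {a}
    while frontier:
        node = frontier.pop()
        if node == b:
            return True
        for (x, y) in links:
            for nxt in ((y,) if x == node else (x,) if y == node else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return False

links = {("garage", "house"), ("Chevrolet", "car")}
print("OK" if connected("garage", "car", links) else "???")   # blocked: ???
links.add(("car", "garage"))       # the user supplies the missing association
print("OK" if connected("garage", "car", links) else "???")   # now: OK
```

This is exactly the interactive repair the text describes: one more association from the user turns the ??? into OK.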
    <Paragraph position="1"> If the psychological meanings are added as in [38], these features would not be enough to describe all the possible meanings of a text drawn from a less artificial source. Indeed, a key problem which formal grammars seem ill-suited for is the reality that many contexts may be simultaneously valid: multiple meanings give natural-language communication the richness of overtones and subtleties--poetry carries this to an extreme.</Paragraph>
    <Paragraph position="2"> The above dialogue on &amp;quot;Chevrolets&amp;quot; is an example of what Carbonell [39, p. 194] called &amp;quot;mixed-initiative discourse&amp;quot;. This important aspect of interaction is considered in the LISP program DWIM (&amp;quot;Do What I Mean&amp;quot;), which is a useful working tool for text-input error correction precisely because it &amp;quot;understands&amp;quot; the user's characteristics (for example, typical spelling errors). This is discussed by Teitelman [40, 41, 42]: A great deal of effort has been put into making DWIM &amp;quot;smart&amp;quot;. Experience with perhaps a dozen different users indicates we have been very successful: DWIM seldom fails to correct an error the user feels it should have, and almost never mistakenly corrects an error. [40, p. 11] Another limited-discourse interactive program [43] facilitates introduction of expert knowledge on chess. The program uses search with a maximum look-ahead depth of 20 and has back-tracking capability; both syntactic and semantic knowledge is incorporated. By grouping similar board positions (i.e., all involving a piece on cell 1, all involving a queen move), it imposes semantic organization on the vast files to be searched and improves syntactic processing speed. Publication of [44], which coined the term &amp;quot;speech understanding&amp;quot;, initiated the natural next step toward use of the computer's &amp;quot;understanding&amp;quot; capability. The goal of easy interaction with the computer becomes more exciting with speech as input medium.
Systems to recognize both text and speech have used syntax and context [45, 46], but [47] added a comprehensive approach using multiple processing levels to resolve ambiguities. In the direct successors of this work [48, 49], the same process of partial acceptance of primitive elements (phonemic candidates from digitized acoustic data) followed by lexical, syntactic, and semantic processing to rank alternatives has shown significant success. Reddy (in a Carnegie-Mellon University film on the Hearsay System) states that on 144 connected utterances, involving 676 words, obtained from 5 speakers, performing 4 tasks (chess, news retrieval, medical diagnosis, and desk calculator use), requiring 28 to 76-word vocabularies, the computer program recognition, in terms of words spotted and identified correctly, was a. 89% with all sources of knowledge; b. 67% without use of semantic knowledge; c. 44% without use of syntactic or semantic knowledge. These results were obtained in October 1973, and have been improved since [50]. However, a key limitation of this form of computer speech &amp;quot;understanding&amp;quot; is response rate. Reddy estimated that the third word-accuracy figure (without use of syntactic and semantic knowledge) would have to be in excess of 90% to allow the program to achieve a near-human response speed.</Paragraph>
    <Paragraph position="3"> The nature of computer &amp;quot;understanding&amp;quot; programs leads to problems of combinatoric explosion in the number of alternatives, and this lessens the usefulness of multilevel program organization (acoustic-phonetic, lexical, syntactic, semantic, domain, and user interactions) as much in speech processing as in text processing. Prototype speech &amp;quot;understanding&amp;quot; systems have been built [49, 50] and newer acoustic-phonetic and syntactic techniques have been incorporated into this work [49, 51, 52], yet it seems clear that the development of theory in prosody and grammar cannot provide a breakthrough to escape the combinatoric explosion. The reason is that the search of parse trees and the use of semantics (look up related words) depend on a single context--both take geometrically increasing amounts of computing time as the number of contexts grows.</Paragraph>
    <Paragraph position="4"> Furthermore, this increase in time is added onto that which occurs when the size of the lexicon is expanded. As words are added, the number of trees that can be produced by the grammar's rewriting rules in an attempt to &amp;quot;recognize&amp;quot; a string expands rapidly. Hence in speech as in text processing, &amp;quot;understanding&amp;quot; exists via computer, yet it is not likely to lead to machine processing of truly natural language. Indeed the artificiality of speech &amp;quot;understanding&amp;quot; by computer is even greater than that of text input. The &amp;quot;moon rocks&amp;quot; text system [33, 35] used a vocabulary of 3500 words, while the speech &amp;quot;understanding&amp;quot; version based on it [51] used only 250 words.</Paragraph>
    <Paragraph position="5"> The COMMERCIAL AVAILABILITY of systems that recognize isolated words with 98.5% accuracy [53]* and the need for a rapid human-computer input interface [54] promise that the last word has not been spoken on &amp;quot;understanding&amp;quot;. Research and development on language-handling systems is continuing in the hope of achieving useful &amp;quot;understanding&amp;quot;. Indeed, Stanford Research Institute's Artificial Intelligence Center is basing its current work on the just-mentioned isolated-word recognizer. It is likely that useful developments will occur where language, and probably spoken-language, &amp;quot;understanding&amp;quot; will be exhibited. These developments will occur through careful design of tasks and use of advances in computer technology. However, the general problem of machine &amp;quot;understanding&amp;quot; of natural language--whether text or speech--is not likely to be aided by these developments.</Paragraph>
  </Section>
  <Section position="11" start_page="0" end_page="0" type="metho">
    <SectionTitle>
7. CONCLUSIONS
</SectionTitle>
    <Paragraph position="0"> A large body of research in computer science is devoted to language processing. A survey of the program systems that *Threshold Technology Inc. has sold such a system to several users. Their VIP-100 includes a minicomputer dedicated to the recognition task; there are other isolated-word systems  have been reported shows that two main goals have emerged: 1. To enable &amp;quot;intelligent&amp;quot; processing by the computer (&amp;quot;artificial intelligence&amp;quot;); 2. To produce a more useful way to access data and solve problems (&amp;quot;man-machine interaction&amp;quot;). Techniques in artificial intelligence and speech recognition have been developed to the extent that prototype computer program systems which exhibit &amp;quot;understanding&amp;quot; have been developed for highly limited contexts. To extend these programs to larger subsets of natural language poses problems; it is unlikely that any of the research directions currently being explored will of themselves &amp;quot;solve&amp;quot; the &amp;quot;natural-language problem&amp;quot;. (The techniques include, but are not limited to, further developments in artificial-intelligence programming languages [17, 18, 20, 21, 55]; refinements in theories of grammar; improved deductive ability, possibly by better theorem-proving techniques; and the introduction of stress-related features in the encoding of speech [52]. A useful collection of language models appears in [56].) Nevertheless, prototype systems for &amp;quot;understanding&amp;quot; both text and speech are useful achievements of engineering, and spoken entry of data by humans to computers is beginning to be established by isolated-word recognizers which use a minicomputer dedicated to the task. A multiplicity of purposes beyond this simple but practical task of data entry are mentioned briefly in the foregoing discussion of &amp;quot;interaction&amp;quot;.</Paragraph>
    <Paragraph position="1"> Developments along the many diverse paths indicated under that heading are likely to be rapid in the future as practical &amp;quot;understanding&amp;quot; of subsets of language becomes part of computer technology. For another view of the evolution of that process, see [57]. American Journal of Computational Linguistics</Paragraph>
  </Section>
  <Section position="12" start_page="0" end_page="0" type="metho">
    <SectionTitle>
CURRENT BIBLIOGRAPHY
</SectionTitle>
    <Paragraph position="0"> The selection of material through the current second year of AJCL's existence remains tentative. A survey of subscriber-members will be included in the last packet mailed during 1975 to establish patterns of coverage for future years.</Paragraph>
    <Paragraph position="1"> Categorization of entries deepens as the field defines itself and the collection of literature against which new items can be matched increases.</Paragraph>
    <Paragraph position="2"> The advice of members is welcome.</Paragraph>
    <Paragraph position="3"> Many summaries are authors' abstracts, sometimes edited for clarity, brevity, or completeness. Where possible, an informative summary is provided.</Paragraph>
    <Paragraph position="4"> The Linguistic Documentation Centre of the University of Ottawa provides many entries; by editorial accident, some of the entries recently received from that source remain to be included in the next issue. AJCL gratefully acknowledges the assistance of Brian Harris and his colleagues.</Paragraph>
    <Paragraph position="5"> Some entries are reprinted with permission from Computer Abstracts.</Paragraph>
    <Paragraph position="6"> See the following frames for a list of subject headings and items with extended presentation or review.</Paragraph>
  </Section>
  <Section position="13" start_page="0" end_page="77" type="metho">
    <SectionTitle>
SUBJECT HEADINGS
</SectionTitle>
    <Paragraph position="0"> ON VERBAL FRAMES IN FUNCTIONAL GENERATIVE DESCRIPTION, PART I. J. Panevova ... 3
STELLUNG UND AUFGABEN DER ALGEBRAISCHEN LINGUISTIK I (EINFUHRUNGSSTUDIE). P. Sgall ... 41
REVIEWS
ALGEBRAIC LINGUISTICS IN SOME FRENCH-SPEAKING COUNTRIES (S. Machova) ... 53
METODIKA PODGOTOVKI INFORMATSIONNYKH TEZAURUSOV. PEREV. S VENGERSKOGO POD RED. I PREDISLOVIEM JU. A. SHREJDERA, V SB. PEREVODOV &amp;quot;NAUCHNO-TEKHNICHESKAJA INFORMATSIJA&amp;quot; VYP. 17, 1971 (T. Ja. Kazavchinskaja) ... 74
FORMAL LOGIC AND LINGUISTICS, Mouton, The Hague, 1972, E. Zierer (O. Prochazka) ... 74
AUTOMATIC ANALYSIS OF DUTCH COMPOUND WORDS, Amsterdam 1972, W. A. Verloren van Themaat; EXERCISES IN COMPUTATIONAL LINGUISTICS, Amsterdam 1970, H. Brandt Corstius (M. Platek, L. Vomacka)</Paragraph>
  </Section>
  <Section position="14" start_page="77" end_page="77" type="metho">
    <SectionTitle>
TABLE OF CONTENTS
</SectionTitle>
    <Paragraph position="0"> A STATISTLC STUDY OF NAMES IN TAMIL INSCRIPTIONS ....... Noboru Karashima and Y. Sabbarayalu 3 IMPLICATIONS OF ANCIENT CHINESE WITROFLEY ENDINGS Mantaro 5. Hashimoto .............. 17 THE SINO-KOREAN READING OF KENG-SLIE RIMES MantaroJ,Hashimoto. .......'....... 25 &amp;quot;TO&amp;quot;, &amp;quot;YUAN&amp;quot; AND &amp;quot;TE&amp;quot;-.. .A COMPARISON WITH JAPANESE ............... Masayuki Nakagawa 31 LARYNGEAL GESTUMS AND THE ACOUSTIC CHARACTERISTICS IN HINDI STOPS--PRELIMINARY REPORT.</Paragraph>
    <Paragraph position="1"> ... Ryohei Kagaya and Hajime Hirose 47
General
William A. Woods, Bolt Beranek and Newman Inc.</Paragraph>
    <Paragraph position="2"> Cambridge, Mass 02138 Report No. BBN 3067 April 1975  Acquaints speech researchers in the state of the art in the conceptual development of, and the new perspectives they place on, parsing, syntax and semantic interpretation. Includes the Chomsky hierarchy of grammar models, n0.n-determinism in parsing and its implemen&amp;ation in either backtrdcking or multiple in4ependent alternatives, predictive vs. non-predictive parsing, word lattices and chart parsing, Early's algorithm, transition network grammars, transfornational grammars and augmented transition networks, procedural semantics, selectional restrictions and semantic  In: R. Schank ahd B.L Nash-Webber, eds., Theoretical Issues in Natural Language Processing, 1975, 126-129; Process models are rigorous, process specifications are made very explicit, and complexity ts handled by use of computera. A methodology should be reliable, efficient and have integrative power. The distinctive strengths of the currenl computer oriented methodology are (a) the complexity of ,data and tzheory is easy to accommodate, (b) time sequence and dependencies are preserved, and (c) a diversity of hypotheses can be tested.</Paragraph>
    <Paragraph position="3"> Weaknesses are (a) experiments often take years to perform, (b) the activity is treated as a programming exercise with the status of data and program unclearly defined, and (c) in attempting to be general on a particular phenomenon, significant others are missed.</Paragraph>
    <Paragraph position="4"> As whole systems are produced, they are difficult to disseminate and judge. A system may process its examples, but it is hard to determine if it is ad hoc and tuned to the examples.</Paragraph>
    <Paragraph position="6"> Bolt Beranek and Newman, Inc., Cambridge, Mass. In: R. Schank and B.L. Nash-Webber, eds., Theoretical Issues in Natural Language Processing, 1975, 134-139.</Paragraph>
    <Paragraph position="7"> There are two tasks for which methodologies are used: (a) building intelligent machines, and (b) understanding human language performance. Both depend on the development of a 'device-independent' language understanding theory. For theoretical studies, a methodology should be cognitively efficient and should deal effectively with the problem of scale--having a large number of facts embodied in the theory. Studies should be performed in the context of total language understanding; isolation of components limits scope. Intuition on human language performance is a good guide to computational linguistics.</Paragraph>
    <Paragraph position="8">  Contains 142 abstracts covering recognition, synthesis, and the acoustical, phonological and linguistic processes necessary in conversion of various waveforms. Retrieved using the National  considers characters as a directed abstract graph, of which the node set consists of tips, corners, and junctions, and the branch set consists of line segments connecting pairs of adjacent nodes. Classification of branch types produces features which are treated as fuzzy variables. A character is represented by a fuzzy function which relates its fuzzy variables, and by the node pair involved in each fuzzy variable. After producing a representation of an unknown character, recognition occurs when a previously learned character's representation is isomorphic to the unknown.</Paragraph>
    <Paragraph position="9">  A 16:1 compaction ratio was achieved by storing only the first instance of each pattern class and thereafter substituting this exemplar for every subsequent occurrence of the symbol. Proposed are refinements to yield a 40:1 ratio.</Paragraph>
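The exemplar-substitution scheme described above can be sketched as follows; the `classify` function and the stream encoding are illustrative assumptions, not the paper's actual implementation:

```python
def compact(patterns, classify):
    """Store one exemplar per pattern class; encode later occurrences as references."""
    exemplars = {}   # class label -> first bitmap seen
    encoded = []     # compressed stream of exemplars and back-references
    for bitmap in patterns:
        label = classify(bitmap)            # assign the bitmap to a pattern class
        if label not in exemplars:
            exemplars[label] = bitmap       # first instance: store it in full
            encoded.append(("exemplar", label, bitmap))
        else:
            encoded.append(("ref", label))  # later instances cost only a label
    return encoded

# With identity as a stand-in classifier, repeated symbols compress:
stream = compact(["A", "B", "A", "A", "B"], classify=lambda b: b)
```

The compaction ratio then depends on how often each symbol recurs: a reference is far smaller than a stored bitmap.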
    <Paragraph position="10">  Various proposals are discussed, principally (1) Rankin, who has a two-level grammar: the first level gives the strokes and rules for combination, and the second explicates the order, with a recursive definition of subframes.</Paragraph>
    <Paragraph position="11"> (2) Fujimura has an inventory of strokes and operators.</Paragraph>
    <Paragraph position="12"> For each stroke 3 functional points are isolated and operators define the linking by reference to these points. Applications include keyboard input, storage and retrieval of characters, and automatic recognition.</Paragraph>
    <Paragraph position="13"> There are two different approaches.</Paragraph>
    <Paragraph position="14"> One seeks a logically efficient system; the other one that seems natural to a user of the language.  An approach to recognition of a block picture by comparing it with stochastic sectionalgrams obtained by grouping many samples. TO calculate the risk, the absolute values of the differences between the stroke-occurrence probabilities of corresponding quanta in the two sectionalgrams are summed one of these two sectionalgrams being derived from the input pattern and the other from the prototype pattern. The smaller the sum of these differences is, the more accurate the input pattern tecognition.</Paragraph>
    <Paragraph position="15"> Writing : Recognition</Paragraph>
  </Section>
  <Section position="15" start_page="77" end_page="77" type="metho">
    <SectionTitle>
COMPUTER IDENTIFICATION OF CONSTRAINED HAND PRINTED CHARACTERS
WITH A HIGH RECOGNITION RATE
</SectionTitle>
    <Paragraph position="0"> W. C. Lin and T. L. Scully, Case Western Reserve University, Cleveland, Ohio. IEEE Transactions on Systems, Man and Cybernetics, SMC-4, 497-504, 1974. Hand printed on a standardizing grid made of twenty line segments, yielding twenty features, and input using a television camera, 49 character classes were recognized at a greater than 99.4% rate. Feature values calculated utilizing a Gaussian point-to-line distance concept were used in a weighted minimum distance classifier. All character-dependent data are obtained through training techniques. Both statistical linear regression and averaging methods are used to obtain the parameters defining each character class in feature space.</Paragraph>
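The paper's Gaussian point-to-line features are not reproduced here, but the weighted minimum distance classification step can be sketched generically; class means and weights stand in for the parameters obtained by training, and the two-feature example values are invented:

```python
def weighted_distance(features, mean, weights):
    """Weighted squared distance from a feature vector to a class mean."""
    return sum(w * (x - m) ** 2 for w, x, m in zip(weights, features, mean))

def classify(features, classes):
    """classes maps label -> (mean vector, weight vector) learned in training;
    pick the class whose weighted distance is smallest."""
    return min(classes, key=lambda c: weighted_distance(features, *classes[c]))

# Illustrative two-class, two-feature setup (the paper uses 20 features):
classes = {"I": ([1.0, 0.0], [1.0, 1.0]),
           "L": ([1.0, 1.0], [1.0, 1.0])}
```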
    <Paragraph position="1"> Lexicography - Lexicology : Dictionary. Joseph D. Becker. In: R. Schank and B.L. Nash-Webber, eds., Theoretical Issues in Natural Language Processing, 1975, 61-63.</Paragraph>
    <Paragraph position="2"> We speak mostly by conjoining remembered phrases. Productive processes have secondary roles of adapting old phrases to new situations and of gap filling.</Paragraph>
    <Paragraph position="3">  In R. Schank and B.L. Nash-Webber, eds., Theoretical Issues in Natural Language Processing, 1-5, 1975.</Paragraph>
    <Paragraph position="4"> Augmented phrase structure grammars consist of phrase structure rules with embedded conditions and structure building actions. Data structures are records consisting of attribute-value pairs. Records can be actions, words, verb phrases, etc. There are three kinds of attributes: relations, whose value is a pointer to other records; properties, with values either numbers or character strings; and indicators, whose values have a role similar to linguistic features. Structure building rules have a left part indicating the contiguous segments that must be present for a structure building operation, given in a right part, to apply.</Paragraph>
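A minimal sketch of such records, with the three attribute kinds kept in separate slots (the class layout and the example attribute names are illustrative, not the authors' notation):

```python
class Record:
    """An attribute-value record; relations point at other records,
    properties hold numbers or strings, indicators act like features."""
    def __init__(self, kind):
        self.kind = kind          # e.g. "word", "verb-phrase", "action"
        self.relations = {}       # attribute -> another Record (a pointer)
        self.properties = {}      # attribute -> number or character string
        self.indicators = {}      # attribute -> feature-like value

verb = Record("word")
verb.properties["spelling"] = "give"
verb.indicators["tense"] = "past"

vp = Record("verb-phrase")
vp.relations["head"] = verb       # relation: a pointer to another record
```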
    <Paragraph position="5">  In R. Schank and B.L. Nash-Webber, eds., Theoretical Issues in Natural Language Processing, 1975, 6-14.</Paragraph>
    <Paragraph position="6"> The hypothesis is that every language user knows, as part of his recognition grammar, a set of highly specific diagnostics that he uses to decide deterministically what structure to build next at each point in the process of parsing a sentence. This theory rejects backup as a standard control mechanism for parsing. A grammar is a set of modules. The parser works on two levels, a group level and a clause level. Group level modules work on a word buffer and build group level structures. Modules have a pattern, a pretest procedure and a body to be executed if the pattern matches and the pretest succeeds. If the parser fails, it keeps the structure constructed to date, and makes whatever substructures it can from the remaining part.</Paragraph>
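A hypothetical miniature of the module scheme; the pattern, pretest, and body shown are invented for illustration:

```python
class Module:
    """A grammar module: a pattern over the word buffer, a pretest,
    and a body run when both succeed."""
    def __init__(self, pattern, pretest, body):
        self.pattern, self.pretest, self.body = pattern, pretest, body

def run_group_level(buffer, modules, structures):
    """Deterministically apply the first module whose pattern matches
    and whose pretest succeeds; no backup is ever taken."""
    for m in modules:
        if m.pattern(buffer) and m.pretest(buffer):
            m.body(buffer, structures)
            return True
    return False   # on failure, keep the structures built so far

# Toy module: group a determiner + noun into an NP structure.
np_module = Module(
    pattern=lambda buf: len(buf) >= 2 and buf[0][1] == "det" and buf[1][1] == "noun",
    pretest=lambda buf: True,
    body=lambda buf, out: out.append(("NP", buf.pop(0)[0], buf.pop(0)[0])),
)

structures = []
run_group_level([("the", "det"), ("dog", "noun")], [np_module], structures)
```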
    <Paragraph position="7">  A morphological analyzer is written in PL/I using a recursive macro actuated generator. Called with a word as argument, it returns a stem, part of speech, possible transformations, and semantic information.</Paragraph>
    <Section position="1" start_page="77" end_page="77" type="sub_section">
      <SectionTitle>
Semantics - Discourse
Brian Phillips
Department of Information Engineering
</SectionTitle>
      <Paragraph position="0"> University of Illinois at Chicago Circle. Doctoral dissertation, State University of New York, Buffalo, 1975. A theory for the structure of discourse is developed. It is shown that propositions of a coherent discourse must be logically connected and exhibit a hierarchic thematic structure that has a single root. An example of a logical connective is 'Cause'; a theme is a generalized pattern that is associated with a single word, e.g., 'poison' is describable as 'Someone ingests something that causes him to become ill'. A theme applies to a discourse if its definiens matches part of the discourse. The topic of a coherent discourse is its matrix theme; an ill-formed discourse has no topic.</Paragraph>
      <Paragraph position="1"> Not all discourse structure is expressed. If omitted, it must be inferrable. The process of inference requires a store of world knowledge - encyclopedic knowledge. An encyclopedia is described that contains all the devices required by the discourse analysis problem. In fact, the encyclopedia is a general model for human cognition and is applicable to many diverse cognitive tasks. The encyclopedia is a directed graph. Categories of nodes and arcs, and of processes, are presented in detail.  Discusses some of the problems that arise when the concept of a linguistic variable is combined with the concept of a fuzzy set: the range of the numerical base variable, in ordering usage, is not fixed for a given linguistic variable.</Paragraph>
      <Paragraph position="2"> Does not explain the computation of values of compound expressions from the values of their components.</Paragraph>
      <Paragraph position="3"> Not all adjectives can be related to an underlying numerical base.</Paragraph>
      <Paragraph position="4"> Other features involved in a complete analysis are: average value, typical value, observed value, standard deviation of values and polarity.</Paragraph>
      <Paragraph position="5"> [Distribution limited prior to publication.]  For each analysis in the semantic analysis file the author's theoretical orientation, his assumptions, and his notational conventions are entered on this file.</Paragraph>
      <Paragraph position="6"> The data fields are: identifying number, document source, related sources, words analyzed, conventions, theoretical basis (including acknowledgements, assumptions, stated purpose, and limits), a SOLAR critique, and the name of the person responsible for the entry.</Paragraph>
      <Paragraph position="7"> This file is available via on-line queries or in a listing format.</Paragraph>
      <Paragraph position="8"> The file can be searched using the identifying-number or document-source fields.</Paragraph>
      <Paragraph position="9"> Other fields can be searched using a string-matching facility.</Paragraph>
      <Paragraph position="10">  including qualifications, informal explanations, and criticisms of descriptions. The words used are found in the lexicons of the Speech Understanding Research groups being sponsored by ARPA. The semantic analysis produces 23 data fields for each word, of which the following are searchable: word, domain, analysis number, source, part of speech, and components. Other fields can be searched using a string-matching facility. This file is available via on-line queries or in a listing format.</Paragraph>
      <Paragraph position="11">  This file provides the citations to the documents referenced in other SOLAR files. Thirty data fields are used, of which the following are searchable: author, year, index term, document type, subject ID, document number, and Bell ID. Other fields can be searched using a string-matching facility. This file is available via on-line queries or in a listing format including an author, keyword, and sequence-number index.</Paragraph>
      <Paragraph position="12">  If semantic primitives are seen as essentially different from words, this leads to attempts to justify them directly, usually psychologically. Otherwise the justification is merely that they work. Primitives can be taken as a small natural language, with no essential difference between primitives and words. But the set of primitives cannot be extended indefinitely, otherwise the distinction between the representation and the natural language will be lost. If it is not possible to escape from natural language into another realm, one cannot separate semantic representation from reasoning as is attempted. It is probably more sensible to say that natural language understanding depends on reasoning rather than vice-versa.</Paragraph>
      <Paragraph position="13">  In: R. Schank and B.L. Nash-Webber, eds., Theoretical Issues in Natural Language Processing, 1975, 84-88.</Paragraph>
      <Paragraph position="14"> A key feature of the system is that the semantic deep structure of the non-verbal, behavioral rules may be represented in the same network as the semantics for natural language grammars and, as a consequence, provide non-verbal context for linguistic rules. The total system has the power of at least the second-order predicate calculus.</Paragraph>
      <Paragraph position="15">  Canonical representations of conceptualisations are composed of an ACTOR, an ACTION and a set of ACTION-dependent cases. The 12 primitive actions are ATRANS, transfer of possession; PTRANS, transfer of physical location; MTRANS, transfer of information; PROPEL, application of physical force; MBUILD, construction of new conceptual information; INGEST, taking in of an object by an animal; GRASP, to grasp; ATTEND, to focus a sense organ on an object; SPEAK, to make a noise; MOVE, to move a body part; EXPEL, to push something out of the body; and PLAN, which characterizes the ability to form a course of action that leads to a goal.</Paragraph>
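A conceptualization in this scheme can be sketched as a simple ACTOR + ACTION + cases structure; the constructor and case names below are assumptions for illustration, not the original notation:

```python
# The 12 primitive ACTs listed in the abstract:
PRIMITIVES = {"ATRANS", "PTRANS", "MTRANS", "PROPEL", "MBUILD", "INGEST",
              "GRASP", "ATTEND", "SPEAK", "MOVE", "EXPEL", "PLAN"}

def conceptualization(actor, action, **cases):
    """Build an ACTOR + primitive ACTION + case-slot structure."""
    assert action in PRIMITIVES, "action must be one of the 12 primitives"
    return {"actor": actor, "action": action, "cases": cases}

# 'John gave Mary a book' as a transfer of possession (case names invented):
c = conceptualization("John", "ATRANS",
                      object="book", donor="John", recipient="Mary")
```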
    </Section>
    <Section position="2" start_page="77" end_page="77" type="sub_section">
      <SectionTitle>
Semantics - Discourse
George A. Miller
</SectionTitle>
      <Paragraph position="0"> In R. Schank and B.L. Nash-Webber, eds., Theoretical Issues in Natural Language Processing, 1975, 30-33.</Paragraph>
      <Paragraph position="1"> An analysis of the verb 'hand' is paraphrased as: 'X had Y prior to some time t at which X used his hand to do something that caused Y to travel to Z, after which Z had Y'. The analysis includes a discussion of the subsumed concepts HAPPEN, USE, ACT, CAUSE, ALLOW, BEFORE, TRAVEL, and AT.</Paragraph>
      <Paragraph position="2">  positional interpretation to possessional and identificational interpretations. Two kinds of cause are distinguished, CAUSATIVE and PERMISSIVE. Inference rules based on the form of semantic representations derive logical entailments, e.g. CAUSE(X,E) --> E.  Semantics - Discourse : Comprehension. Christopher K. Riesbeck. In R. Schank and B.L. Nash-Webber, eds., Theoretical Issues in Natural Language Processing, 1975, 11-16.</Paragraph>
      <Paragraph position="3">  Comprehension is a memory process; breaking computational understanding into subproblems of parsing and semantic interpretation has hindered progress, with much effort wasted on the construction of parsers. A system is described in which a monitor takes words from a sentence one at a time, from left to right. From a lexicon, expectations of the word (or its root) are added to a master list of expectations. If an element of the master list evaluates to 'true', programs associated with the element are executed. The final structure built by the triggered expectations is the meaning of the sentence.</Paragraph>
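The monitor loop described above can be sketched as follows; the toy lexicon entry and the slot names are invented for illustration:

```python
def comprehend(words, lexicon):
    """Left-to-right monitor: each word's lexicon entry adds (test, program)
    expectations to a master list; expectations whose test evaluates true
    fire their programs, which build the meaning structure."""
    meaning = {}
    expectations = []
    for word in words:                        # one word at a time, left to right
        expectations.extend(lexicon.get(word, []))
        for test, program in expectations[:]:
            if test(word, meaning):           # expectation triggered
                program(word, meaning)
                expectations.remove((test, program))
    return meaning                            # the structure built is the meaning

# Toy lexicon: 'ate' expects a following food word to fill its object slot.
lexicon = {
    "ate": [(lambda w, m: w in {"apple", "soup"},
             lambda w, m: m.update(act="INGEST", object=w))],
}
result = comprehend(["John", "ate", "soup"], lexicon)
```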
      <Paragraph position="4">  visual imagery. In the latter situation, do people include acts and objects not present in a given story, but necessary to carry out the simulation? This has not yet been experimentally tested. Experiments have shown that a listener may simulate a story from the point of view of an observer or of a participant in the story. One problem that this raises for AI, if a program can construct an interconnected structure from the text, is the non-uniqueness of this meaning representation. Another problem is that programs should not be designed to preserve all details; but then, what should be forgotten? Point of view may be useful here.  Listeners draw inferences from what they hear, but different listeners can make different inferences. One kind of inference in comprehension is in the context of given-new information: the speaker tries to construct the given and new information of each utterance, so that the listener is able to compute unique antecedents for the given information, and so that he will not already have the new information attached to the antecedent. Inference mechanisms include direct reference, identity, pronominalization, epithets, set membership, indirect reference by association, indirect reference by characterization, reasons, causes, consequences, and concurrences. Bridging inferences need not be determinate, but in discourse they seemingly are, and further, are the inferences with fewest assumptions. Both backward and forward inferences are possible, but only the former are determinate.</Paragraph>
      <Paragraph position="5">  The Systematized Nomenclature of Pathology (SNOP), in use at NIH, consists of about 15,000 entries in four lists: topography, morphology, etiology, and function. Only a few binary relations on terms are needed; e.g., location of morphology (lesion) at topography (body site). Numerous relations on the primary relational triples evidently have to be defined.</Paragraph>
      <Paragraph position="6"> In: R. Schank and B.L. Nash-Webber, eds., Theoretical Issues in Natural Language Processing, 1975, 64-67.</Paragraph>
      <Paragraph position="7"> Generation is a two stage process. The first formulates a plan and the second expresses these intentions; there is feedback between the stages. Intentions can be encoded (i) by establishing presuppositions, (ii) by linguistic conventions, and (iii) by discourse structure. A Social Action Paradigm is a model of the flow of social actions.</Paragraph>
      <Paragraph position="8">  In generating natural language from a conceptual structure, words and syntactic structure must be deduced from the information content of the message. Words are accounted for by a pattern matching mechanism, a discrimination net. The case framework of verbs is one source of knowledge for choice of syntactic structure.  In therapeutic discourse the subject is not so much generating discourse as regulating it. Statements are made, retracted, qualified and restated. The ERMA model simulates this. It has five stages, represented as CONNIVER contexts. The discourse stream has its source in a special program and then flows back and forth between the contexts before achieving its final expression. Each context determines suitability for expression: whether it should be censored or passed on with suggestions for modification. Concepts are represented by means similar to Minsky's frames.</Paragraph>
      <Paragraph position="9">  Technological computational linguistics is primarily concerned with software technology whereby computers can use and process natural language. Descriptive computational linguistics uses the computer as a means of developing an accurate and empirically valid model of linguistic and cognitive behaviors of human speakers. There is no inherent representation of intentions in the former, and experience is that it cannot easily be generalized to the latter. One problem of modeling is that important things are often hidden by their familiarity.</Paragraph>
      <Paragraph position="10">  Both propositional and non-propositional knowledge must exist. Interpretive processes during perception individuate and categorize objects. If an object cannot be categorized, then the object will be stored with analogic information. During verbalization, analogic images will be compared with available category prototypes to decide on the best match for use in the utterance.</Paragraph>
      <Paragraph position="11"> I. THEORY OF REPRESENTATION
1. Dimensions of representation. Daniel G. Bobrow ... 1
2. What's in a link: Foundations for semantic networks. William A. Woods ... 35
3. Reflections on the formal description of behavior. Joseph D. Becker ... 83
4. Systematic understanding: Synthesis, analysis, and contingent knowledge in specialized understanding systems. Robert J. Bobrow and John Seely Brown ... 103
II. NEW MEMORY MODELS
5. Some principles of memory schemata. Daniel G. Bobrow and Donald A. Norman ... 131
6. A frame for frames: Representing knowledge for recognition. Benjamin J. Kuipers ... 151
Semantics - Discourse : Memory
7. Frame representations and the declarative-procedural controversy. Terry Winograd ... 185
III. HIGHER LEVEL STRUCTURES
8. Notes on a schema for stories. David E. Rumelhart ... 211
9. The structure of episodes in memory. Roger C. Schank ... 237
10. Concepts for representing mundane reality in plans. Robert P. Abelson ... 273
IV. SEMANTIC KNOWLEDGE IN UNDERSTANDER SYSTEMS
11. Multiple representations of knowledge for tutorial reasoning. John Seely Brown and Richard R. Burton ... 311
12. The role of semantics in automatic speech understanding. Bonnie Nash-Webber ... 351
13. Reasoning from incomplete knowledge. Allan Collins,  For many years he worked with us on problems in Artificial Intelligence, especially on the development of an intelligent instructional system. Jaime directed the Artificial Intelligence group at Bolt Beranek and Newman (in Cambridge, Massachusetts) until his death in 1973. Some of us who had worked with Jaime decided to hold a conference in his memory, a conference whose guiding principle would be that Jaime would have enjoyed it. This book is the result of that conference.</Paragraph>
      <Paragraph position="12"> Jaime Carbonell's important contribution to cognitive science is best summarized in the title of one of his publications: AI in CAI. Jaime wanted to put principles of</Paragraph>
    </Section>
    <Section position="3" start_page="77" end_page="77" type="sub_section">
      <SectionTitle>
Artificial Intelligence into Computer-Assisted Instruction
</SectionTitle>
      <Paragraph position="0"> (CAI) systems. He dreamed of a system which had a data base of knowledge about a topic matter and general information about language and the principles of tutorial instruction. The system could then pursue a natural tutorial dialog with a student; sometimes following the student's initiative, sometimes taking its own initiative, but always generating its statements and responses in a natural way from its general knowledge. This system contrasts sharply with existing systems for Computer-Assisted Instruction, in which a relatively fixed sequence of questions and possible responses has to be determined for each topic.</Paragraph>
      <Paragraph position="1"> Jaime did construct working versions of his dream--in a system which he called SCHOLAR. But he died before SCHOLAR reached the full realization of the dream.</Paragraph>
      <Paragraph position="2"> It was a pleasure to work with Jaime. His kindness and his enthusiasm were infectious, and the discussions we had with him over the years were a great stimulus to our own thinking. Both as a friend and a colleague we miss him greatly.</Paragraph>
      <Paragraph position="3"> Cognitive Science. This book contains studies in a new field we call cognitive science. Cognitive science includes elements of psychology, computer science, linguistics, philosophy, and education, but it is more than the intersection of these disciplines. Their integration has produced a new set of tools for dealing with a broad range</Paragraph>
    </Section>
  </Section>
  <Section position="16" start_page="77" end_page="77" type="metho">
    <SectionTitle>
REPRESENTATION AND UNDERSTANDING
</SectionTitle>
    <Paragraph position="0"> of questions. In recent years, the interactions among the workers in these fields have led to exciting new developments in our understanding of intelligent systems and the development of a science of cognition. The group of workers has pursued problems that did not appear to be solvable from within any single discipline. It is too early to predict the future course of this new interaction, but the work to date has been stimulating and inspiring. It is our hope that this book can serve as an illustration of the type of problems that can be approached through interdisciplinary cooperation. The participants in this book (and at the conference) represent the fields of Artificial Intelligence, Linguistics, and Psychology, all of whom work on similar problems but with different viewpoints. The book focuses on the common problems, hopefully acting as a way of bringing these issues to the attention of all workers in those fields related to cognitive science.</Paragraph>
    <Paragraph position="1"> Subject Matter. The book contains four sections. In the first section, Theory of Representation, general issues involved in building representations of knowledge are explored. Daniel G. Bobrow proposes that solutions to a set of design issues be used as dimensions for comparing different representations, and he examines different forms such solutions might take. William A. Woods explores problems in representing natural-language statements in semantic networks, illustrating difficult theoretical issues by examples. Joseph D. Becker is concerned with the representation one can infer for behavioral systems whose internal workings cannot be observed directly, and he considers the interconnection of useful concepts such as hierarchical organization, system goals, and resource conflicts. Robert J. Bobrow and John Seely Brown present a model for an expert understander which can take a collection of data describing some situation, synthesize a contingent knowledge structure which places the input data in the context of a larger structural organization, and which answers questions about the situation based only on the contingent knowledge structure.</Paragraph>
    <Paragraph position="2"> Section two, New Memory Models, discusses the implications of the assumption that input information is always interpreted in terms of large structural units derived</Paragraph>
  </Section>
  <Section position="17" start_page="77" end_page="77" type="metho">
    <SectionTitle>
REPRESENTATION AND UNDERSTANDING
</SectionTitle>
    <Paragraph position="0"> from experience. Daniel G. Bobrow and Donald A. Norman postulate active schemata in memory which refer to each other through use of context-dependent descriptions, and which respond both to input data and to hypotheses about structure. Benjamin J. Kuipers describes the concept of a frame as a structural organizing unit for data elements, and he discusses the use of these units in the context of a recognition system. Terry Winograd explores issues involved in the controversy on representing knowledge in declarative versus procedural form. Winograd uses the concept of a frame as a basis for the synthesis of the declarative and procedural approaches. The frame provides an organizing structure on which to attach both declarative and procedural information.</Paragraph>
    <Paragraph position="1"> The third section, Higher Level Structures, focuses on the representation of plans, episodes, and stories within memory. David E. Rumelhart proposes a grammar for well-formed stories. His summarization rules for stories based on this grammar seem to provide reasonable predictions of human behavior. Roger C. Schank postulates that in understanding paragraphs, the reader fills in causal connections between propositions, and that such causally linked chains are the basis for most human memory organization. Robert P. Abelson defines a notation in which to describe the intended effects of plans, and to express the conditions necessary for achieving desired states. The fourth section, Semantic Knowledge in Understander Systems, describes how knowledge has been used in existing systems. John Seely Brown and Richard R.</Paragraph>
    <Paragraph position="2"> Burton describe a system which uses multiple representations to achieve expertise in teaching a student about debugging electronic circuits. Bonnie Nash-Webber describes the role played by semantics in the understanding of continuous speech in a limited domain of discourse.</Paragraph>
    <Paragraph position="3"> Allan Collins, Eleanor H. Warnock, Nelleke Aiello, and Mark L. Miller describe a continuation of work on Jaime Carbonell's SCHOLAR system. They examine how humans use strategies to find reasonable answers to questions for which they do not have the knowledge to answer with certainty, and how people can be taught to reason this way.</Paragraph>
  </Section>
  <Section position="18" start_page="77" end_page="1972" type="metho">
    <SectionTitle>
REPRESENTATION AND UNDERSTANDING
</SectionTitle>
    <Paragraph position="0"> Acknowledgments. We are grateful for the help of a large number of people who made the conference and this book possible. The conference participants, not all of whom are represented in this book, created an atmosphere in which interdisciplinary exploration became a joy. The people attending were: From Bolt Beranek and Newman--Joe Becker, Rusty Bobrow, John Brown, Allan Collins, Bill Merriam, Bonnie Nash-Webber, Eleanor Warnock, and Bill Woods.</Paragraph>
    <Paragraph position="1">  making it a comfortable atmosphere in which to discuss some very difficult technical issues. Carol Van Jepmond was responsible for typing, editing, and formatting the manuscripts to meet the specifications of the systems used in the production of this book. It is thanks to her skill and effort that the book looks as beautiful as it does. June Stein did the final copy editing, made general corrections, and gave many valuable suggestions on format and layout.</Paragraph>
    <Paragraph position="2"> Photo-ready copy was produced with the aid of experimental formatting, illustration, and printing systems built at the Xerox Palo Alto Research Center. We would like to thank Matt Heiler, Ron Kaplan, Ben Kuipers, William Newman, Ron Rider, Bob Sproull, and Larry Tesler for their help in making photo-ready production of this book possible. We are grateful to the Computer Science Laboratory of the Xerox Palo Alto Research Center for making available the experimental facilities and for its continuing support.</Paragraph>
    <Paragraph position="3">  In R. Schank and B.L. Nash-Webber, eds., Theoretical Issues in Natural Language Processing, 1975, 62-51.</Paragraph>
    <Paragraph position="4"> Frames are static structures about one stereotyped topic. Each frame has many statements about the topic, each expressed in a suitable semantic representation. The primary goal in understanding is to find instances of frame statements in the discourse. Questions about a source statement can be answered by reference to the frame of which it is an instance.</Paragraph>
    <Paragraph position="5">  By systematic application of a cognitive network or similar theory of knowledge, the internal structure of a (medical) code can be improved and tools developed for different purposes. Hays's theory uses paradigmatic, syntagmatic, discursive, attitudinal, and metalingual (MTL) arcs. The MTL arcs shift level of abstraction; e.g., anemia is neither a fewness nor an erythrocyte but an abstract condition. An abstract definition can include several syntagmatic propositions, linked discursively. A medical term can be linked by MTL to definitions in different languages (clinical, pathophysiological, etc.)  A data structure scheme for creating structured concept nodes in a semantic network is presented, with structuring techniques based on a set of primitive link types including: defined-as, attribute, part, modality, role, structural/condition, value/restriction, subconcept and superconcept. This structure will store descriptions of bibliographic references in a way that will facilitate the important processes of inference, paraphrase and analogy.  Tulving's episodic memory is seen as a record of experiences and their context. However, both episodic and semantic memories must have similar power of representation, so their structures are not distinguishable. Similarly, a lexical memory must have the power to represent propositional information about words. Thus, the fabric of knowledge is merely cut into different shapes.  In: R. Schank and B.L. Nash-Webber, eds., Theoretical Issues in Natural Language Processing, 1975, 79-83.</Paragraph>
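The link-typed concept-node scheme can be sketched minimally; the class layout and the example concepts below are illustrative assumptions, with only the link-type names taken from the abstract:

```python
# The primitive link types listed in the abstract:
LINK_TYPES = {"defined-as", "attribute", "part", "modality", "role",
              "structural/condition", "value/restriction",
              "subconcept", "superconcept"}

class Concept:
    """A structured concept node holding typed links to other nodes."""
    def __init__(self, name):
        self.name = name
        self.links = []                      # list of (link type, target Concept)

    def link(self, link_type, target):
        assert link_type in LINK_TYPES, "unknown primitive link type"
        self.links.append((link_type, target))

# A bibliographic reference described as a node with a 'part' link:
doc = Concept("bibliographic-reference")
author = Concept("author")
doc.link("part", author)
```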
    <Paragraph position="6"> A uniform formal structure for the interpretation of events, initiation of actions, understanding language, and using language The components of the system are CONTROL --the procedura is component; SCHEMATA --a lattice whose points are lexical decompoeitions; LEXICON --non-definitional infomation; BELIEFS --a closed and consistent set of statements in a predicate calculus; and GOALS.</Paragraph>
    <Section position="1" start_page="77" end_page="77" type="sub_section">
      <SectionTitle>
Semantics - Discourse : Memory
Andrew Ortony
University of Illinois at Urbana-Champaign
</SectionTitle>
      <Paragraph position="0"> In: R. Schank and B.L. Nash-Webber, Eds., Theoretical Issues in Natural Language Processing, 1975, 55-60.</Paragraph>
      <Paragraph position="1"> The distinction between semantic and episodic memory is not so much one between different kinds of memory, but one between different kinds of knowledge. The distinction has been rejected because it is said that, since we know everything from experience, there is no room for the distinction. The error lies in confusing knowledge from experience with knowledge of experience. Semantic knowledge is knowledge that has been reorganized around concepts from knowledge originally encoded around events; it is stripped of personal experience. One question raised by the distinction is how information gets into semantic memory, and how and when it gets lost from episodic memory.</Paragraph>
    </Section>
    <Section position="2" start_page="77" end_page="77" type="sub_section">
      <SectionTitle>
University of Rochester
New York
</SectionTitle>
      <Paragraph position="0"> In: R. Schank and B.L. Nash-Webber, eds., Theoretical Issues in Natural Language Processing, 1975, 92-93.</Paragraph>
      <Paragraph position="1"> There is evidence that people use three-dimensional models and that they integrate several views into a single model. This is counter to the claim that we symbolically store a large number of separate views. Another problem is with the assumption of default values for slots in frames. In the extreme, this gives visual perception without vision. The evidence is that people can understand totally unexpected images presented for quite short periods. A third point concerns the relatively static nature of frames. A better model is to construct a goal-oriented subsystem making use of context-specific knowledge.</Paragraph>
      <Paragraph position="2">  Stories are broken down into schemata, e.g., plot plus moral.</Paragraph>
      <Paragraph position="3"> Questions about schemata are: what are the essential ingredients of a schema; are some more abstract than others; and how are they to be discovered--by imagination and intuition?  In: R. Schank and B.L. Nash-Webber, eds., Theoretical Issues in Natural Language Processing, 1975, 94-103.</Paragraph>
      <Paragraph position="4"> Stereotypes are actor versions of frames. A stereotype has the following parts: a collection of characteristic objects, characteristic relations for these objects, and invocable plans for transforming the objects and relations.</Paragraph>
      <Paragraph position="5">  Frames are data structures for representing stereotyped situations. Each frame contains information about how to use the frame, what to expect to happen next, and what to do if the expectations are not fulfilled. Lower levels of a frame have terminals that can be filled by specific instances from source statements. Frames are linked together into a frame system and the action to go from one to another indicated.</Paragraph>
      <Paragraph position="6"> Different frames can share the same terminals. Unfilled slots in instances of frames are filled by default options from the general frame.</Paragraph>
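The terminal-and-default mechanism summarized above can be sketched in a few lines. This is a minimal illustration, not Minsky's own notation: the `Frame` class, its method names, and the sample "room" frame are all invented for the example; only the idea of an instance's unfilled slots falling back to the general frame's default options comes from the summary.

```python
# Sketch of a frame whose unfilled terminals fall back to the
# default options supplied by the general frame (hypothetical API).

class Frame:
    def __init__(self, name, defaults):
        self.name = name
        self.defaults = dict(defaults)   # terminal -> default filler
        self.filled = {}                 # terminals filled from source statements

    def fill(self, terminal, value):
        self.filled[terminal] = value

    def terminal(self, name):
        # A specific instance overrides the default; otherwise the
        # default option from the general frame is returned.
        return self.filled.get(name, self.defaults.get(name))

room = Frame("room", {"walls": 4, "ceiling": "flat", "door": "closed"})
room.fill("door", "open")
print(room.terminal("door"))   # filled from the source statement: open
print(room.terminal("walls"))  # default option: 4
```

Shared terminals between frames could be modeled by letting two `Frame` instances reference the same `filled` dictionary, which is how a frame system would carry bindings from one frame to the next.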
      <Paragraph position="7">  Processing, 1975, 117-121.</Paragraph>
      <Paragraph position="8"> A SCRIPT is a structure consisting of slots and requirements on what can fill the slots. It is defined as a predetermined causal chain of conceptualizations that describe the normal sequence of things in a familiar situation. A SCRIPT header defines the  Potential engineering applications. Inference algorithms for finite-state and context-free grammars. Application of some of the algorithms to the inference of pattern grammars in syntactic pattern recognition illustrated by examples.</Paragraph>
      <Paragraph position="9">  The health-care community has a functional dialect, with subdialects for physicians, nurses, etc.</Paragraph>
      <Paragraph position="10"> Anthropological study of the naming behavior of the community is a suitable preliminary step in thesaurus building. It would determine what are terms to be entered, how they are related, and what theoretical differences require alternative definitions of the same term.</Paragraph>
      <Paragraph position="11">  Using for illustration a recognition system for chromosome structures, methods are developed which basically consist of applying error transformations to the productions of context-free grammars in order to generate new context-free grammars capable of describing not only the original error-free patterns, but also patterns containing specific types of errors such as deleted, added, and interchanged symbols which often arise in the pattern-scanning process.  the study of natural language: (a) emphasis on complex stored structures, (b) emphasis on the importance of real-world knowledge, (c) emphasis on the communicative function of sentences in context, and (d) emphasis on the expression of rules, structure and information within the operational environment. The only test of a natural language system is its success on a task; any demand for more theory must bear this in mind. Neither can recent work in AI be regarded as theoretical; it is the semi-formal expression of intuition. AI is engineering, not a science, and as such there is no boundary to natural language; one counter-example does not overthrow a rule system. Further, talk of theory distracts from heuristics.</Paragraph>
    </Section>
    <Section position="3" start_page="77" end_page="77" type="sub_section">
      <SectionTitle>
Depattment of Computer Science
University of British Columbia
</SectionTitle>
      <Paragraph position="0"> In: R. Schank and B.L. Nash-Webber, Eds., Theoretical Issues in Natural Language Processing, 1975, 175-179.</Paragraph>
      <Paragraph position="1"> There are two mechanisms for formal reasoning: (a) the resolution principle, a competence model, by virtue of its completeness, and (b) natural deductive systems, which are attempts to define a performance model for logical reasoning. A system could be designed that interfaces the two systems, each doing what it does best.</Paragraph>
      <Paragraph position="2"> Natural deductive systems have not considered fuzzy kinds of reasoning. Future questions concern other quantifiers, concepts for representing wanting, needing, etc., and the balance between computation and deduction.</Paragraph>
    </Section>
    <Section position="4" start_page="77" end_page="77" type="sub_section">
      <SectionTitle>
Department of Computer Science
University of Maryland
</SectionTitle>
      <Paragraph position="0"> In: R. Schank and B.L. Nash-Webber, Eds., Theoretical Issues in Natural Language Processing, 1975, 180-195.</Paragraph>
      <Paragraph position="1"> Commonsense algorithms are basic structures for modeling human cognition. The structure is defined by specifying a set of links which build up large structures of nodes of five types: Wants, Actions, States, Statechanges and Tendencies. There are 25 primitive links, e.g., one-shot causality, action concurrency, inducement. Various applications are active problem solving, basis for conceptual representation of language, basis of a self-model, etc.</Paragraph>
    </Section>
    <Section position="5" start_page="77" end_page="77" type="sub_section">
      <SectionTitle>
Computation : Inference
Charles F. Schmidt
Rutgers University
</SectionTitle>
      <Paragraph position="0"> New Brunswick, New Jersey In: R. Schank and B.L. Nash-Webber, Eds., Theoretical Issues in Natural Language Processing, 1975, 196-200.</Paragraph>
      <Paragraph position="1"> A model of reasoning about human action must include (1) how people arrive at a plan, (2) what can count as a reason for choosing to perform the plan, and (3) discovering plans and motivations from observation or linguistic report of actions. A plan is the internal representation or set of beliefs about how a particular goal may be achieved. The belief by an observer that an actor performed one act to enable a second to be performed can follow neither from deductive nor inductive reasoning. An observer may have other propositions that are reasons for believing or not believing that a plan correctly characterizes the beliefs of the actor. An act name organizes a set of beliefs about how a move of this type might relate to other moves, and the cognitive and motivational states of the actors.</Paragraph>
    </Section>
    <Section position="6" start_page="77" end_page="1972" type="sub_section">
      <SectionTitle>
Department of Computer and Informatian Science
The Moore School of Electrical Engineering
University of Pennsylvania
</SectionTitle>
      <Paragraph position="0"> Among popular computer programming languages, SNOBOL4 stands out as the only one offering complex pattern definition and matching capabilities. It also has a flexible function definition facility and programmer-defined data types. While not unique, these two features encourage problem-dependent extensions of the language. All three aspects of SNOBOL4 form the basic tools in Griswold's new book. Intended as a text for the SNOBOL4 user (it is not an &amp;quot;introductory&amp;quot; text), it presents techniques for the representation and manipulation of data in string, list, or otherwise &amp;quot;structured&amp;quot; form. The text includes many programmed examples, problems with a wide range of difficulty, and answers to many of these problems.</Paragraph>
      <Paragraph position="1"> The first three chapters develop pattern matching, function definition, and data structures. The last four chapters examine particular application domains: mathematics, cryptography, document preparation, plus a few more specialized problems. Although this may seem to ignore computational linguistics, the greatest immediate benefit for the programmer lies in the first three chapters anyway.</Paragraph>
      <Paragraph position="2"> Within Chapter 1, the section on grammars and patterns can be used for the implementation of simple syntactic analysis. For example, there is a straightforward mapping of a BNF grammar into SNOBOL4 patterns, but there are pitfalls (as well as some more efficient representations in the balance) that the programmer ought to know.</Paragraph>
      <Paragraph position="3"> These are carefully explained.</Paragraph>
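The mapping the reviewer mentions can be suggested in Python rather than SNOBOL4: each BNF nonterminal becomes a matching function that tries its alternatives in order. The toy grammar and function names here are illustrative, not taken from Griswold's book; the left-recursion warning at the end is one of the standard pitfalls such a direct mapping shares with SNOBOL4 patterns.

```python
# Rough Python stand-in for mapping a BNF grammar into patterns.
# Each function returns the index just past a successful match, or None.

def match_digit(s, i):
    return i + 1 if i < len(s) and s[i].isdigit() else None

def match_number(s, i):          # <number> ::= <digit> <number> | <digit>
    j = match_digit(s, i)
    if j is None:
        return None
    k = match_number(s, j)       # greedy: try the longer alternative first
    return k if k is not None else j

def match_expr(s, i):            # <expr> ::= <number> "+" <expr> | <number>
    j = match_number(s, i)
    if j is None:
        return None
    if j < len(s) and s[j] == "+":
        k = match_expr(s, j + 1)
        if k is not None:
            return k
    return j

# Pitfall: a left-recursive rule such as <expr> ::= <expr> "+" <number>
# would recurse forever under this direct mapping.

print(match_expr("12+34", 0))  # 5: the whole string matches
```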
      <Paragraph position="4"> A topic that I felt was inadequately covered in Chapter 1 was the definition of the pattern matching mechanism itself. The immediate presentation of examples using pattern matching (page 2) calls for a brief overview of pattern matching syntax and semantics. Surely a programmer would appreciate not having to refer back to his introductory text should some pattern function or construct be hazy in his memory. Even an appendix would be satisfactory. In addition, this would support the section on patterns as procedures by providing the underlying semantics for such &amp;quot;procedures&amp;quot;. Further incentive for its inclusion is provided by the excellent review of programmer-defined data types in Chapter 3. Why leave pattern matching to the user's recollection? The function definition facility discussed in Chapter 2 enables the construction of generic functions. Since there are no data type declarations for function arguments or parameters, often only one function is required for the execution of related operations on various data types. The proliferation of functions in a complex system might therefore be systematically reduced. The burden falls on the programmer, of course, to sort out the admissible combinations or appropriate actions. An addition function for real and complex numbers is discussed, where the former is a SNOBOL4 primitive and the latter is constructed from programmer-defined data types. Although not in the realm of computational linguistics, it does have a parallel, for example, in a function which inserts data into a semantic network and is expected to handle various chunks of network as well as atomic data. The data type might only be determined during program execution; using a generic function avoids distracting logic within the user's primary function.</Paragraph>
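The generic-addition example works in any language that, like SNOBOL4, dispatches on the runtime type of arguments rather than on declarations. The sketch below is a hypothetical Python analogue: the `ComplexNum` class stands in for a programmer-defined data type, and `add` sorts out the admissible combinations at run time, as the paragraph describes.

```python
# Sketch of a generic addition function: one function handles both a
# primitive numeric type and a programmer-defined one (names invented).

class ComplexNum:                      # stand-in for a defined data type
    def __init__(self, re, im):
        self.re, self.im = re, im

def add(a, b):
    # Dispatch on the runtime type, as a SNOBOL4 generic function would.
    if isinstance(a, ComplexNum) or isinstance(b, ComplexNum):
        ar, ai = (a.re, a.im) if isinstance(a, ComplexNum) else (a, 0)
        br, bi = (b.re, b.im) if isinstance(b, ComplexNum) else (b, 0)
        return ComplexNum(ar + br, ai + bi)
    return a + b                       # both arguments are primitive reals

r = add(1.5, 2.5)                      # real + real -> real
c = add(ComplexNum(1, 2), 3)           # defined type + real -> defined type
print(r, c.re, c.im)
```

The semantic-network parallel would look the same: one insertion function testing whether its argument is an atomic datum or a chunk of network, keeping the dispatch out of the caller's primary logic.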
      <Paragraph position="5"> The section on functions as generators is a little weak from the point of view of computational linguistic requirements for procedures which generate successive alternatives from a complex structure, for example, sentence parsing or referent resolution.</Paragraph>
      <Paragraph position="6"> The use of simple global variables is too limited in these contexts; one often needs to become involved with saving the values of several local variables in special data blocks or stacking the decision points associated with alternatives. The first is a well-known compiler-design technique, while the second involves a backtracking control structure. In fact, an excellent illustration of these ideas would be an implementation of the SNOBOL4 pattern matching system in SNOBOL4.</Paragraph>
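Python generators make the state-saving the reviewer calls for explicit: each suspended call keeps its own local variables, so a caller can draw successive alternatives one at a time without global bookkeeping. The tiny lexicon and the word-segmentation task below are invented purely to illustrate the backtracking pattern, not taken from the book.

```python
# Sketch of "functions as generators" with proper state saving:
# backtracking enumeration of alternatives, one per request.

LEXICON = {"a", "an", "apple", "nap", "ple"}

def segmentations(s):
    # Yield every way to split s into lexicon words.
    if not s:
        yield []
        return
    for i in range(1, len(s) + 1):
        word = s[:i]
        if word in LEXICON:
            for rest in segmentations(s[i:]):
                yield [word] + rest
        # on failure, the loop simply tries the next split point:
        # this is the backtracking control structure

# The caller draws alternatives on demand, as a parser or referent
# resolver would, instead of juggling global variables.
for seg in segmentations("anapple"):
    print(seg)
```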
      <Paragraph position="7"> Chapter 3 is the most useful because it describes how programmer-defined data types can be used to build &amp;quot;structures&amp;quot;: stacks, queues, linked lists, binary trees, and trees. The skillful user of such representations will find a reduced role for complicated pattern matching expressions because the implicit structure encoded into a string becomes manifest in the explicit links of the structure. Not only is there often an economic advantage, but the semantics of SNOBOL4 are easier to use than the implicit backtracking semantics of pattern matching. (Griswold himself points this out in the section on patterns as procedures.) The programmer is encouraged to consider economic trade-offs in the implementation of structures. Often overlooked questions are addressed: for example, the relative merits of implementing stacks using strings, arrays, tables, or defined data types. Programs for the use or traversal of structures are also provided.</Paragraph>
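The trade-off between a string-encoded structure and explicit links can be sketched quickly. This is a hypothetical Python illustration of the comparison, not Griswold's code: a stack kept as a character string (every push or pop copies the string) versus one kept as linked cells (constant-time operations, more storage per element).

```python
# Two stack representations, echoing the trade-offs discussed above.

def push_str(stack, ch):
    return stack + ch            # copies the whole string each time

def pop_str(stack):
    return stack[-1], stack[:-1]

class Cell:                      # explicit-link representation
    def __init__(self, value, rest):
        self.value, self.rest = value, rest

def push_cell(stack, value):
    return Cell(value, stack)    # O(1): allocate one new cell

def pop_cell(stack):
    return stack.value, stack.rest

s = push_str(push_str("", "a"), "b")
top, s = pop_str(s)
print(top)                       # b

c = push_cell(push_cell(None, "a"), "b")
top, c = pop_cell(c)
print(top)                       # b
```

The string version is compact and trivially printable; the cell version makes the structure manifest in explicit links, which is precisely the advantage the review attributes to programmer-defined data types.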
      <Paragraph position="8"> Although exercise 3.40 requests a representation for directed graphs, neither hint nor answer is provided. The computational linguist having an interest in semantic networks or similar associative structures is thus left to his own expertise. The basic tree representation must be significantly modified to incorporate labelled edges, a means of traversal (search) through the edge set, and, of course, non-tree structures.</Paragraph>
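One conventional way to meet the three requirements just listed--labelled edges, traversal through the edge set, and non-tree structure--is an adjacency list keyed by edge label. The `SemNet` class and the sample network below are invented for illustration; they are not from the book or the exercise's answer.

```python
# Sketch of a labelled directed graph suitable for semantic networks.

from collections import defaultdict

class SemNet:
    def __init__(self):
        # node -> list of (edge label, target node); permits cycles
        # and shared subnodes, which a tree representation cannot.
        self.edges = defaultdict(list)

    def add(self, src, label, dst):
        self.edges[src].append((label, dst))

    def follow(self, src, label):
        # Traverse by edge label rather than by child position.
        return [dst for (lab, dst) in self.edges[src] if lab == label]

net = SemNet()
net.add("canary", "isa", "bird")
net.add("bird", "isa", "animal")
net.add("bird", "has-part", "wings")
net.add("canary", "color", "yellow")

print(net.follow("canary", "isa"))    # ['bird']
print(net.follow("bird", "has-part")) # ['wings']
```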
      <Paragraph position="9"> Griswold apologizes for not covering every application, but the generality and current popularity of networks for the representation of knowledge calls for expanded treatment of the topic.</Paragraph>
      <Paragraph position="10"> Among the applications covered in detail, the ones most relevant to computational linguistics include a random sentence generator (from a grammar), a macro processor, and (perhaps) a context editor. The input and output of textual material is covered in depth under document preparation (Chapter 6). Since the text does not delve into computational linguistics per se, the reader (or instructor) will often be called upon to map techniques described in the text onto his own problem. I think that a good programmer would be able to perform this transformation since solutions are provided for many of the basic problems in handling input text, setting up data structures, and traversing these structures.</Paragraph>
      <Paragraph position="11"> Before you begin programming your next computational linguistics project, a glance through this book may save you considerable programming time and reward you with usable and flexible data structures. Even if you do not program in SNOBOL4, the techniques presented here might guide you to more efficient usage of other languages. On the other hand, it might convince you to try SNOBOL4.</Paragraph>
      <Paragraph position="12">  Reviewed by Richard J. Miller St. Olaf College A practical guide for the occasional Fortran IV programmer to the basic &amp;quot;tricks&amp;quot; and vocabulary used by the systems programmers.</Paragraph>
      <Paragraph position="13"> This book ranges over topics from plotting on a line printer to hashing and basic storage structures (stacks, queues, etc.) using a concise, to-the-point writing style. This style reinforces the stated intention of the book, which is to help a programmer with a problem by providing descriptions of non-mathematical techniques. The style and intention do limit the usefulness of this book, as some of the topics would be well known to advanced programmers and are not covered in sufficient depth for such a person. It is then the area between these two extremes to which this book is aimed, and there it can be of great service.</Paragraph>
      <Paragraph position="14"> The only important assumption made of the reader is that he know the variable types of Fortran (integer, real, Hollerith, etc.) and their attendant format specifications. A good knowledge of character formats is especially useful, although the major use for them is in output statements used in the examples given in the book. It is also assumed that the reader knows the basic Fortran statements, but this is a simple matter as opposed to the format and variable type problems which confront a Fortran programmer.</Paragraph>
      <Paragraph position="15"> The book also includes several exercises at the end of each chapter (answers not supplied, unfortunately) and a short but very complete bibliography which includes several sources for each chapter.</Paragraph>
      <Paragraph position="16"> The book's primary value is as a source for hints to problems encountered during programming, providing an introduction to the more sophisticated literature which can be found by starting with the bibliography. This book is therefore a starting point for picking up a basic vocabulary, techniques, and references for someone who has just completed a programming course or who needs a quick introduction to some technique which he may want to look at later in more detail.</Paragraph>
      <Paragraph position="17">  In: R. Schank and B.L. Nash-Webber, Eds., Theoretical Issues in Natural Language Processing, 1975, 146-150.</Paragraph>
      <Paragraph position="18"> A computer graphics metaphor is useful for human visual imagery. Analogous properties are found: as objects become smaller their constituent parts become more difficult to discern perceptually; as more parts are added to an image it becomes more degraded due to capacity limitations; images displaying more identifiable details take longer to construct; images cannot be indefinitely expanded before overflowing; and an image has a decay time which affects the time taken to construct a new image.</Paragraph>
    </Section>
    <Section position="7" start_page="1972" end_page="1972" type="sub_section">
      <SectionTitle>
Department of Psychology
University of Western Ontario
London, Canada
</SectionTitle>
      <Paragraph position="0"> In: R. Schank and B.L. Nash-Webber, Eds., Theoretical Issues in Natural Language Processing, 1975, 160-163.</Paragraph>
      <Paragraph position="1"> Semantic structure is relative to the process that constructs and uses the representation. By positing analogue representations it is suggested that a process does not need to know the rules of transformation, e.g., rotation, but this is impossible unless the analogical modelling medium intrinsically follows the laws of physics, i.e., ascribing these laws to brain tissue.</Paragraph>
    </Section>
  </Section>
  <Section position="19" start_page="1972" end_page="1972" type="metho">
    <SectionTitle>
ANALOG/PROPOS ITIONAL CONTROVERSY
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="1972" end_page="1972" type="sub_section">
      <SectionTitle>
Berkeley
</SectionTitle>
      <Paragraph position="0"> In: R. Schank and B.L. Nash-Webber, Eds., Theoretical Issues in Natural Language Processing, 1975, 151-159. Sensory data is considered as having several levels of interpretation. At the sensory end, the representation is analog, and propositional at the cognitive end. Analog images are incorrectly seen as having all details of the stimulus whereas quasi-linguistic representations are only partial.</Paragraph>
      <Paragraph position="1"> The important issue is not the partiality but the selection, possibly information that discriminates the object in context. For structural information there needs to be a mechanism for both parts and wholes.</Paragraph>
      <Paragraph position="2">  Parametric information can be coded componentially and explicitly, but some seems to function integrally. It is claimed that structural perception is qualitative whereas parametric perception is quantitative, but structural elements may have quantitative aspects--their strength of association with different groups. Although both structure and parameters are encoded relative to other information, there is evidence of preferred orientation and perspectives for parameters.  In: R. Schank and B.L. Nash-Webber, Eds., Theoretical Issues in Natural Language Processing, 1975, 164-168.</Paragraph>
      <Paragraph position="3"> The distinction between Fregean (symbolic) and analogical representations is that in the latter both representation and thing must be complex and there must be correspondence between the structures, whereas in the former case there is no need for a correspondence.</Paragraph>
      <Paragraph position="4"> Attempts to subsume either representation under the other have not succeeded.</Paragraph>
      <Paragraph position="5"> There is a mistaken belief that only proofs in Fregean symbolism are rigorous.</Paragraph>
      <Paragraph position="6"> Although analogical representations can sometimes be implemented using Fregean ones, this does not imply that they are not used.</Paragraph>
    </Section>
  </Section>
</Paper>