File Information
File: 05-lr/acl_arc_1_sum/cleansed_text/xml_by_section/evalu/90/w90-0122_evalu.xml
Size: 5,677 bytes
Last Modified: 2025-10-06 14:00:02
<?xml version="1.0" standalone="yes"?> <Paper uid="W90-0122"> <Title>The Computer Generation of Speech with Discoursally and Semantically Motivated Intonation</Title> <Section position="8" start_page="170" end_page="171" type="evalu"> <SectionTitle> 4. Conclusions 4.1. Overall Summary </SectionTitle> <Paragraph position="0"> The COMMUNAL project began with the hope that it would be possible to take the insights from a Hallidayan-Tenchian view of intonation and to develop a computational adaptation and implementation of them. A promising overall approach to the problem has indeed been developed; much of the resulting model has been worked out in considerable detail; and many large and significant portions have been implemented computationally. The framework has proved itself to be adaptable when modifications are indicated, and there is good reason to hope that aspects not yet worked out explicitly will prove to be solvable in the framework of the present model.</Paragraph> <Paragraph position="1"> There is, therefore, the exciting prospect that, when our sister project gets under way and provides the necessary complementary components (no doubt with some requirements on us to adapt our outputs to their needs), we shall be in a position to offer a relatively full model of speech with discoursally and semantically motivated intonation. It will, moreover, be a principled model, and we hope that it will be capable of further extension and of fine-tuning.</Paragraph> <Paragraph position="2"> We feel that the use of SFG, and specifically of the type that clearly separates system networks from realization rules (as in GENESYS), gives us a facility that is sensitive to the need for both extension and fine-tuning. Above all, the centrality in the model of choice between semantic features makes it a natural formalism for relating the 'sentence grammar' to higher components in the overall model.</Paragraph>
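<Paragraph position="3"> To make the architectural point concrete, the following is a minimal sketch, in Python, of the kind of separation described above: a system network recording choices between features is held apart from the realization rules that map selected features onto structure. The systems, features, and rules shown here are invented for illustration only; they are not the GENESYS grammar or its implementation.

# Hypothetical sketch of "system networks kept separate from realization rules".
# Nothing here reproduces GENESYS; the features and rules are invented.

# System network: each system offers a choice between features.
SYSTEM_NETWORK = {
    "MOOD": ["declarative", "interrogative", "imperative"],
    "POLARITY": ["positive", "negative"],
}

# Realization rules, stored separately: each maps a chosen feature to an
# operation on the growing syntactic unit.
def realize_declarative(unit):
    unit["order"] = ["Subject", "Finite"]

def realize_interrogative(unit):
    unit["order"] = ["Finite", "Subject"]

def realize_negative(unit):
    unit.setdefault("items", []).append("not")

REALIZATION_RULES = {
    "declarative": realize_declarative,
    "interrogative": realize_interrogative,
    "negative": realize_negative,
}

def generate(selections):
    """Apply the realization rules for a set of feature selections."""
    unit = {}
    for system, feature in selections.items():
        if feature not in SYSTEM_NETWORK[system]:
            raise ValueError(f"{feature!r} is not a feature of {system}")
        rule = REALIZATION_RULES.get(feature)
        if rule:
            rule(unit)
    return unit

# Example: selecting [interrogative, negative] yields Finite^Subject order plus 'not'.
print(generate({"MOOD": "interrogative", "POLARITY": "negative"}))

On this reading, extending coverage means adding features to a system and entries to the separate rule table rather than rewriting procedures, which is one way of seeing why such a formalism lends itself to extension and fine-tuning.</Paragraph>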
<Section position="1" start_page="170" end_page="171" type="sub_section"> <SectionTitle> 4.2. The General Prospect in NLG </SectionTitle> <Paragraph position="0"> Finally, let me turn to a more general point. It appears that, increasingly over the last few years, the focus of interest for many researchers in NLG has switched from what we might term sentence generation to higher-level planning (which I term discourse generation). It is here, one sometimes hears it said, that 'all the really interesting work' is being done. Going implicitly with this claim is the assumption, which I have occasionally heard expressed quite explicitly, that the major problems of sentence generation have been solved.</Paragraph> <Paragraph position="1"> But is this really so? While a lot of very impressive work has been done, and while some quite large generators have been built (e.g. as reported in McDonald 1985, Mann and Matthiessen 1985, Fawcett 1990), very many major problems remain unresolved. Specifically, many important aspects of 'sentence grammar' remain outside the scope of current generators. Where, for example, will we find a full description of a semantically and/or pragmatically motivated model of even such a well-known syntactic phenomenon as the relative clause? And what about comparative constructions (where even the linguistics literature is weak)? And there are many, many more areas of the semantics and syntax of sentences where our models are still far from adequate. There are also many issues of model construction regarding, for example, the optimal division of labour between components, the outlining of which deserves a separate paper (or book). And, even if we had models that covered all these and the many other areas competently, we have hardly begun the process of developing adequate methods for the comparison and evaluation of models. Thus there is still an enormous amount of challenging and fascinating work to do before we can say with any confidence that we have anything like adequate sentence generators. (A senior figure in German NLP circles suggested at COLING '88 that one can buy good sentence generators off the shelf. It depends how good 'good' is.) In this paper I have illustrated two crucial points: (1) that there are indeed significant areas of language not yet adequately covered in current generators, and (less clearly, because I have had to omit the relevant section for reasons of space) (2) that the development of an adequate model of these depends on the concurrent development of discourse and sentence generators.</Paragraph> <Paragraph position="2"> Clearly, while there are in existence a number of fairly large sentence generators, we have in no way reached a situation where no further work needs to be done. I am aware, as the director of a project that seeks to provide rich coverage for as much of English as possible, that we have a great deal of work still to do, and that this holds for the sentence generator component as well as for the discourse planning systems. GENESYS already has 50% more systems than NIGEL (in the long-established Penman Project; see Appendix 1), but our rough estimate is that we need to make it at least as large again before we have anything approaching full grammatical coverage. And, of course, as everyone who has wrestled seriously with genuine natural language knows, many tricky problems will remain even then. Finding anything like the 'right' solution to many of these will require, I claim, models that have developed, in close interaction with each other, their discourse planning and their sentence generation components and their belief representation, including beliefs about the addressee.</Paragraph> </Section> </Section> </Paper>