<?xml version="1.0" standalone="yes"?> <Paper uid="W01-0815"> <Title>Evaluating text quality: judging output texts without a clear source</Title> <Section position="8" start_page="8" end_page="8" type="concl"> <SectionTitle> 6 Conclusions </SectionTitle> <Paragraph position="0"> So how can we go about judging whether the products of NLG systems express the intended message? A first step towards this goal is to enable symbolic authors to satisfy themselves that they have built the domain model they had in mind. Graphical feedback is too difficult to interpret, while natural language output that is optimised for the end-reader may not show the unequivocal fidelity to the domain model that the symbolic author requires.</Paragraph> <Paragraph position="1"> We have suggested that textual feedback, in a form close to a controlled language used for specifying software requirements, is a good candidate for this task. We have further outlined a method for incrementally refining this controlled language by monitoring symbolic authors' ability to construct reference domain models on the basis of controlled language feedback. The trade-off between transparency and naturalness in the output text intended for the end-reader will involve design decisions based on, among other things, reader profiling.</Paragraph> <Paragraph position="2"> Assessing the fidelity of the end-reader text to the model is also a necessary step, but not one that can be conflated with, or precede, that of validating the accuracy of the model with respect to the author's intentions.</Paragraph> </Section></Paper>