File Information
File: 05-lr/acl_arc_1_sum/cleansed_text/xml_by_section/concl/98/p98-2165_concl.xml
Size: 1,795 bytes
Last Modified: 2025-10-06 13:58:10
<?xml version="1.0" standalone="yes"?>
<Paper uid="P98-2165">
<Title>Learning Intonation Rules for Concept to Speech Generation</Title>
<Section position="8" start_page="1007" end_page="1008" type="concl">
<SectionTitle>6 Conclusion and Future Work</SectionTitle>
<Paragraph position="0"> In this paper, we describe an effective way to automatically learn intonation rules. This work is unique and original in its use of linguistic features provided in a general-purpose NLG tool to build intonation models. The machine-learned rules consistently performed well over all intonation features, with accuracies around 90% for break index, phrase accent, and boundary tone.</Paragraph>
<Paragraph position="1"> For pitch accent, the model accuracy is around 80%. This yields a significant improvement over the baseline models and compares well with other TTS evaluations. Since we used a different data set from those used in previous TTS experiments, we cannot accurately quantify the difference in results; we plan to carry out experiments in the future to evaluate CTS versus TTS performance on the same data set. We also designed an intonation generation architecture for our spoken language generation component, in which the intonation generation module dynamically applies newly learned rules, facilitating updates to the intonation model.</Paragraph>
<Paragraph position="2"> In the future, discourse and pragmatic information will be investigated using the same methodology. We will collect a larger speech corpus to improve the accuracy of the rules. Finally, an integrated spoken language generation system based on FUF/SURGE will be developed building on the results of this research.</Paragraph>
</Section>
</Paper>