<?xml version="1.0" standalone="yes"?>
<Paper uid="W02-0706">
<Title>Architectures for speech-to-speech translation using finite-state models</Title>
<Section position="6" start_page="0" end_page="0" type="concl">
<SectionTitle> 4 Conclusions </SectionTitle>
<Paragraph position="0"> Several systems for speech-to-speech translation based on SFSTs have been implemented. Some of them translate from Italian to English and the others from Spanish to English. All of them support any kind of finite-state translation model and run on low-cost hardware.</Paragraph>
<Paragraph position="1"> They are currently accessible through standard telephone lines, with response times close to or better than real time.</Paragraph>
<Paragraph position="2"> The results presented suggest that the integrated architecture achieves better results than the serial architecture when enough training data is available to train the SFST. However, when the training data is insufficient, the serial architecture outperforms the integrated one. This effect can be explained by the fact that the source language models used in the experiments with the serial architecture were smoothed trigrams. With sufficient training data, the source language model associated with an SFST learnt by the MGTI or OMEGA techniques is better than trigrams (Section 2.1). With insufficient training data, however, these source language models were worse than trigrams, and consequently a significant degradation is produced in the implicit decoding of the input utterance.</Paragraph>
</Section>
</Paper>