Tamil Text-To-Speech Synthesizer Using Festival Framework
The aim of this project is to develop a Text-to-Speech (TTS) synthesizer for the Tamil language, intended for use by illiterate people. A secondary objective is to enable Tamil linguists to correct the synthesizer's pronunciation; this correction mechanism improves the synthesizer over time through the collective experience of the linguists. The synthesizer is built on the FESTIVAL framework, which supports the construction of high-quality synthetic voices with substantially fewer resources and in a short span of time. The user interface for pronunciation prediction and correction is developed using Microsoft Visual Studio .NET.
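The abstract does not give implementation details, but the described correction mechanism, where linguist-supplied pronunciations override the synthesizer's predictions and accumulate over time, can be sketched as follows. This is an illustrative assumption, not the paper's actual design; the class, method names, and the romanized Tamil word and phoneme strings are all hypothetical.

```python
class PronunciationLexicon:
    """Hypothetical sketch of a correctable pronunciation lexicon:
    machine-predicted pronunciations are kept separately from
    linguist corrections, and corrections always take precedence."""

    def __init__(self):
        self.predicted = {}   # word -> machine-predicted phoneme string
        self.corrected = {}   # word -> linguist-corrected phoneme string

    def predict(self, word, phonemes):
        # Record the synthesizer's automatic pronunciation prediction.
        self.predicted[word] = phonemes

    def correct(self, word, phonemes):
        # A linguist's correction persists and overrides the prediction.
        self.corrected[word] = phonemes

    def lookup(self, word):
        # Prefer the corrected form; fall back to the prediction.
        return self.corrected.get(word, self.predicted.get(word))


lex = PronunciationLexicon()
lex.predict("vanakkam", "v a n a k a m")    # hypothetical predicted form
lex.correct("vanakkam", "v a n a k k a m")  # linguist restores geminate k
print(lex.lookup("vanakkam"))               # the corrected form is returned
```

Over time, the `corrected` store would grow with the linguists' collective input, which is the "collective experience" improvement the abstract refers to.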