
Static Video Based Visual-Verbal Exemplar for Recognizing Gestures of Indian Sign Language

P.V.V. Kishore, P. Rajesh Kumar, A. Arjuna Rao

Abstract


This paper presents a system for recognizing gestures of Indian Sign Language from gesture videos. The proposed system is based on elliptical Fourier descriptors and neural networks for gesture pattern recognition. Unlike systems proposed by other researchers, which rely on aids such as radio-frequency or colored gloves, our system imposes no such constraints on the signer. Features are extracted from videos of signers using elliptical Fourier descriptors, and principal component analysis then greatly reduces the size of the feature vector. A neural network trained with the error back-propagation algorithm recognizes the gestures of Indian Sign Language, and the system converts each recognized gesture into voice and text messages. The system was evaluated on 440 sample videos of gestures of alphanumeric characters and words, with a maximum of 5 videos per gesture. Experimental results show that the neural network recognizes gestures and converts them to voice messages with an accuracy of 92.52%.
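As a minimal sketch of the feature-extraction stage described in the abstract, the Python snippet below computes Kuhl-Giardina elliptical Fourier coefficients for a closed hand contour (assumed to have been traced from a Canny edge map) and then reduces the stacked coefficient vectors with principal component analysis. The function names, the descriptor order, and the number of retained components are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def elliptical_fourier_descriptors(contour, order=10):
    """Kuhl-Giardina elliptical Fourier coefficients of a closed contour.

    contour: (K, 2) array of (x, y) boundary points; consecutive points are
    assumed distinct. Returns an (order, 4) array of rows [a_n, b_n, c_n, d_n].
    """
    deltas = np.diff(contour, axis=0, append=contour[:1])   # close the contour
    dt = np.hypot(deltas[:, 0], deltas[:, 1])               # arc-length steps
    t = np.concatenate(([0.0], np.cumsum(dt)))              # cumulative arc length
    T = t[-1]                                               # total perimeter

    coeffs = np.zeros((order, 4))
    for n in range(1, order + 1):
        c = T / (2.0 * n**2 * np.pi**2)
        phi = 2.0 * n * np.pi * t / T
        d_cos = np.cos(phi[1:]) - np.cos(phi[:-1])
        d_sin = np.sin(phi[1:]) - np.sin(phi[:-1])
        coeffs[n - 1] = c * np.array([
            np.sum(deltas[:, 0] / dt * d_cos),   # a_n
            np.sum(deltas[:, 0] / dt * d_sin),   # b_n
            np.sum(deltas[:, 1] / dt * d_cos),   # c_n
            np.sum(deltas[:, 1] / dt * d_sin),   # d_n
        ])
    return coeffs

def pca_reduce(features, n_components=20):
    """Project row-wise feature vectors onto their top principal components."""
    mean = features.mean(axis=0)
    centered = features - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # rows of vt = PCs
    return centered @ vt[:n_components].T
```

Under this reading of the pipeline, the reduced feature vectors would then be fed to a feed-forward neural network trained with error back-propagation, one output unit per gesture class, with the winning class mapped to its stored text and voice message.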

Keywords


Sign Language Recognition, Artificial Neural Networks, Elliptical Fourier Descriptors, Canny Edge Detector





This work is licensed under a Creative Commons Attribution 3.0 License.