
Fast Edge-Based Arabic Sign Language Recognition Using Probabilistic Neural Network

Elsayed E. Hemayed, Allam S. Hassanien

Abstract


This paper introduces a new prototype sign-to-voice system that recognizes Arabic signs and converts them into their spoken equivalents, enabling deaf Arabic speakers to interact with hearing people. The proposed technique captures a color image of the hand gesture and converts it to the YCbCr color space, which allows skin regions to be extracted from color images efficiently and accurately under varying illumination. A Prewitt edge detector then extracts the edges of the segmented hand gesture. Because of its fast training process, a probabilistic neural network (PNN) is used at the classification stage; it uses a supervised training set to develop distribution functions within the pattern layer. In recall mode, these functions estimate the likelihood that an input feature vector belongs to each learned class. The class with the maximum score is selected and the corresponding sound clip is played. The proposed technique is used to recognize the Arabic sign language alphabet and the most common Arabic gestures. Specifically, we applied the technique to 106 different signs and gestures, achieving an average accuracy of 97.5% across three different signers in different situations. The technique was also applied successfully to Arabic fingerspelling. The details of the proposed technique and the experimental results are discussed in this paper.
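The recognition pipeline described above can be sketched in a few self-contained steps. The snippet below is an illustrative reconstruction, not the authors' implementation: the Cb/Cr skin thresholds are commonly cited values (the paper's exact bounds are not given here), the Prewitt operator uses the standard 3×3 kernels, and the PNN follows Specht's formulation with one Gaussian kernel per training pattern, summed per class and decided by argmax.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """ITU-R BT.601 RGB -> YCbCr conversion (full-range approximation)."""
    r, g, b = (img[..., c].astype(float) for c in range(3))
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(img, cb_range=(77, 127), cr_range=(133, 173)):
    """Threshold Cb/Cr to segment skin; the ranges here are a common
    heuristic choice, assumed rather than taken from the paper."""
    ycbcr = rgb_to_ycbcr(img)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

def prewitt_edges(gray):
    """Gradient magnitude using the standard 3x3 Prewitt kernels."""
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w)); gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            win = pad[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

class PNN:
    """Minimal probabilistic neural network (Specht, 1990): a Gaussian
    Parzen kernel per stored pattern, class score = mean kernel response,
    recall = class with maximum score."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma
    def fit(self, X, y):
        self.X = np.asarray(X, float)
        self.y = np.asarray(y)
        self.classes = np.unique(self.y)
        return self
    def predict(self, X):
        X = np.asarray(X, float)
        # squared distances: (n_query, n_stored)
        d2 = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        k = np.exp(-d2 / (2.0 * self.sigma ** 2))
        scores = np.stack([k[:, self.y == c].mean(1) for c in self.classes], 1)
        return self.classes[scores.argmax(1)]
```

In the full system the edge map would be reduced to a fixed-length feature vector (e.g. by resizing or block averaging, a detail assumed here) before being passed to `PNN.fit`/`PNN.predict`, and the predicted class index would select the sound clip to play.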


Keywords


Arabic Sign Language, Fingerspelling, Gesture Recognition, Probabilistic Neural Network, Sign-to-Voice, Skin Color Segmentation.





Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.