
Portable Camera Based Assistive Text Reading From Hand Held Objects for Blind Persons

D. Sandhiya, K. M. Shynimol, T. Tamil Alagi, R. M. Vidhya

Abstract


We propose a camera-based assistive text reading framework to help blind persons read text labels on handheld objects in their daily lives. A camera serves as the main vision component, capturing images of product packaging and handheld objects. To isolate the object from cluttered backgrounds, we first propose an efficient motion-based method to define a region of interest (ROI) in the image. Within the extracted ROI, text localization and recognition are performed to acquire text information. The localized text characters are then recognized by off-the-shelf optical character recognition (OCR) software, and the extracted text is delivered as audio output through a text-to-speech converter.
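The sketch below illustrates the described pipeline (motion-based ROI extraction, OCR, then text-to-speech) in Python. It is a minimal illustration, not the authors' implementation: OpenCV frame differencing stands in for the paper's motion-based ROI method, pytesseract for the off-the-shelf OCR software, and pyttsx3 for the text-to-speech converter, since the paper does not name specific libraries.

# Minimal sketch of the described pipeline, under the assumptions stated above.
import cv2
import pytesseract
import pyttsx3

def extract_roi(prev_gray, curr_gray, min_area=5000):
    """Crude motion-based ROI: frame differencing to find the moving
    handheld object against a static background (an assumption; the
    paper's motion-based method is more elaborate)."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=3)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if cv2.contourArea(largest) < min_area:
        return None
    return cv2.boundingRect(largest)  # (x, y, w, h)

def read_label_aloud(frame, roi):
    """Run OCR on the ROI crop and speak any recognized text."""
    x, y, w, h = roi
    crop = frame[y:y + h, x:x + w]
    text = pytesseract.image_to_string(crop).strip()
    if text:
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()
    return text

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # camera held or worn by the user
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        roi = extract_roi(prev_gray, gray)
        if roi is not None:
            print(read_label_aloud(frame, roi))
        prev_gray = gray
        if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
            break
    cap.release()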


Keywords


Camera-Based Assistive Text Reading, Motion-Based Method, Text Localization and Recognition, Off-The-Shelf Optical Character Recognition (OCR).






This work is licensed under a Creative Commons Attribution 3.0 License.