
Compact Vision Sensor-Based Assistive Text Reading Technology for Visually Impaired Persons

F. Destonius Dhiraviam, J. Saranya, E. Sivasankari, S. Susikala

Abstract


We propose a camera-based assistive text reading framework to help blind persons read text labels and product packaging from hand-held objects in their daily lives. To isolate the object from cluttered backgrounds or other surrounding objects in the camera view, we first propose an efficient and effective motion-based method to define a region of interest (ROI) in the video by asking the user to shake the object. This method extracts the moving object region with a mixture-of-Gaussians-based background subtraction method. In the extracted ROI, text localization and recognition are conducted to acquire text information. To automatically localize the text regions within the object ROI, we propose a novel text localization algorithm that learns gradient features of stroke orientations and distributions of edge pixels in an Adaboost model. Text characters in the localized text regions are then binarized and recognized by off-the-shelf optical character recognition (OCR) software. The recognized text codes are output to blind users as speech. Performance of the proposed text localization algorithm is quantitatively evaluated on the ICDAR-2003 and ICDAR-2011 Robust Reading Datasets. Experimental results demonstrate that our algorithm achieves state-of-the-art performance. The proof-of-concept prototype is also evaluated on a dataset collected with ten blind persons to assess the effectiveness of the system hardware. We explore user interface issues and assess the robustness of the algorithm in extracting and reading text from different objects with complex backgrounds.
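
As a rough illustration of the motion-based ROI step, the sketch below uses OpenCV 4's standard mixture-of-Gaussians background subtractor (MOG2) to pick out the shaken object. The function name, history length, and variance threshold are illustrative assumptions, not the authors' implementation.

import cv2
import numpy as np

def extract_object_roi(video_path):
    # The user shakes the object, so the moving region shows up as the
    # foreground of a mixture-of-Gaussians background model.
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=120, varThreshold=16, detectShadows=False)
    roi = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)  # foreground = moving (shaken) object
        # Suppress speckle noise before looking for the object contour.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            roi = frame[y:y + h, x:x + w]  # latest candidate region of interest
    cap.release()
    return roi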

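The Adaboost text localizer can likewise be approximated with off-the-shelf tools. The sketch below pairs a simplified gradient-orientation histogram and edge-pixel density with scikit-learn's AdaBoostClassifier; this feature design is an assumed stand-in for the paper's learned stroke-orientation and edge-distribution features, not a reproduction of them.

import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def patch_features(patch_gray):
    # Gradient-orientation histogram (proxy for stroke orientations) plus
    # edge-pixel density (proxy for the edge-pixel distribution).
    gx = cv2.Sobel(patch_gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(patch_gray, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy)
    hist, _ = np.histogram(ang, bins=16, range=(0, 2 * np.pi), weights=mag)
    edge_density = cv2.Canny(patch_gray, 100, 200).mean() / 255.0
    return np.append(hist / (hist.sum() + 1e-6), edge_density)

# Train on labeled patches (label 1 = text, 0 = background); the default
# base learner is a depth-1 decision stump, as in classic Adaboost.
clf = AdaBoostClassifier(n_estimators=200)
# clf.fit(np.stack([patch_features(p) for p in patches]), labels)
# is_text = clf.predict(patch_features(candidate).reshape(1, -1))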

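For the final recognition and speech stage, here is a minimal sketch using pytesseract (one off-the-shelf OCR engine of the kind the abstract describes) and the pyttsx3 text-to-speech library; the Otsu binarization settings are assumptions, not taken from the paper.

import cv2
import pytesseract  # wrapper around the Tesseract OCR engine
import pyttsx3      # offline text-to-speech

def read_text_aloud(text_region_bgr):
    gray = cv2.cvtColor(text_region_bgr, cv2.COLOR_BGR2GRAY)
    # Binarize the localized text region before recognition.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(binary).strip()
    if text:
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()  # blocks until the announcement finishes
    return text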
Keywords


Assistive Devices, Blindness, Distribution of Edge Pixels, Hand-Held Objects, Optical Character Recognition (OCR), Stroke Orientation, Text Reading, Text Region Localization.

Full Text:

PDF

References


C. Yi, Y. Tian, and A. Arditi, “Portable camera-based assistive text and product label reading from hand-held objects for blind persons,” IEEE/ASME Trans. Mechatronics, vol. 19, no. 3, pp. 1487–1492, June 2014.

X. Chen, J. Yang, J. Zhang, and A. Waibel, “Automatic detection and recognition of signs from natural scenes,” IEEE Trans. Image Process., vol. 13, no. 1, pp. 87–99, Jan. 2004.

D. Dakopoulos and N. G. Bourbakis, “Wearable obstacle avoidance electronic travel aids for blind: A survey,” IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 40, no. 1, pp. 25–35, Jan. 2010.

B. Epshtein, E. Ofek, and Y. Wexler, “Detecting text in natural scenes with stroke width transform,” in Proc. Comput. Vision Pattern Recognit., 2010, pp. 2963–2970.

Y. Freund and R. Schapire, “Experiments with a new boosting algorithm,” in Proc. Int. Conf. Machine Learning, 1996, pp. 148–156.

N. Giudice and G. Legge, “Blind navigation and the role of technology,” in The Engineering Handbook of Smart Technology for Aging, Disability, and Independence, A. A. Helal, M. Mokhtari, and B. Abdulrazak, Eds. Hoboken, NJ, USA: Wiley, 2008.

A. Shahab, F. Shafait, and A. Dengel, “ICDAR 2011 robust reading competition: ICDAR Robust Reading Competition Challenge 2: Reading text in scene images,” in Proc. Int. Conf. Document Anal. Recognit., 2011, pp. 1491–1496.

K. Kim, K. Jung, and J. Kim, “Texture-based approach for text detection in images using support vector machines and continuously adaptive mean shift algorithm,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 12, pp. 1631–1639, Dec. 2003.

KReader Mobile User Guide, knfb Reading Technology Inc. (2008). [Online]. Available: http://www.knfbReading.com

S. Kumar, R. Gupta, N. Khanna, S. Chaudhury, and S. D. Joshi, “Text extraction and document image segmentation using matched wavelets and MRF model,” IEEE Trans. Image Process., vol. 16, no. 8, pp. 2117–2128, Aug. 2007.

C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” presented at the IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognit., Fort Collins, CO, USA, 1999.

J. Zhang and R. Kasturi, “Extraction of text objects in video documents: recent progress,” in Proc. IAPR Workshop Document Anal. Syst., 2008, pp. 5–17.

X. Yang, Y. Tian, C. Yi, and A. Arditi, “Context-based indoor object detection as an aid to blind persons accessing unfamiliar environments,” in Proc. ACM Multimedia, 2010, pp. 1087–1090.

C. Yi and Y. Tian, “Assistive text reading from complex background for blind persons,” in Proc. Int. Workshop Camera-Based Document Anal. Recognit., 2011, vol. LNCS-7139, pp. 15–28.

C. Yi and Y. Tian, “Text detection in natural scene images by stroke Gabor words,” in Proc. Int. Conf. Document Anal. Recognit., 2011, pp. 177–181.

C. Yi and Y. Tian, “Text string detection from natural scenes by structure-based partition and grouping,” IEEE Trans. Image Process., vol. 20, no. 9, pp. 2594–2605, Sep. 2011.

