Image-to-Audio Captioning Systems for Visually Impaired Users: Development and Application
DOI:
https://doi.org/10.53469/jrse.2026.08(02).13

Keywords:
Object detection, deep learning, image processing, text extraction, speech synthesis, image captioning, ResNet-LSTM, accessibility, visually impaired, future scope

Abstract
This article provides a comprehensive overview of the evolving landscape of image captioning, with a focus on accessibility applications for visually impaired users. It examines the challenges of real-time object recognition, traditional object detection methods, and the transformative impact of deep learning techniques, particularly region-proposal object detection algorithms. The paper introduces VisionVoice, a web application that converts text extracted from images into natural-sounding speech. It details the image processing pipeline, including the preprocessing, segmentation, classification, and post-processing stages, and also covers the underlying mathematical concepts, image preprocessing techniques, and the shortcomings of existing models. The study highlights the ResNet-LSTM model's potential to generate descriptive, contextually coherent image captions, improving the quality of synthesized speech. Finally, it discusses the future scope of the VisionVoice project, emphasizing continued advances in accuracy and hardware capabilities and the development of full image-to-speech conversion systems. The ultimate goal is to improve accessibility and inclusion, giving visually impaired individuals better access to information and a higher quality of life.
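The pipeline stages named in the abstract (preprocessing, segmentation, classification, post-processing, speech text) can be sketched end to end. This is a minimal toy sketch, not the paper's implementation: the function names, the thresholding-based segmentation, the coverage-based classifier, and the 2-D-list "image" representation are all illustrative assumptions.

```python
# Hypothetical sketch of the VisionVoice-style pipeline stages:
# preprocess -> segment -> classify -> post-process into speakable text.
# All names and the toy grayscale "image" (a 2-D list of pixel values)
# are assumptions for illustration, not the actual system.

def preprocess(image):
    """Normalize pixel values into [0, 1]."""
    peak = max(max(row) for row in image) or 1  # avoid division by zero
    return [[px / peak for px in row] for row in image]

def segment(image, threshold=0.5):
    """Binary-threshold segmentation: foreground pixels become 1, else 0."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def classify(mask):
    """Toy classifier: label the image by its foreground coverage."""
    total = sum(len(row) for row in mask)
    coverage = sum(sum(row) for row in mask) / total
    return "object" if coverage > 0.25 else "background"

def caption(label):
    """Post-processing: turn the predicted label into a speakable caption."""
    if label == "object":
        return "The image appears to contain an object."
    return "The image appears to be mostly background."

def image_to_speech_text(image):
    """Full pipeline: raw pixels in, caption text (ready for TTS) out."""
    return caption(classify(segment(preprocess(image))))

# Example: a 4x4 toy image whose top half is bright foreground.
print(image_to_speech_text([[255] * 4, [255] * 4, [0] * 4, [0] * 4]))
```

In a real deployment the classifier and caption stages would be replaced by a trained model (e.g. a ResNet encoder feeding an LSTM decoder, as the paper describes) and the caption string handed to a speech-synthesis engine; the sketch only fixes the stage boundaries and data flow.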
License
Copyright (c) 2026 Pham Duc Hau

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

