Fusion of deep transfer learning models with Gannet optimisation algorithm for an advanced image captioning system for visual disabilities


Abstract

The problem of generating natural language descriptions of images to characterize their visual content has garnered significant attention in computer vision (CV) and natural language processing (NLP). It is driven by applications such as image virtual assistants, indexing and retrieval, image perception, and assistance for visually challenged people. Although visually impaired individuals rely on other senses, such as hearing and touch, to identify events and objects, their quality of life remains diminished. Automated image captioning generates captions that can be spoken aloud, helping such individuals recognize objects and events occurring near them. With the aid of image captioning techniques and artificial intelligence (AI) speech recognition methods, visually impaired individuals can quickly understand the content of an image, as these methods can automatically generate text captions that accurately describe it. Therefore, this study presents a novel Fusion of Deep Transfer Learning Models and the Gannet Optimisation Algorithm for an Advanced Image Captioning System for Visual Disabilities (FDTLGO-AICSVD) model. The aim is to present a robust and efficient image captioning framework specifically designed to assist visually impaired persons through precise and descriptive image-to-text conversion. Initially, the FDTLGO-AICSVD approach comprises two distinct types of image preprocessing, noise removal and contrast enhancement, aimed at improving the clarity of visual features. Text preprocessing involves several steps to standardize and prepare the textual data for analysis. Furthermore, DenseNet121, VGG19, and MobileNetV2 models are utilized for extracting features from image data, whereas Term Frequency-Inverse Document Frequency (TF-IDF) is applied for extracting features from text data.
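To illustrate the text-feature step, the following is a minimal from-scratch sketch of TF-IDF weighting over a caption corpus. It is an assumption-laden simplification (lowercase whitespace tokenization, smoothed IDF); the actual FDTLGO-AICSVD pipeline may use a library implementation with different normalization.

```python
import math
from collections import Counter

def tfidf_features(corpus):
    """Compute TF-IDF vectors for a list of caption strings.

    Illustrative sketch only: tokenizes by whitespace after lowercasing
    and uses smoothed IDF, log((1 + N) / (1 + df)) + 1.
    """
    docs = [doc.lower().split() for doc in corpus]
    vocab = sorted({w for doc in docs for w in doc})
    n_docs = len(docs)
    # Document frequency: number of captions containing each term.
    df = {w: sum(1 for doc in docs if w in doc) for w in vocab}
    # Smoothed inverse document frequency: rare terms get larger weights.
    idf = {w: math.log((1 + n_docs) / (1 + df[w])) + 1 for w in vocab}
    vectors = []
    for doc in docs:
        counts = Counter(doc)
        tf = {w: counts[w] / len(doc) for w in counts}
        vectors.append([tf.get(w, 0.0) * idf[w] for w in vocab])
    return vocab, vectors
```

A term that appears in every caption (e.g. a determiner) thus receives a lower weight than a content word unique to one caption, which is the property the captioning model exploits when encoding text.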
To achieve optimal performance, the Gannet optimization algorithm (GOA) model is employed for hyperparameter tuning, enabling the method to generate precise and context-aware captions. Extensive experimentation with the FDTLGO-AICSVD method is performed on the Flickr8k and Flickr30k datasets. The comparison study of the FDTLGO-AICSVD method showed a superior BLEU-4 score of 45.11% on the Flickr8k dataset and 58.91% on the Flickr30k dataset, along with a significantly higher CIDEr score of 63.17 on Flickr8k and 69.81 on Flickr30k, demonstrating the enhanced descriptive accuracy and language generation capability of the model across both datasets.
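The hyperparameter-tuning step can be sketched as a generic population-based search. The code below is a hypothetical stand-in, not the actual Gannet Optimisation Algorithm: the real GOA uses gannet-inspired diving and turning update rules that are not reproduced here, and the `objective`, `bounds`, and step sizes are illustrative assumptions.

```python
import random

def tune_hyperparameters(objective, bounds, pop_size=10, iters=30, seed=0):
    """Simplified population-based hyperparameter search (stand-in for GOA).

    objective: maps a config dict to a validation score (higher is better,
               e.g. BLEU-4 on a held-out caption split).
    bounds:    dict of name -> (low, high) for each hyperparameter.
    """
    rng = random.Random(seed)
    names = list(bounds)

    def sample():
        return {n: rng.uniform(*bounds[n]) for n in names}

    pop = [sample() for _ in range(pop_size)]
    best = max(pop, key=objective)
    for _ in range(iters):
        for i, cand in enumerate(pop):
            # Move each candidate toward the current best with random
            # jitter, clamping every coordinate to its bounds.
            new = {}
            for n in names:
                lo, hi = bounds[n]
                step = rng.uniform(0, 1) * (best[n] - cand[n])
                jitter = rng.uniform(-0.1, 0.1) * (hi - lo)
                new[n] = min(hi, max(lo, cand[n] + step + jitter))
            # Greedy acceptance: keep the move only if it improves.
            if objective(new) > objective(cand):
                pop[i] = new
        best = max(pop + [best], key=objective)
    return best
```

In practice the objective would wrap a full train-and-validate cycle of the captioning model, which is why population-based methods that need relatively few evaluations are attractive here.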
