Abstract
Vision is a fundamental sense that profoundly impacts daily life and independence. For visually impaired people (VIP), the absence or impairment of this sense presents significant challenges, particularly the inability to navigate their environment and identify objects independently, which restricts their daily activities and routines. Numerous investigations have been conducted in the domain of real-time object detection (OD) using deep learning (DL), and DL-based techniques have been shown to attain strong performance in OD. In this manuscript, an Intelligent Object Detection-Based Assistive System Using the Ivy Optimization Algorithm (IODAS-IOA) model is proposed for VIPs. The aim is to develop an effective OD system that helps visually impaired persons navigate their environment safely and independently. To achieve this, the image pre-processing stage first employs the Gaussian filtering (GF) method to eliminate noise. The YOLOv12 method is then employed for the OD process, and the DenseNet161 method is used for feature extraction. Next, the bidirectional gated recurrent unit with attention mechanism (BiGRU-AM) method is implemented to classify the extracted features. Finally, parameter tuning is performed using the Ivy optimization algorithm (IOA) to improve classification performance. Extensive experimentation was performed to validate the performance of the IODAS-IOA approach on an indoor OD dataset. The experimental results emphasize the superior accuracy of the IODAS-IOA approach, which attains 99.74%, over recent techniques.