Abstract
In recent decades, various assistive technologies have emerged to support visually impaired individuals. However, few solutions provide efficient, universal, real-time assistance by combining robust object detection, reliable communication, continuous data processing, and emergency signaling in dynamic environments. Many existing systems trade off range, latency, or reliability when deployed in changing indoor or outdoor scenarios. In this study, we propose a comprehensive framework tailored to visually impaired people that integrates computer vision, edge computing, and a dual-channel communication architecture incorporating low-power wide-area network (LPWAN) technology. The system uses the YOLOv5 deep-learning model for real-time detection of obstacles, paths, and assistive tools (such as the white cane), achieving a precision of 0.988, a recall of 0.969, and an mAP of 0.985. Edge-computing devices offload computation from central servers, enabling fast local processing and decision-making. The communication subsystem uses Wi-Fi as the primary link, while a LoRaWAN channel serves as a fail-safe emergency alert network. An IoT-based panic button transmits immediate location-tagged alerts, enabling rapid response by authorities or caregivers. Experimental results demonstrate low latency and reliable operation under varied real-world conditions, indicating significant potential to improve independent mobility and quality of life for visually impaired people. The proposed solution offers a cost-effective, scalable architecture suitable for deployment in complex and challenging environments where real-time assistance is essential.
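To make the dual-channel design concrete, the following minimal Python sketch illustrates how a panic-button alert might be dispatched over the primary Wi-Fi link and fall back to the LoRaWAN channel when Wi-Fi is unavailable. This is an illustrative assumption, not the paper's implementation: all names (`PanicAlert`, `send_via_wifi`, `send_via_lorawan`, `dispatch_alert`) are hypothetical, and the reachability probe and payload formats are placeholders.

```python
# Illustrative sketch (not from the paper): dual-channel alert dispatch with
# Wi-Fi as the primary link and LoRaWAN as the fail-safe emergency channel.
# All function and class names here are hypothetical.

import json
import socket
import time
from dataclasses import asdict, dataclass


@dataclass
class PanicAlert:
    device_id: str
    latitude: float
    longitude: float
    timestamp: float


def wifi_available(host: str = "8.8.8.8", port: int = 53, timeout: float = 2.0) -> bool:
    """Cheap reachability probe: can we open a TCP socket over the Wi-Fi link?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def send_via_wifi(alert: PanicAlert) -> None:
    """Placeholder for the primary link, e.g. an HTTP or MQTT publish to a server."""
    payload = json.dumps(asdict(alert))
    print(f"[Wi-Fi] alert sent: {payload}")


def send_via_lorawan(alert: PanicAlert) -> None:
    """Placeholder for the LoRaWAN uplink; a real payload must respect the
    regional duty-cycle and frame-size limits (typically tens of bytes)."""
    compact = f"{alert.device_id},{alert.latitude:.5f},{alert.longitude:.5f}"
    print(f"[LoRaWAN] fail-safe alert sent: {compact}")


def dispatch_alert(alert: PanicAlert) -> None:
    """Try the primary Wi-Fi channel first; fall back to LoRaWAN on failure."""
    if wifi_available():
        try:
            send_via_wifi(alert)
            return
        except Exception:
            pass  # fall through to the emergency channel
    send_via_lorawan(alert)


if __name__ == "__main__":
    # Sample coordinates for illustration only.
    dispatch_alert(PanicAlert("cane-01", 40.71280, -74.00600, time.time()))
```

The fallback ordering reflects the architecture described in the abstract: the bandwidth-rich Wi-Fi link carries routine traffic, while the long-range, low-power LoRaWAN channel is reserved for compact, location-tagged emergency alerts.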