Abstract
At present, many models address assistive recognition for blind users, but most perform only simple visual recognition. Multimodal vision-language models face high computational cost and poor real-time performance in blind-assistance recognition tasks, which limits their deployment on resource-constrained devices. Building on the LLaVA architecture, we propose LLaVA-BindPW, an efficient solution optimized for blind-assistance scenarios that requires few computational resources. The language backbone is Gemma-7B. The model achieves sparsity by replacing part of the feed-forward network with a mixture-of-experts (MoE) layer and introduces a perception-weighting mechanism that incorporates visual information into the expert routing weights, greatly reducing inference cost. A staged training strategy is adopted, moving from visual adaptation to multimodal capability enhancement and then to MoE fusion optimization for blind-assistance tasks. We designed VQA tasks from the perspective of blind users on the IndoorBlindCap-1K dataset created by the blind. In the future, the output can be combined with TTS speech synthesis so that results are converted into a form blind users can perceive, improving the interactive experience. Experiments show that, while maintaining performance, the proposed solution greatly improves efficiency and adapts to the needs of blind users, providing an efficient multimodal solution for barrier-free assistive technology.
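To make the perception-weighted routing idea concrete, the following is a minimal sketch (not the released implementation) of a sparse MoE layer whose routing logits are biased by pooled visual features; the module name, dimensions, and expert count are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PerceptionWeightedMoE(nn.Module):
    """Illustrative sketch: token-level MoE routing biased by pooled visual features."""

    def __init__(self, d_model=3072, d_ff=8192, n_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        self.router = nn.Linear(d_model, n_experts)       # token-based routing logits
        self.visual_gate = nn.Linear(d_model, n_experts)  # bias derived from visual features

    def forward(self, x, visual_feats):
        # x: (batch, seq, d_model) hidden states; visual_feats: (batch, n_patches, d_model)
        v = visual_feats.mean(dim=1, keepdim=True)         # pool image patches
        logits = self.router(x) + self.visual_gate(v)      # perception-weighted routing logits
        weights, idx = torch.topk(F.softmax(logits, dim=-1), self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize top-k weights
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e).unsqueeze(-1)    # tokens routed to expert e
                out = out + mask * weights[..., k : k + 1] * expert(x)
        return out
```

In a practical implementation, only the tokens routed to each expert would be dispatched to it, which is where the sparsity-related efficiency gain comes from; the dense loop above is kept only for readability.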