Lightweight visual accessibility LLaVA architecture


Abstract

Many existing models target assistive recognition for blind users, but most perform only simple visual recognition. Multimodal vision-language models face high computational cost and poor real-time performance in blind-assistance recognition tasks, which limits their deployment on resource-constrained devices. Building on the LLaVA architecture, we propose LLaVA-BindPW, an efficient solution optimized for blind-assistance scenarios that requires few resources. The architecture uses Gemma-7B as its language backbone. It achieves sparsity by replacing part of the feed-forward network with a mixture-of-experts (MoE) layer, and introduces a perceptual weighting mechanism that incorporates visual information into the expert weights, greatly reducing inference cost. A staged training strategy is adopted: from visual adaptation, to multimodal capability enhancement, to optimization of MoE fusion for blind-assistance tasks. We designed VQA tasks from the perspective of blind users on the IndoorBlindCap-1K dataset, which was created with blind participants. In the future, the system can be combined with TTS speech output, converting results into a form blind users can perceive and improving the interactive experience. Experiments show that while maintaining performance, this solution greatly improves efficiency and meets the needs of blind users, providing an efficient multimodal solution for accessibility technology.
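The abstract does not spell out the perceptual weighting mechanism, but one plausible reading is that gate logits in the MoE layer are biased by a pooled visual feature before top-K expert selection. The following minimal NumPy sketch illustrates that idea; all names, dimensions, and parameters (`W_gate`, `W_vis`, the expert FFNs) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D, V, E, K = 16, 8, 4, 2  # hidden dim, visual dim, number of experts, top-K

# Hypothetical parameters, randomly initialized for illustration only.
W_gate = rng.normal(0, 0.1, (D, E))       # token hidden state -> expert logits
W_vis = rng.normal(0, 0.1, (V, E))        # visual feature -> expert-logit bias
experts_W1 = rng.normal(0, 0.1, (E, D, 4 * D))  # per-expert FFN, first layer
experts_W2 = rng.normal(0, 0.1, (E, 4 * D, D))  # per-expert FFN, second layer

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def moe_perception_weighted(h, v):
    """Route one token h through its top-K experts, with the gate logits
    biased by the pooled visual feature v (the assumed 'perceptual weighting')."""
    logits = h @ W_gate + v @ W_vis        # visual information enters the gate
    top = np.argsort(logits)[-K:]          # indices of the K largest logits
    weights = softmax(logits[top])         # renormalize over the chosen experts
    out = np.zeros_like(h)
    for w, e in zip(weights, top):
        hidden = np.maximum(h @ experts_W1[e], 0.0)  # expert FFN with ReLU
        out += w * (hidden @ experts_W2[e])          # weighted expert mixture
    return out

h = rng.normal(size=D)  # one token's hidden state
v = rng.normal(size=V)  # pooled visual feature from the vision encoder
y = moe_perception_weighted(h, v)
print(y.shape)  # (16,)
```

Because only K of the E expert FFNs run per token, the layer is sparse: compute grows with K, not with the total expert count, which is the efficiency the abstract claims for resource-constrained devices.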
