MECO: Mixture-of-Expert Codebooks for Multiple Dense Prediction Tasks


Abstract

Autonomous systems operating in embedded environments require robust scene understanding under computational constraints. Multi-task learning offers a compact alternative to deploying multiple task-specific models by jointly solving dense prediction tasks. However, recent MTL models often suffer from entangled shared feature representations and significant computational overhead. To address these limitations, we propose Mixture-of-Expert Codebooks (MECO), a novel multi-task learning framework that leverages vector quantization to design Mixture-of-Experts with lightweight codebooks. MECO disentangles task-generic and task-specific representations and enables efficient learning across multiple dense prediction tasks such as semantic segmentation and monocular depth estimation. The proposed multi-task learning model is trained end-to-end using a composite loss that combines task-specific objectives and vector quantization losses. We evaluate MECO on a real-world driving dataset collected in challenging embedded scenarios. MECO achieves a +0.4% mIoU improvement in semantic segmentation and maintains comparable depth estimation accuracy to the baseline, while reducing model parameters and FLOPs by 18.33% and 28.83%, respectively. These results demonstrate the potential of vector quantization-based Mixture-of-Experts modeling for efficient and scalable multi-task learning in embedded environments.
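The abstract describes quantizing shared features against lightweight codebooks, trained with task losses plus vector quantization losses. As a rough illustration (not the paper's implementation; the routing, expert count, and loss weights here are assumptions), the core VQ step snaps each feature vector to its nearest codebook entry and penalizes the gap between the two:

```python
import numpy as np

# Illustrative sketch only: nearest-neighbor vector quantization against a
# single codebook, with a VQ-VAE-style loss (codebook term + commitment
# term). MECO's actual expert routing and loss weighting are not shown.

rng = np.random.default_rng(0)

def quantize(features, codebook):
    """Assign each feature vector to its nearest codebook entry."""
    # Pairwise squared distances between features (N, D) and entries (K, D)
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)          # index of nearest entry per feature
    return codebook[idx], idx

def vq_loss(features, quantized, beta=0.25):
    """Codebook loss plus beta-weighted commitment loss. A real
    implementation applies stop-gradients to each term; omitted here."""
    codebook_loss = ((quantized - features) ** 2).mean()
    commitment_loss = beta * ((features - quantized) ** 2).mean()
    return codebook_loss + commitment_loss

features = rng.normal(size=(8, 4))    # 8 feature vectors of dimension 4
codebook = rng.normal(size=(16, 4))   # 16 entries in one expert's codebook
quantized, idx = quantize(features, codebook)
loss = vq_loss(features, quantized)
```

In a mixture-of-experts setting, each expert would hold its own small codebook and a router would select which expert quantizes a given feature; the VQ losses above would be summed with the task-specific segmentation and depth objectives to form the composite training loss the abstract mentions.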
