Robust Audio-Visual Speaker Localization in Noisy Aircraft Cabins for Inflight Medical Assistance


Abstract

Active Speaker Localization (ASL) involves identifying both who is speaking and where they are speaking from within audiovisual content. This capability is crucial in constrained and acoustically challenging environments, such as aircraft cabins during in-flight medical emergencies. In this paper, we propose a novel end-to-end Cross-Modal Audio-Visual Fusion Network (CMAVFN) designed specifically for ASL under real-world aviation conditions, which are characterized by engine noise, dynamic lighting, occlusions from seats or oxygen masks, and frequent speaker turnover. Our model directly processes raw video frames and multi-channel ambient audio, eliminating the need for intermediate face detection pipelines. It anchors spatially resolved visual features with directional audio cues using a cross-modal attention mechanism. To enhance spatiotemporal reasoning, we introduce a dual-branch localization decoder and a cross-modal auxiliary supervision loss. Extensive experiments on public datasets (AVA-ActiveSpeaker, EasyCom) and our domain-specific AirCabin-ASL benchmark demonstrate that CMAVFN achieves robust speaker localization in noisy, occluded, and multi-speaker aviation scenarios. This framework offers a practical foundation for speech-driven interaction systems in aircraft cabins, enabling applications such as real-time crew assistance, voice-based medical documentation, and intelligent in-flight health monitoring.
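The core fusion step described above, anchoring spatially resolved visual features with directional audio cues via cross-modal attention, can be sketched as follows. This is a minimal illustrative NumPy implementation, not the paper's actual code: the feature dimensions, projection matrices (`Wq`, `Wk`, `Wv`), and the single-head formulation are all assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(audio_feat, visual_feat, Wq, Wk, Wv):
    """Hypothetical sketch of audio-anchored attention over spatial visual tokens.

    audio_feat:  (T, d_a)    per-frame audio embeddings (queries)
    visual_feat: (T, N, d_v) per-frame spatial visual tokens (keys/values),
                 where N is the number of spatial locations
    Returns the fused per-frame features and the spatial attention map.
    """
    Q = audio_feat @ Wq                                   # (T, d)
    K = visual_feat @ Wk                                  # (T, N, d)
    V = visual_feat @ Wv                                  # (T, N, d)
    d = Q.shape[-1]
    # Each audio query attends over the N spatial locations of its frame.
    scores = np.einsum('td,tnd->tn', Q, K) / np.sqrt(d)  # (T, N)
    attn = softmax(scores, axis=-1)                       # (T, N)
    fused = np.einsum('tn,tnd->td', attn, V)              # (T, d)
    return fused, attn

# Toy usage with assumed dimensions.
rng = np.random.default_rng(0)
T, N, d_a, d_v, d = 4, 6, 32, 64, 16
audio = rng.standard_normal((T, d_a))
visual = rng.standard_normal((T, N, d_v))
Wq = rng.standard_normal((d_a, d)) * 0.1
Wk = rng.standard_normal((d_v, d)) * 0.1
Wv = rng.standard_normal((d_v, d)) * 0.1
fused, attn = cross_modal_attention(audio, visual, Wq, Wk, Wv)
print(fused.shape, attn.shape)
```

In such a design, the attention map itself doubles as a coarse localization signal: the spatial locations receiving the most audio-conditioned attention indicate where the active speaker likely is.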
