Generative AI-driven synthetic media risks in digital health: implications for telemedicine and teledentistry


Abstract

Advances in diffusion-based and neural rendering architectures have enabled the creation of synthetic audiovisual content that closely replicates natural facial dynamics, speech production, and environmental context. These developments pose a growing risk to clinical medicine and dentistry, where authentic audiovisual data support remote clinical assessment, communication, and medico-legal documentation. This study introduces an interpretable multimodal framework for deepfake detection that integrates visual, acoustic, and cross-modal coherence features, with decision thresholds derived exclusively from authentic recordings to ensure transparency and forensic accountability. Using the DeepFake RealWorld dataset of 46,371 audiovisual clips, 77% of which include audio, we evaluated 47 descriptors across optical, bioacoustic, and synchronization domains. Clinical relevance was evaluated through simulated dental teleconsultations. Cross-modal metrics demonstrated the strongest discriminatory performance, with prevalence ratios of up to 2.7: lip-speech synchrony (Δp = 0.21-0.22), phoneme-viseme alignment (Δp = 0.21; a widely used audio-visual consistency cue in multimodal deepfake detection), identity coherence (Δp = 0.19), and scene-audio semantic consistency (Δp = 0.18). Acoustic markers, including reduced jitter, shimmer, and shortened reverberation time (RT60; 0.12 s in synthetic vs. 0.28 s in real recordings), provided additional robustness. The framework kept performance degradation below 15% under platform-scale compression and recapture artifacts. Additionally, the proposed framework was benchmarked against a standard open-source texture-oriented baseline detector based on the Xception architecture, with clip-level ROC AUC and balanced accuracy reported both on the original clips and under the same platform transformations used in the robustness analysis.
Simulated dental teleconsultations revealed that manipulated recordings introduce inconsistencies in mandibular motion, prosody-related facial dynamics, and ambient acoustic plausibility (mean Δp = 0.18; PR = 2.3), confirming the clinical relevance of multimodal coherence analysis. These results position coherence-based detection as a reliable, transparent, and domain-appropriate approach for safeguarding audiovisual integrity in remote dentistry, medicine, and related digital health applications.
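The abstract's reporting scheme (thresholds derived exclusively from authentic recordings, Δp as a difference in flag prevalence, and a prevalence ratio PR) can be sketched as follows. This is a minimal illustration of the general one-class thresholding idea, not the paper's implementation; the percentile cutoff, the synthetic coherence-score distributions, and all numeric values below are hypothetical assumptions.

```python
import numpy as np

def authentic_only_threshold(real_scores, percentile=5.0):
    """Decision threshold set from authentic recordings only,
    here as a lower percentile of their coherence scores (assumed scheme)."""
    return float(np.percentile(real_scores, percentile))

def flag_rate(scores, threshold):
    """Fraction of clips whose coherence score falls below the threshold."""
    return float((np.asarray(scores) < threshold).mean())

# Toy lip-speech synchrony scores (hypothetical, not the paper's data):
# authentic clips cluster near 0.90; synthetic clips drift lower and wider.
rng = np.random.default_rng(0)
real = rng.normal(0.90, 0.04, 1000)
fake = rng.normal(0.78, 0.08, 1000)

tau = authentic_only_threshold(real, percentile=5.0)
p_real = flag_rate(real, tau)
p_fake = flag_rate(fake, tau)

delta_p = p_fake - p_real                          # Δp: prevalence difference
pr = p_fake / p_real if p_real > 0 else float("inf")  # PR: prevalence ratio
print(f"threshold={tau:.3f}  Δp={delta_p:.2f}  PR={pr:.1f}")
```

Setting the threshold only from authentic material keeps the decision rule transparent: flagged clips can be explained as falling outside the range observed in genuine recordings, rather than as the output of an opaque discriminator trained on known fakes.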
