Abstract
Advances in diffusion-based and neural rendering architectures have enabled synthetic audiovisual content that closely replicates natural facial dynamics, speech production, and environmental context. These developments pose a growing risk to clinical medicine and dentistry, where authentic audiovisual data support remote clinical assessment, communication, and medico-legal documentation. This study introduces an interpretable multimodal framework for deepfake detection that integrates visual, acoustic, and cross-modal coherence features, with decision thresholds derived exclusively from authentic recordings to ensure transparency and forensic accountability. Using the DeepFake RealWorld dataset of 46,371 audiovisual clips (77% with audio), we evaluated 47 descriptors across the optical, bioacoustic, and synchronization domains, and assessed clinical relevance through simulated dental teleconsultations. Cross-modal metrics demonstrated the strongest discriminatory performance, with prevalence ratios of up to 2.7: lip-speech synchrony (Δp = 0.21-0.22); phoneme-viseme alignment (Δp = 0.21), a widely used audio-visual consistency cue in multimodal deepfake detection; identity coherence (Δp = 0.19); and scene-audio semantic consistency (Δp = 0.18). Acoustic markers, including reduced jitter and shimmer and a shortened reverberation time (RT60; 0.12 s in synthetic vs. 0.28 s in real recordings), provided additional robustness. Performance degradation remained below 15% under platform-scale compression and recapture artifacts. The proposed framework was also benchmarked against a standard open-source texture-oriented baseline detector built on the Xception architecture, with clip-level ROC AUC and balanced accuracy reported on the original clips and under the same platform transformations used in the robustness analysis.
Simulated dental teleconsultations revealed that manipulated recordings introduce inconsistencies in mandibular motion, prosody-related facial dynamics, and ambient acoustic plausibility (mean Δp = 0.18; PR = 2.3), confirming the clinical relevance of multimodal coherence analysis. These results position coherence-based detection as a reliable, transparent, and domain-appropriate approach for safeguarding audiovisual integrity in remote dentistry, medicine, and related digital health applications.
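The abstract's decision rule (thresholds derived from authentic recordings only) and its effect-size statistics (Δp and prevalence ratio, PR) can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the percentile-based threshold, and the toy score distributions are assumptions introduced purely for illustration.

```python
import numpy as np

def real_only_threshold(real_scores, percentile=95.0):
    """Hypothetical real-only calibration: set the flagging threshold at a
    high percentile of a coherence-error score computed on authentic clips,
    so no manipulated data is needed to fix the decision boundary."""
    return np.percentile(real_scores, percentile)

def flag_rate(scores, threshold):
    """Fraction of clips whose score exceeds the threshold (flag prevalence)."""
    return float(np.mean(np.asarray(scores) > threshold))

def delta_p_and_pr(real_scores, fake_scores, threshold):
    """Δp = difference in flag prevalence between manipulated and authentic
    clips; PR = ratio of the two prevalences."""
    p_fake = flag_rate(fake_scores, threshold)
    p_real = flag_rate(real_scores, threshold)
    pr = p_fake / p_real if p_real > 0 else float("inf")
    return p_fake - p_real, pr

# Toy example with synthetic score distributions (illustrative only):
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, 5000)   # coherence-error scores, authentic clips
fake = rng.normal(1.5, 1.0, 5000)   # higher error for manipulated clips
t = real_only_threshold(real)
dp, pr = delta_p_and_pr(real, fake, t)
```

By construction, roughly 5% of authentic clips are flagged at the 95th-percentile threshold, so a metric with Δp ≈ 0.2 and PR > 2, as reported above, means manipulated clips trip the same real-calibrated threshold several times more often than authentic ones.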