An Explainable Deep Learning Framework for Multimodal Autism Diagnosis Using XAI GAMI-Net and Hypernetworks


Abstract

Background: Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by heterogeneous behavioral and neurological patterns, which complicates timely and accurate diagnosis. Behavioral datasets are commonly used for ASD diagnosis, yet in clinical practice identification remains difficult because of the complexity of behavioral symptoms, overlap with other neurological disorders, and individual heterogeneity. Correct and timely identification therefore depends on skilled professionals performing thorough neurological examinations. Advances in deep learning, however, can substantially improve the diagnostic process by automatically identifying and classifying ASD-related behavioral patterns and neuroimaging features.

Method: This study introduces a novel multimodal diagnostic framework that combines structured behavioral phenotypes and structural magnetic resonance imaging (sMRI) in an interpretable and personalized pipeline. A Generalized Additive Model with Interactions (GAMI-Net) processes the behavioral data, providing transparent embeddings of clinical phenotypes. Structural brain characteristics are extracted with a hybrid CNN-GNN model, which preserves voxel-level patterns and region-based connectivity defined by the Harvard-Oxford atlas. The two sets of embeddings are then fused with an autoencoder that compresses the cross-modal information into a shared latent space, and a hypernetwork-based MLP classifier generates subject-specific weights to produce the final classification.

Results: On the held-out test set drawn from the ABIDE-I dataset (a 20% split of about 247 subjects), the proposed system achieved an accuracy of 99.40%, precision of 100%, recall of 98.84%, an F1-score of 99.42%, and an ROC-AUC of 99.99%. As a further test of generalizability, five-fold stratified cross-validation on the entire dataset yielded a mean accuracy of 98.56%, an F1-score of 98.61%, precision of 98.13%, recall of 99.12%, and an ROC-AUC of 99.62%.

Conclusions: These results suggest that interpretable and personalized multimodal fusion can help practitioners perform effective and accurate ASD diagnosis. Nevertheless, because evaluation relied on stratified cross-validation and a single held-out split, future research should validate the framework on larger, multi-site datasets and under different partitioning schemes to ensure robustness across heterogeneous populations.
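To make the final stage of the pipeline concrete, the sketch below shows one way a hypernetwork can emit subject-specific weights for a small MLP head conditioned on a fused multimodal embedding. This is a minimal PyTorch illustration, not the authors' implementation: the class name HyperClassifier, the 64-dimensional latent space, and all layer sizes are illustrative assumptions.

```python
# Minimal sketch of a hypernetwork-based classifier head (assumed dimensions).
import torch
import torch.nn as nn

class HyperClassifier(nn.Module):
    def __init__(self, latent_dim: int = 64, hidden_dim: int = 32, n_classes: int = 2):
        super().__init__()
        self.latent_dim, self.hidden_dim, self.n_classes = latent_dim, hidden_dim, n_classes
        # Hypernetwork: maps the fused embedding to the weights and biases
        # of a one-hidden-layer MLP head, generated per subject.
        n_params = (latent_dim * hidden_dim + hidden_dim      # layer 1
                    + hidden_dim * n_classes + n_classes)     # layer 2
        self.hyper = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, n_params)
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, latent_dim) fused embedding, e.g. from the autoencoder.
        B = z.size(0)
        p = self.hyper(z)
        i = 0
        W1 = p[:, i:i + self.latent_dim * self.hidden_dim].view(B, self.hidden_dim, self.latent_dim)
        i += self.latent_dim * self.hidden_dim
        b1 = p[:, i:i + self.hidden_dim]
        i += self.hidden_dim
        W2 = p[:, i:i + self.hidden_dim * self.n_classes].view(B, self.n_classes, self.hidden_dim)
        i += self.hidden_dim * self.n_classes
        b2 = p[:, i:]
        # Apply the generated, subject-specific MLP to the same embedding.
        h = torch.relu(torch.bmm(W1, z.unsqueeze(-1)).squeeze(-1) + b1)
        logits = torch.bmm(W2, h.unsqueeze(-1)).squeeze(-1) + b2
        return logits

# Example: classify 4 subjects from 64-d fused embeddings -> logits of shape (4, 2).
model = HyperClassifier()
logits = model(torch.randn(4, 64))
```

The design choice illustrated here is that the classifier's parameters are themselves a function of the subject's embedding, which is what allows the model to personalize its decision boundary per subject rather than sharing one fixed MLP across the cohort.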
