Heterogeneous graph transformer and diffusion model for disease diagnosis


Abstract

With the continuous development of Electronic Health Records (EHRs), medical heterogeneous data have become increasingly abundant, containing diverse types of entities and complex semantic relationships that provide essential support for disease diagnosis. However, conventional heterogeneous graph neural networks struggle to distinguish semantic differences among multi-type nodes and k-hop neighbors, often leading to semantic confusion and vulnerability to noise, which limits their classification performance and generalization capability. To tackle these challenges, we present a novel framework named Heterogeneous Graph Transformer and Diffusion mechanism for Disease Diagnosis (TD4DD), in which a hierarchical Transformer captures multi-scale semantic dependencies across k-hop neighborhoods and a diffusion module performs latent-space denoising to alleviate noise interference. Specifically, the k-hop hierarchical Transformer captures multi-scale dependencies across different hop layers, enabling the differentiation of fine-grained semantic variations among neighbors at various distances. Additionally, the diffusion module handles noise in the data by performing denoising in the latent space through auxiliary subgraphs constructed using different meta-paths, thereby generating more discriminative node representations. Finally, the model fuses structural information with the denoised embeddings to accomplish disease classification. Experiments conducted on two real-world clinical datasets, MIMIC-III (7,000 patients, 5 disease categories) and MIMIC-IV (8,331 patients, 6 disease categories), demonstrate that TD4DD consistently outperforms existing baseline methods in terms of both Micro-F1 and Macro-F1 scores, showing strong generalization ability. On MIMIC-III, TD4DD achieves a Micro-F1 of 88.29 and a Macro-F1 of 86.11, while on MIMIC-IV it reaches 83.60 and 83.94, respectively.
Ablation studies and t-SNE visualizations further validate the effectiveness of each module and the discriminative power of the learned embeddings.
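The abstract does not provide implementation details, but the core idea of treating neighbors at different hop distances separately can be illustrated with a small sketch. The code below (an assumption for illustration, not the authors' implementation; the function names `k_hop_masks` and `hop_aggregate` and the fixed hop weights are hypothetical) extracts the set of neighbors at each exact shortest-path distance via boolean BFS on the adjacency matrix, then combines per-hop mean-pooled features with per-hop weights, standing in for the learned attention of the k-hop hierarchical Transformer:

```python
import numpy as np

def k_hop_masks(adj, K):
    """For each k = 1..K, return a boolean matrix whose (i, j) entry is True
    iff node j lies at exact shortest-path distance k from node i."""
    n = adj.shape[0]
    seen = np.eye(n, dtype=bool)            # distance 0: each node itself
    frontier = seen.copy()
    masks = []
    for _ in range(K):
        reached = (frontier.astype(int) @ adj) > 0  # expand one hop
        exact = reached & ~seen                     # newly reached => exact distance
        seen |= exact
        masks.append(exact)
        frontier = exact
    return masks

def hop_aggregate(x, masks, hop_weights):
    """Weighted sum of mean-pooled neighbor features, one term per hop level.
    Stands in for learned per-hop attention (here the weights are fixed)."""
    out = np.zeros_like(x)
    for w, m in zip(hop_weights, masks):
        deg = m.sum(axis=1, keepdims=True)                       # neighbors per node
        pooled = (m.astype(float) @ x) / np.maximum(deg, 1)      # mean over hop-k set
        out += w * pooled
    return out
```

For example, on a path graph 0-1-2-3, node 0's hop-1 set is {1} and its hop-2 set is {2}, so its aggregated representation mixes those two features with the chosen hop weights; in TD4DD the mixing coefficients would instead be produced by the hierarchical Transformer.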
