Abstract
With the continuous development of Electronic Health Records (EHRs), heterogeneous medical data have become increasingly abundant, containing diverse entity types and complex semantic relationships that provide essential support for disease diagnosis. However, conventional heterogeneous graph neural networks struggle to distinguish semantic differences among multi-type nodes and k-hop neighbors, often leading to semantic confusion and vulnerability to noise, which limits their classification performance and generalization capability. To tackle these challenges, we present a novel framework named Heterogeneous Graph Transformer and Diffusion mechanism for Disease Diagnosis (TD4DD). Specifically, a k-hop hierarchical Transformer is introduced to capture multi-scale semantic dependencies across different hop layers, enabling the model to differentiate fine-grained semantic variations among neighbors at various distances. Additionally, a diffusion module is designed to alleviate noise interference by performing denoising in the latent space over auxiliary subgraphs constructed from different meta-paths, thereby generating more discriminative node representations. Finally, the model fuses structural information with the denoised embeddings to perform disease classification. Experiments on two real-world clinical datasets, MIMIC-III (7,000 patients, 5 disease categories) and MIMIC-IV (8,331 patients, 6 disease categories), demonstrate that TD4DD consistently outperforms existing baseline methods in both Micro-F1 and Macro-F1 scores, showing strong generalization ability. On MIMIC-III, TD4DD achieves a Micro-F1 of 88.29 and a Macro-F1 of 86.11; on MIMIC-IV, it reaches 83.60 and 83.94, respectively.
Ablation studies and t-SNE visualizations further validate the effectiveness of each module and the discriminative capability of the learned embeddings.