Abstract
Learning from heterogeneous graphs under the joint constraints of data scarcity and data privacy is a significant challenge. Graph prompt learning offers a pathway to efficient few-shot adaptation, and federated learning provides a paradigm for decentralized training, but their direct integration on heterogeneous graphs is non-trivial due to structural complexity and the need for rigorous privacy guarantees. This paper proposes FedHGPrompt, a novel federated framework that bridges this gap through a cohesive three-layer design: a unification layer that employs dual templates to standardize heterogeneous graphs and tasks, an adaptation layer that uses trainable dual prompts to steer a frozen pre-trained model for few-shot learning, and a privacy layer that integrates a cryptographic secure aggregation protocol. Under this design, the central server accesses only aggregated updates, cryptographically safeguarding individual client data. Extensive evaluations on three real-world heterogeneous graph datasets (ACM, DBLP, and Freebase) demonstrate that FedHGPrompt achieves superior few-shot learning performance over existing federated graph learning baselines (including FedGCN, FedGAT, FedHAN, and FedGPL) while maintaining strong privacy assurances and practical communication efficiency. The framework thus establishes an effective approach to collaborative learning on distributed, heterogeneous graph data where privacy is paramount.