Abstract
Health monitoring of complex systems is critical for ensuring reliability and achieving cost-effective reusability. However, deploying deep learning models in this domain is impeded by two primary constraints: the scarcity of high-quality fault samples and the limited computational resources available on board. To address these challenges, this paper proposes a Physics-Topology-Anchored Learning (PTAL) framework. The core innovation is the integration of physical inductive bias directly into the model architecture: PTAL incorporates a predefined adjacency matrix, derived from the system's physical mechanism, as a structural prior. This design anchors the neural network to explicit physical causality, constraining the hypothesis space and reducing the model's dependence on large-scale data. By coupling this physics-informed structure with a lightweight recurrent attention mechanism, the model further avoids the high computational overhead typical of generic large-scale networks. Experimental evaluations show that PTAL achieves a peak diagnostic accuracy of 97.8% with a low standard deviation of 0.1145, significantly outperforming baseline models in data-scarce regimes. These results confirm that the proposed model leverages physical bias to maintain a favorable trade-off between diagnostic performance and computational efficiency, making it well suited to the resource-constrained environments of complex systems.