Abstract
This article presents experimental research on engine fault diagnosis through the integration of bidirectional transfer learning, federated learning, and a new hybrid Transformer and Deep Neural Network (DNN) architecture for out-of-distribution (OOD) testing. Two distinct engine datasets were used: one for standard car engines and generators, and the other for engines of remote-controlled model cars and drones. In the first step, bidirectional transfer learning with conventional DNNs was applied to bridge the domain gap between these heterogeneous engine datasets, enabling effective fault diagnosis for both small and large engines. In the second step, a federated learning system was developed to support collaborative DNN training on decentralized engine data, preserving data privacy while improving overall generalization. Finally, the research proposes a federated transfer learning approach built on a Transformer-DNN hybrid architecture. The Transformer component, with positional encoding and multi-head attention mechanisms, was tailored to capture sequential dependencies in engine operational data, whereas the DNN component performed discriminative feature extraction for classification. The results show that transfer learning and federated transfer learning are highly effective for fault diagnosis of small engines, while OOD testing based on the hybrid architecture substantially improves fault-detection accuracy for large engines. Comparative experiments demonstrate significant gains in classification accuracy and robustness over baseline DNN models. These results provide a convincing basis for developing scalable, privacy-preserving predictive maintenance systems that can be deployed across a range of engine applications in both automotive and unmanned-systems environments.
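To make the hybrid architecture concrete, the following is a minimal NumPy sketch of its forward pass: sinusoidal positional encoding and one multi-head self-attention block over a windowed engine signal, followed by a small DNN head that produces fault-class logits. All dimensions, the number of fault classes, and the random weight initialization are illustrative assumptions, not the paper's actual configuration; in practice the weights would be learned.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # Standard sinusoidal positional encoding (sin on even dims, cos on odd).
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, num_heads, rng):
    # One self-attention layer with randomly initialized projections
    # (stand-ins for learned parameters).
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    Wq, Wk, Wv, Wo = (rng.standard_normal((d_model, d_model)) * 0.02
                      for _ in range(4))
    # Project and split into heads: (num_heads, seq_len, d_head).
    q = (x @ Wq).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    k = (x @ Wk).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    v = (x @ Wv).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    scores = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_head))
    out = (scores @ v).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ Wo

def hybrid_forward(signal, num_classes=4, num_heads=4, seed=0):
    # Transformer block captures sequential dependencies; a small DNN head
    # maps mean-pooled features to fault-class logits.
    rng = np.random.default_rng(seed)
    seq_len, d_model = signal.shape
    x = signal + positional_encoding(seq_len, d_model)
    x = x + multi_head_attention(x, num_heads, rng)  # residual connection
    pooled = x.mean(axis=0)                          # pool over time steps
    W1 = rng.standard_normal((d_model, 64)) * 0.02
    W2 = rng.standard_normal((64, num_classes)) * 0.02
    h = np.maximum(pooled @ W1, 0.0)                 # ReLU hidden layer
    return h @ W2                                    # class logits

# Hypothetical input: a 128-step window with 32 features per step.
logits = hybrid_forward(np.random.default_rng(1).standard_normal((128, 32)))
print(logits.shape)  # (4,)
```

In a federated transfer learning setting, each client would run this forward pass locally and share only model updates (never raw engine data) with a central aggregator, which is what provides the privacy guarantee described above.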