Abstract
Scheduled maintenance and condition monitoring of power transformers in smart grids are essential to reduce downtime and preserve economic benefits. To minimize energy losses during inspection, non-invasive fault diagnosis techniques such as thermographic imaging enable continuous monitoring of transformer health with minimal out-of-service time. Deep learning (DL) has proven to be a fast and efficient intelligent diagnostic tool. In this paper, a DL-based thermography method called Trans-Light is proposed for detecting interturn faults in transformers and identifying short-circuit severity. Trans-Light extracts deep features from two deep layers of a convolutional neural network (CNN) rather than relying on a single layer, thereby capturing more intricate patterns. Moreover, a Dual-Tree Complex Wavelet Transform (DT-CWT) is adopted, which offers two enhancements: first, it acquires time-frequency knowledge in addition to the already obtained spatial information; second, it reduces the high dimensionality of the deep features. Trans-Light then fuses the extracted deep features and applies feature selection to further reduce their size, decreasing the computational burden and shortening training and classification times. To validate the proposed scheme's diagnostic performance and robustness, different combinations of two CNN models, two feature selection methods, and six classifiers were tested within the Trans-Light framework under both noise-free and noisy conditions. Experimental results indicate that the combination of the LDA classifier with the ResNet-18 CNN model, trained on merged deep features refined by the chi-square (χ²) selection approach, attained superior performance under noise-free conditions. Compared with its counterparts in previous work, this configuration performs best, using the fewest features while maintaining 100% classification accuracy. Moreover, it remained robust under two different noise types, again with a minimal feature dimension, thereby minimizing computational load and implementation complexity.
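The pipeline summarized above (two-layer deep feature extraction, DT-CWT reduction, feature fusion, chi-square selection, LDA classification) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random arrays stand in for deep features that would actually come from two layers of a pretrained ResNet-18, the `reduce_dim` helper is a hypothetical placeholder for the DT-CWT reduction step, and the class count and feature sizes are assumptions for demonstration only.

```python
# Hypothetical sketch of the Trans-Light pipeline; random features stand in
# for real ResNet-18 activations, and DT-CWT is replaced by a placeholder.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_samples = 120
y = rng.integers(0, 3, size=n_samples)  # 3 hypothetical fault classes

# Stand-ins for deep features taken from two different CNN layers
feats_layer_a = rng.random((n_samples, 512))
feats_layer_b = rng.random((n_samples, 256))

def reduce_dim(x, k=64):
    """Placeholder for the DT-CWT dimensionality-reduction step."""
    return x[:, :k]

# Fuse the (reduced) deep features from both layers
fused = np.hstack([reduce_dim(feats_layer_a), reduce_dim(feats_layer_b)])

# Chi-square selection (requires non-negative inputs, satisfied here)
selector = SelectKBest(chi2, k=32).fit(fused, y)
selected = selector.transform(fused)

# Final lightweight classifier
clf = LinearDiscriminantAnalysis().fit(selected, y)
print(selected.shape)  # (120, 32)
```

The design choice the abstract emphasizes is visible here: most of the pipeline is cheap feature manipulation, so only a small selected feature set reaches the classifier, which keeps training and inference fast.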