Abstract
This paper presents a novel pipeline leak diagnosis framework that combines Savitzky-Golay scalograms with a lightweight deep learning architecture. Pipelines are critical for transporting fluids and gases, but leaks can lead to operational disruptions, environmental hazards, and financial losses. Leak events generate acoustic emissions (AE), captured as transient signals by AE sensors; however, these signals are often masked by noise and affected by the transported medium. To overcome this challenge, a fluid-independent detection approach is proposed that begins by acquiring AE data under various operational conditions, including multiple pinhole-leak intensities and the normal state. The transient signals are transformed into detailed scalograms using the Continuous Wavelet Transform (CWT), revealing subtle time-frequency patterns associated with leak events. To enhance these leak-specific features, a targeted Savitzky-Golay (SG) filter is applied, producing refined SG scalograms. These SG scalograms are then used to train a Convolutional Neural Network (CNN) and a newly developed lightweight Vision Transformer with streamlined self-attention (LViT-S), which autonomously learn local and global features, respectively. The LViT-S uses reduced embedding dimensions and fewer Transformer layers, significantly lowering computational cost while maintaining high performance. The extracted local and global features are merged into a unified feature vector that represents the diverse visual characteristics learned by each network through its respective loss function. This comprehensive feature representation is then passed to an Artificial Neural Network (ANN) for final classification, accurately identifying whether a leak is present and, if so, its severity.
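The CWT-plus-SG-filtering step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the wavelet choice, scale range, and SG window/polynomial parameters are assumptions for demonstration, using PyWavelets for the CWT and SciPy's Savitzky-Golay filter.

```python
import numpy as np
import pywt
from scipy.signal import savgol_filter

def sg_scalogram(signal, fs, scales=None, wavelet="morl",
                 window_length=11, polyorder=3):
    """Build a CWT scalogram and smooth each scale row with a
    Savitzky-Golay filter (all parameters here are illustrative)."""
    if scales is None:
        scales = np.arange(1, 65)  # 64 scales, an arbitrary choice
    coeffs, _ = pywt.cwt(signal, scales, wavelet,
                         sampling_period=1.0 / fs)
    scalogram = np.abs(coeffs)  # magnitude time-frequency map
    # SG smoothing along the time axis enhances slowly varying
    # leak-related structure while suppressing high-frequency noise
    return savgol_filter(scalogram, window_length, polyorder, axis=1)

# Example: a synthetic decaying burst standing in for an AE transient
fs = 100_000
t = np.arange(0, 0.01, 1 / fs)
ae = np.sin(2 * np.pi * 1000 * t) * np.exp(-t * 500)
sg = sg_scalogram(ae, fs)
print(sg.shape)  # (64, 1000): scales x time samples
```

In practice each smoothed scalogram would be rendered as an image and fed to the CNN and LViT-S branches.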
The effectiveness of the proposed method is evaluated under two different pressure conditions, two fluid types (gas and water), and three distinct leak sizes, achieving a high classification accuracy of 98.6%. Additionally, a comparative evaluation against four state-of-the-art methods demonstrates that the proposed framework consistently delivers superior accuracy across diverse operational scenarios.
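The fusion-and-classification stage of the pipeline, in which local CNN features and global LViT-S features are concatenated and passed to an ANN head, can be illustrated with a forward-pass sketch. The feature dimensions, hidden-layer size, and four-class setup (normal state plus three leak severities) are hypothetical, and the weights below are random stand-ins for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample feature vectors from the two branches
cnn_local = rng.standard_normal(128)   # local features (CNN branch)
vit_global = rng.standard_normal(64)   # global features (LViT-S branch)

# Merge into one unified feature vector by concatenation
fused = np.concatenate([cnn_local, vit_global])  # shape (192,)

# Minimal ANN head: one hidden ReLU layer, softmax over 4 classes
W1 = rng.standard_normal((192, 32)) * 0.1
b1 = np.zeros(32)
W2 = rng.standard_normal((32, 4)) * 0.1
b2 = np.zeros(4)

h = np.maximum(fused @ W1 + b1, 0.0)   # hidden layer activations
logits = h @ W2 + b2
probs = np.exp(logits - logits.max())
probs /= probs.sum()                   # softmax class probabilities
print(probs.shape)  # (4,)
```

The predicted class is then `probs.argmax()`; during training, the two branches are optimized with their own loss functions before their features are fused.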