An efficient transformer architecture with depthwise separable convolutions for high-accuracy underwater acoustic target recognition


Abstract

Underwater Acoustic Target Recognition (UATR) plays a vital role in maritime security and defense, requiring accurate and efficient classification of marine vessels based on their sonar acoustic emissions. Traditional recognition systems rely on handcrafted features and shallow classifiers, which often struggle with complex acoustic patterns and impose substantial computational overhead. Although deep learning methods have improved recognition accuracy, their high computational demands hinder real-time deployment on resource-constrained platforms. We propose the Depthwise Separable Convolutional Multihead Transformer (DCMT), which combines depthwise separable convolutions for localized feature extraction with multi-head self-attention Transformer branches for global contextual modeling. The model contains two parallel Transformer branches, with 4-head and 8-head attention structures, that process complementary features; their outputs are fused via Global Average Pooling and Global Max Pooling to form a more discriminative feature vector. The model takes the following acoustic features as input: Zero Crossing Rate (ZCR), Root Mean Square Energy (RMS-Energy), Mel-Frequency Cepstral Coefficients (MFCCs), and Chroma. To enhance the generalization of the DCMT model, CutMix data augmentation is used to synthetically increase data variability by combining audio segments from different classes. The proposed DCMT model, with 0.7 million parameters, was evaluated on the public benchmark datasets DeepShip and ShipsEar, requiring 0.847 GFLOPs and 0.94 GFLOPs and reaching classification accuracies of 97.53% and 98.19%, respectively. The model attained even higher classification accuracy on the QiandaoEar22 subsets (SpeedBoat, KaiYuan, and UUV), reaching 95.08%, 98.24%, and 99.68%, respectively.
Moreover, the proposed DCMT model achieves an average inference time of 3.8 ms at 131.6 FPS, outperforming existing models and the baseline UATR-Transformer (4.3 ms, 230.8 FPS) in both latency and accuracy. This lightweight model exhibits strong potential for real-time deployment in marine environments.
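Two of the techniques named in the abstract, 1-D CutMix augmentation on waveforms and the GAP/GMP fusion of the two Transformer branches, can be illustrated with a minimal NumPy sketch. The function names, the segment-splicing strategy, and the label-mixing rule below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def cutmix_audio(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Hypothetical 1-D CutMix: splice a random segment of waveform x2
    into x1 and mix the one-hot labels by the spliced proportion."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)            # target proportion of x1 kept
    n = len(x1)
    cut_len = int(round((1.0 - lam) * n))   # samples replaced by x2
    start = rng.integers(0, n - cut_len + 1)
    x = x1.copy()
    x[start:start + cut_len] = x2[start:start + cut_len]
    lam_eff = 1.0 - cut_len / n             # actual proportion after rounding
    y = lam_eff * y1 + (1.0 - lam_eff) * y2
    return x, y

def fuse_branches(feat_4head, feat_8head):
    """Assumed fusion: concatenate Global Average and Global Max Pooling
    of each branch's (time, channels) feature map into one vector."""
    pooled = [np.concatenate([f.mean(axis=0), f.max(axis=0)])
              for f in (feat_4head, feat_8head)]
    return np.concatenate(pooled)           # length = 2 * (C4 + C8)
```

With this label-mixing rule the mixed label weights always sum to 1, so the augmented pair remains a valid soft target for cross-entropy training.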
