Attention-driven complementary information fusion network for sparse photoacoustic image reconstruction


Abstract

Photoacoustic tomography (PAT) is an emerging biomedical imaging modality that uniquely combines high spatial resolution with deep tissue penetration in a non-invasive manner, holding significant promise for diverse applications. However, image reconstruction quality in PAT severely degrades under limited-view data acquisition scenarios, such as those imposed by the physical constraints of intracavitary imaging. Conventional reconstruction methods (e.g., Delay-and-Sum, DAS) under these conditions typically yield images plagued by severe artifacts and loss of fine structural details. While deep learning (DL) approaches offer some improvement, existing post-processing methods still struggle to accurately recover intricate anatomical features from severely undersampled, limited-view data, often resulting in blurred details or persistent artifacts. To address these critical limitations, we propose DUAFF-Net, a novel dual-stream deep learning architecture. DUAFF-Net uniquely processes two complementary input representations in parallel: 1) conventional DAS reconstructions, and 2) pixel-wise interpolated raw data. The network employs a sophisticated two-stage feature fusion strategy to maximize information extraction and synergy. In the first stage, the Multi-scale Information Aggregation and Feature-refinement Module (MIAF-Module) enables early-stage cross-modal information complementarity and feature enhancement. Subsequently, the Global Context and Deep Fusion Module (GCDF-Module) focuses on holistic feature optimization and deep integration across the streams. These modules work synergistically to progressively refine the reconstruction. Extensive experiments on simulated PAT datasets of retinal vasculature and complex brain structures, as well as an in vivo mouse abdomen dataset, demonstrate that DUAFF-Net robustly generates high-quality images even under highly incomplete data conditions. 
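The Delay-and-Sum (DAS) baseline mentioned above can be sketched as follows. This is an illustrative minimal implementation only, not the authors' code: it assumes idealized point detectors in a 2D plane, a homogeneous speed of sound `c`, a sampling rate `fs`, and back-projects each sensor trace onto the image grid by its time-of-flight delay.

```python
import numpy as np

def delay_and_sum(sinogram, sensor_pos, grid_x, grid_y, c=1500.0, fs=40e6):
    """Minimal 2D DAS reconstruction sketch (illustrative assumptions only).

    sinogram   : (n_sensors, n_samples) recorded pressure signals
    sensor_pos : (n_sensors, 2) detector coordinates in meters
    grid_x/y   : 1D pixel coordinates of the reconstruction grid
    c, fs      : assumed speed of sound (m/s) and sampling rate (Hz)
    """
    n_sensors, n_samples = sinogram.shape
    image = np.zeros((len(grid_y), len(grid_x)))
    for iy, y in enumerate(grid_y):
        for ix, x in enumerate(grid_x):
            for s in range(n_sensors):
                sx, sy = sensor_pos[s]
                # Time-of-flight from pixel to detector, mapped to a sample index.
                dist = float(np.hypot(x - sx, y - sy))
                idx = int(dist / c * fs + 0.5)
                if idx < n_samples:
                    image[iy, ix] += sinogram[s, idx]
    return image / n_sensors
```

With few sensors (the limited-view case the paper targets), this back-projection produces the streak artifacts and blurred details that DUAFF-Net's post-processing is designed to remove.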
Quantitative evaluation shows that DUAFF-Net achieves substantial improvements over the standard DAS algorithm, with gains of ∼18.38 dB in Peak Signal-to-Noise Ratio (PSNR) and ∼0.69 in Structural Similarity Index (SSIM). Furthermore, DUAFF-Net consistently outperforms other state-of-the-art DL-based reconstruction models across multiple metrics, preserving fine details and suppressing artifacts more effectively, and thus offers a comprehensive performance advantage for limited-view PAT reconstruction.
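For reference, the PSNR metric reported above is computed from the mean squared error between a reconstruction and its ground truth. The sketch below shows the standard definition only (SSIM is more involved and omitted); it is not the paper's evaluation code, and the reported ∼18.38 dB gain is the difference between two such scores.

```python
import math

def mse(ref, img):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((r - x) ** 2 for r, x in zip(ref, img)) / len(ref)

def psnr(ref, img, max_val=1.0):
    """PSNR in dB: 10 * log10(MAX^2 / MSE), for pixel intensities in [0, max_val]."""
    err = mse(ref, img)
    if err == 0.0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / err)
```

For example, a uniform per-pixel error of 0.1 on a [0, 1] intensity scale gives an MSE of 0.01 and hence a PSNR of 20 dB.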
