Abstract
The integration of artificial intelligence (AI) with ultrasonic biosensing presents a transformative opportunity for improving diagnostic accuracy in agricultural and biomedical applications. This study develops a data-driven deep learning model to address the challenge of acoustic artifacts in B-mode ultrasound imaging, specifically for sow pregnancy diagnosis. We designed a biosensing system centered on a 5.0 MHz mechanical sector-scanning ultrasound probe as the core biosensor for data acquisition. To overcome the limitations of traditional filtering methods, we introduced a lightweight deep neural network (DNN) based on the YOLOv8 architecture, trained on a purpose-built dataset of sow pregnancy ultrasound images featuring typical artifacts such as reverberation and acoustic shadowing. The model functions as an intelligent detection layer that identifies and masks artifact regions while simultaneously detecting and annotating key anatomical features. This combined detection-masking approach enables artifact-aware visualization enhancement, in which artifact regions are suppressed and diagnostic structures are highlighted for improved clinical interpretation. Experimental results demonstrate the superiority of the AI-enhanced approach, which achieves a mean Intersection over Union (IoU) of 0.89, a Peak Signal-to-Noise Ratio (PSNR) of 34.2 dB, a Structural Similarity Index (SSIM) of 0.92, and a clinically tested early-gestation accuracy of 98.1%, significantly outperforming traditional methods (IoU: 0.65, PSNR: 28.5 dB, SSIM: 0.72, accuracy: 76.4%). Crucially, the system maintains a single-image processing time of 22 ms, meeting the requirement for real-time clinical diagnosis. This research not only validates a robust AI-powered ultrasonic biosensing system for improving reproductive management in livestock but also establishes a reproducible, scalable framework for intelligent signal enhancement in broader biosensor applications.
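For reference, the three image-quality metrics reported above can be sketched as follows. This is an illustrative NumPy implementation, not the paper's evaluation code: the IoU here is for axis-aligned boxes, and the SSIM uses the single-window (global) formulation rather than the sliding-window variant commonly used in practice.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images of equal shape."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    """Single-window SSIM (global means/variances; no sliding window)."""
    x = x.astype(np.float64); y = y.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Higher is better for all three: IoU lies in [0, 1], SSIM in [-1, 1], and PSNR is unbounded above, which is why the gains reported in the abstract (0.65 → 0.89 IoU, 28.5 → 34.2 dB PSNR, 0.72 → 0.92 SSIM) all point in the same direction.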