Abstract
This study aims to develop and evaluate a deep learning (DL) model for classifying tendon gliding sounds recorded with digital stethoscopes (Nexteto; ShareMedical, Nagoya, Japan). Specifically, we investigate whether differences in tendon excursion and biomechanics produce distinct acoustic signatures that can be identified through spectrogram analysis and machine learning (ML). Tendon disorders often present characteristic tactile and acoustic features, such as clicking or resistance during movement. In recent years, artificial intelligence (AI) and ML have achieved significant success in medical diagnostics, particularly through pattern recognition in medical imaging. Leveraging these advancements, we recorded tendon gliding sounds from the thumb and index finger in healthy volunteers and transformed the recordings into spectrograms for analysis. Although the sample size was small, DL models classifying the spectrograms by their frequency characteristics achieved high accuracy. These findings indicate that AI-based models can accurately distinguish between different tendon sounds, suggesting their potential as a non-invasive diagnostic tool for musculoskeletal disorders such as tenosynovitis or carpal tunnel syndrome, potentially aiding early diagnosis and treatment planning.
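To illustrate the core idea, the sketch below (not the authors' actual pipeline; sampling rate, window length, and the synthetic test tones are illustrative assumptions) converts a short audio signal into a log-power spectrogram with SciPy, the time-frequency representation a DL classifier would consume, and uses a toy spectral-centroid feature to show how two sounds with different frequency content become separable.

```python
# Hedged sketch: spectrogram extraction and a toy frequency feature.
# FS, nperseg, and the synthetic tones are assumptions for illustration.
import numpy as np
from scipy.signal import spectrogram

FS = 8000  # assumed sampling rate (Hz)

def log_spectrogram(audio, fs=FS, nperseg=256):
    """Return frequencies, times, and a log-power spectrogram
    (frequency bins x time frames)."""
    f, t, sxx = spectrogram(audio, fs=fs, nperseg=nperseg)
    return f, t, np.log10(sxx + 1e-10)

def spectral_centroid(audio):
    """Power-weighted mean frequency of the spectrogram."""
    f, _, log_s = log_spectrogram(audio)
    power = 10 ** log_s
    return float((f @ power.sum(axis=1)) / power.sum())

# Two synthetic "tendon sounds": a low- and a high-frequency tone in noise,
# standing in for sounds with distinct spectral signatures.
rng = np.random.default_rng(0)
time = np.arange(FS) / FS  # 1 second of samples
low = np.sin(2 * np.pi * 200 * time) + 0.05 * rng.standard_normal(FS)
high = np.sin(2 * np.pi * 1500 * time) + 0.05 * rng.standard_normal(FS)

print(spectral_centroid(low) < spectral_centroid(high))  # the tones separate
```

In the study itself, a DL model would learn such discriminative frequency patterns directly from the spectrogram images rather than from a hand-crafted feature.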