Pelvic floor muscle contraction automatic evaluation algorithm for pelvic floor muscle training biofeedback using self-performed ultrasound


Abstract

INTRODUCTION: Non-invasive biofeedback for pelvic floor muscle training (PFMT) is required for continuous training in home care. We therefore considered self-performed ultrasound (US) of the bladder in adult women using a handheld US device. However, US images are difficult to read, so assistance is needed when using US at home. In this study, we aimed to develop an algorithm for the automatic evaluation of pelvic floor muscle (PFM) contraction from self-performed bladder US videos, and to verify whether PFM contraction can be determined automatically from such videos.

METHODS: Women aged ≥ 20 years were recruited from the outpatient Urology and Gynecology departments of a general hospital or through snowball sampling. The researcher supported the participants in performing bladder US on themselves, and videos were obtained several times during PFMT. The US videos were used to develop an automatic evaluation algorithm. Supervised machine learning was then performed using expert classifications of PFM contraction as ground-truth data. Time-series features were generated from the x- and y-coordinate values of the bladder area, including the bladder base. The final model was evaluated for accuracy, area under the curve (AUC), recall, precision, and F1 score. The contribution of each feature variable to the classification ability of the model was estimated.

RESULTS: The 1144 videos obtained from 56 participants were analyzed. We split the data into training and test sets with 7894 time-series features. A light gradient boosting machine (LightGBM) model was selected, and the final model achieved an accuracy of 0.73, AUC of 0.91, recall of 0.66, precision of 0.73, and F1 score of 0.73. Movement of the y-coordinate of the bladder base was the most important feature.

CONCLUSION: This study showed that automated classification of PFM contraction from self-performed US videos is possible with high accuracy.
