Personalised pose estimation from single-plane moving fluoroscope images using deep convolutional neural networks


Abstract

Measuring joint kinematics is a key requirement for a plethora of biomechanical research and applications. While x-ray-based systems avoid the soft-tissue artefacts that arise in skin-mounted measurement systems, extracting an object's pose (translation and rotation) from x-ray images is a time-consuming and expensive task. Based on about 106'000 annotated images of knee implants, collected over the last decade with our moving fluoroscope during activities of daily living, we trained a deep-learning model to automatically estimate the 6D poses of the femoral and tibial implant components. By pretraining a single stage of our architecture on renderings of the implant geometries, our approach offers personalised predictions of the implant poses, even for unseen subjects. For 50% of the test samples, our approach predicted the pose of both implant components to within about 0.75 mm (in-plane translation), 25 mm (out-of-plane translation), and 2° (all Euler-angle rotations). Over 90% of the test samples, which included heavily occluded and low-contrast images, translations were predicted to within 1.5 mm (in-plane) and 30 mm (out-of-plane), and rotations to within 3-4°. Importantly, this approach now allows pose estimation to be performed in a fully automated manner.
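The abstract reports pose errors split into in-plane translation (the fluoroscope's image plane), out-of-plane translation (the depth axis, which is poorly constrained in single-plane imaging), and per-convention Euler-angle rotations. A minimal sketch of how such errors can be computed from predicted and ground-truth 6D poses is given below; all function names are illustrative assumptions (the paper's code and exact Euler convention are not given here), and a Z-Y-X Euler convention is assumed purely for the example.

```python
# Hedged sketch: comparing a predicted 6D implant pose against ground truth.
# The split into in-plane / out-of-plane translation follows the abstract's
# metrics; the Euler convention and all names are assumptions, not the
# authors' actual implementation.
import numpy as np

def euler_zyx_to_matrix(angles_deg):
    """Rotation matrix from intrinsic Z-Y-X Euler angles (degrees).
    The paper's exact Euler convention is not specified; Z-Y-X is assumed."""
    rz, ry, rx = np.deg2rad(angles_deg)
    cz, sz = np.cos(rz), np.sin(rz)
    cy, sy = np.cos(ry), np.sin(ry)
    cx, sx = np.cos(rx), np.sin(rx)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def pose_errors(t_pred, t_true, R_pred, R_true):
    """Split the translation error into in-plane (x, y) and out-of-plane
    (z, the depth axis of a single-plane fluoroscope) components, and
    report the geodesic rotation error in degrees."""
    dt = t_pred - t_true
    in_plane = np.linalg.norm(dt[:2])   # mm, error within the image plane
    out_of_plane = abs(dt[2])           # mm, depth error along the x-ray axis
    # Relative rotation between prediction and ground truth, and its angle
    R_rel = R_pred.T @ R_true
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_angle))
    return in_plane, out_of_plane, angle

# Example: 1 mm in-plane offset, 20 mm depth offset, 2° rotation about z
t_true = np.array([0.0, 0.0, 500.0])
t_pred = np.array([0.6, 0.8, 520.0])
R_true = euler_zyx_to_matrix([0.0, 0.0, 0.0])
R_pred = euler_zyx_to_matrix([2.0, 0.0, 0.0])
ip, oop, ang = pose_errors(t_pred, t_true, R_pred, R_true)
print(round(ip, 2), round(oop, 2), round(ang, 2))  # → 1.0 20.0 2.0
```

The large tolerance on out-of-plane translation (25-30 mm vs. ~1 mm in-plane) reflects the weak depth sensitivity of single-plane projection imaging: moving an object along the x-ray axis changes the image only through slight scaling, so depth is inherently much harder to estimate than in-plane position.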
