Shape-based Three-dimensional Body Composition Extrapolation Using Multimodality Registration


Abstract

The ubiquity of commodity-level optical scanning devices and reconstruction technologies has enabled the general public to monitor their body-shape-related health status anywhere, anytime, without assistance from professionals. Commercial optical body scanning systems extract anthropometric measurements from the virtual body shapes, from which body composition is estimated. However, in most cases, these estimates are limited to whole-body fat quantity rather than a fine-grained voxel-level fat distribution. To bridge the gap between the 3D body shape and the fine-grained voxel-level fat distribution, we present a novel shape-based voxel-level body composition extrapolation method using multimodality registration. First, we optimize shape compliance between a generic body composition template and the 3D body shape. Then, we optimize data compliance between the shape-optimized body composition template and a body composition reference derived from a DXA pixel-level body composition assessment. We evaluate the performance of our method on multiple subjects. On average, the Root Mean Square Error (RMSE) of our body composition extrapolation is 1.19%, and the R-squared value between our estimate and the ground truth is 0.985. The experimental results show that our algorithm robustly estimates voxel-level body composition for 3D body shapes with a high degree of accuracy.
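The abstract reports RMSE and an R-squared value between the voxel-level estimate and the ground truth. As a minimal sketch of how such an evaluation can be computed (the function name, array layout, and percent-fat-per-voxel convention are our assumptions, not details from the paper):

```python
import numpy as np

def voxel_rmse_r2(estimate, ground_truth):
    """Compare a voxel-level fat estimate against a reference.

    Both inputs are arrays of percent fat per voxel (any shape);
    returns (RMSE, R-squared). Illustrative only -- the paper's
    exact evaluation protocol may differ.
    """
    est = np.asarray(estimate, dtype=float).ravel()
    ref = np.asarray(ground_truth, dtype=float).ravel()
    # Root mean square error over all voxels
    rmse = np.sqrt(np.mean((est - ref) ** 2))
    # Coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((ref - est) ** 2)
    ss_tot = np.sum((ref - ref.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, r2
```

With a perfect estimate the function returns an RMSE of 0 and an R-squared of 1; systematic deviations reduce R-squared toward (or below) zero.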
