Abstract
The ubiquity of commodity-level optical scanning devices and reconstruction technologies has enabled the general public to monitor their body-shape-related health status anywhere, anytime, without assistance from professionals. Commercial optical body scanning systems extract anthropometric measurements from virtual body shapes, from which body composition is estimated. However, in most cases, these estimates are limited to the total quantity of fat in the whole body rather than a fine-grained, voxel-level fat distribution. To bridge the gap between the 3D body shape and voxel-level fat distribution, we present a novel shape-based voxel-level body composition extrapolation method using multimodality registration. First, we optimize shape compliance between a generic body composition template and the 3D body shape. Then, we optimize data compliance between the shape-optimized body composition template and a pixel-level body composition reference obtained from dual-energy X-ray absorptiometry (DXA) assessment. We evaluate the performance of our method on multiple subjects. On average, the Root Mean Square Error (RMSE) of our body composition extrapolation is 1.19%, and the R-squared value between our estimates and the ground truth is 0.985. The experimental results show that our algorithm robustly estimates voxel-level body composition for 3D body shapes with a high degree of accuracy.