Autostereoscopic 3D Measurement Based on Adaptive Focus Volume Aggregation


Abstract

Autostereoscopic three-dimensional measuring systems are a class of portable, fast precision metrology instruments. The systems are based on integral imaging, which uses a micro-lens array in front of an image sensor to observe measured parts from multiple perspectives. Since autostereoscopic measuring systems can rapidly obtain longitudinal and lateral information within a single snapshot, the three-dimensional profiles of the measured parts can be reconstructed by shape from focus. In general, the reconstruction process consists of data acquisition, pre-processing, digital refocusing, focus measures, and depth estimation. The accuracy of depth estimation is determined by the focus volume generated by focus measure operators, which can be sensitive to noise introduced during digital refocusing. Without prior knowledge and surface information, directly estimated depth maps usually contain severe noise and incorrect representations of continuous surfaces. To eliminate the effects of refocusing noise while retaining the robustness of traditional focus measure methods, an adaptive focus volume aggregation method based on convolutional neural networks is presented to optimize the focus volume for more accurate depth estimation. Since the large amount of data and ground truth needed for model convergence is costly to acquire, backpropagation is performed for every sample under an unsupervised strategy. The training strategy makes use of a smoothness constraint and an identical distribution constraint that restricts the difference between the distribution of the network output and the distribution of the ideal depth estimation. Experimental results show that the proposed adaptive aggregation method significantly reduces the noise during depth estimation and retains more accurate surface profiles. As a result, the autostereoscopic measuring system can directly recover surface profiles from raw data without any prior information.
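The baseline pipeline that the abstract builds on (focus measure operators applied to a refocused stack, followed by per-pixel depth estimation) can be sketched as follows. This is a minimal illustration using a modified-Laplacian focus measure and an argmax depth rule, both common choices in the shape-from-focus literature; the paper's actual operators, refocusing step, and CNN-based aggregation are not shown here.

```python
import numpy as np

def focus_measure(img):
    """Modified-Laplacian focus measure (one common operator;
    the paper's choice may differ). Higher values mean sharper pixels."""
    lap_x = np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    lap_y = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
    return lap_x + lap_y

def depth_from_focus(stack):
    """stack: (N, H, W) digitally refocused slices at N depth planes.
    Builds the focus volume and returns, per pixel, the index of the
    sharpest slice -- the naive depth map that the proposed adaptive
    aggregation would refine."""
    volume = np.stack([focus_measure(s) for s in stack])  # focus volume (N, H, W)
    return np.argmax(volume, axis=0)

# Toy example: a synthetic 5-plane stack in which slice 2 carries
# high-frequency texture (i.e., is in focus) at every pixel.
rng = np.random.default_rng(0)
stack = rng.normal(0.0, 0.01, (5, 8, 8))
stack[2] += np.tile([0.0, 1.0], (8, 4))  # alternating columns on slice 2
depth = depth_from_focus(stack)          # expected to be 2 everywhere
```

In a real system the argmax is noisy exactly as the abstract describes, which is why the focus volume is aggregated (here, hypothetically, by a CNN) before the depth is read out.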
