3DPotatoTwin: a paired potato tuber dataset for 3D multi-sensory fusion

Abstract

Accurate 3D phenotyping of agricultural produce remains challenging due to the trade-off between reconstruction quality and acquisition throughput in existing sensing technologies. While RGB-D cameras enable high-throughput scanning in operational settings such as harvesting conveyors, they produce incomplete, low-quality 3D models. Conversely, close-range Structure-from-Motion (SfM) yields high-quality reconstructions but is unsuitable for high-throughput field application. This study bridges the gap with 3DPotatoTwin, a paired dataset of 339 tuber samples across three cultivars collected in Hokkaido, Japan. The dataset uniquely combines: (1) conveyor-acquired RGB-D point clouds, (2) ground-truth measurements, (3) SfM reconstructions acquired in a controlled indoor environment, and (4) aligned model pairs with transformation matrices. The multi-sensor alignment employs a semi-supervised pin-guided pipeline incorporating single-pin extraction and referencing, cross-strip matching, and binary-color-enhanced ICP, achieving a registration accuracy of 0.59 ± 0.11 mm. Beyond serving as a benchmark for 3D phenotyping algorithms, the dataset enables training of 3D completion networks that reconstruct high-quality 3D models from partial RGB-D point clouds. In addition, the proposed semi-automated annotation pipeline has the potential to accelerate 3D dataset generation for similar studies. The methodology demonstrates broader applicability for multi-sensor data fusion across crop phenotyping applications. The dataset and pipeline source code are publicly available on HuggingFace and GitHub, respectively.
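The final alignment step in the pipeline is an ICP refinement. As a point of reference for readers unfamiliar with the technique, the sketch below shows plain point-to-point ICP in NumPy (nearest-neighbour correspondence plus an SVD-based rigid fit). This is a minimal illustration of the underlying algorithm only, not the authors' binary-color-enhanced variant, and all function names here are our own.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst via SVD (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # fix an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=50, tol=1e-8):
    """Point-to-point ICP: iteratively align cloud `src` to cloud `dst`."""
    cur = src.copy()
    prev_err = np.inf
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest neighbours (adequate for small demo clouds;
        # a KD-tree would be used for real scan data)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(1)
        err = np.sqrt(d2[np.arange(len(cur)), idx]).mean()
        R, t = best_fit_transform(cur, dst[idx])
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        if abs(prev_err - err) < tol:  # converged: error no longer improving
            break
        prev_err = err
    return R_total, t_total, err

# Demo: recover a known small rigid motion between two copies of a cloud.
rng = np.random.default_rng(0)
dst = rng.random((100, 3))
angle = 0.05
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.01, -0.01, 0.02])
src = dst @ R_true.T + t_true
R_est, t_est, final_err = icp(src, dst)
```

In practice a color-aware weighting (as in the paper's binary-color-enhanced ICP) changes only the correspondence step; the SVD fit above stays the same.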
