Abstract
Sweetpotato grading, particularly for defects, is a labor-intensive operation often constrained by inconsistencies in manual inspection. To facilitate the development of automated quality grading systems, this article presents a unique, unified machine vision dataset specifically for multi-view quality inspection and grading of sweetpotatoes. The dataset consolidates data from two independent experimental campaigns to enhance diversity in sample characteristics and imaging conditions. It comprises 390 samples divided into two subsets: Subset A consists of 123 fresh-market sweetpotatoes sourced from grocery stores and imaged under ambient indoor lighting at a resolution of 1920 × 1080 pixels, and Subset B includes 267 sweetpotatoes of two varieties harvested from a research station and imaged in an enclosed chamber under controlled illumination at a resolution of 1280 × 720 pixels. In both subsets, sweetpotato samples were rotated on a custom-designed roller conveyor for multi-view imaging, and RGB (red-green-blue) frames were extracted from recorded video streams and labeled for individual sweetpotato instances. The curated dataset contains a total of 1400 images (Subset A: 232, Subset B: 1168) with 3700 annotated instances, together with 39 raw video recordings and physical measurements of the samples. Each instance is labeled with a polygon segmentation mask and assigned a quality grade (Grade 1, Grade 2, or Grade 3) based on surface defect severity. This dataset represents the first publicly available machine vision dataset dedicated to automated sweetpotato grading. It provides a diverse and valuable resource for training and evaluating computer vision algorithms and models for instance segmentation and surface-quality assessment of sweetpotatoes and beyond.