Abstract
Urban gardens promote urban biodiversity by providing diverse ground covers that support habitat provision, pollination, pest control, and soil functions. However, in the absence of high-spatial-resolution imagery, their spatial heterogeneity remains poorly mapped, limiting our understanding of how these features support ecosystem services. This study presents a high-resolution dataset derived from unmanned aerial vehicle (UAV) RGB imagery for the semantic segmentation of diverse ground covers in urban community gardens. The dataset consists of 2,521 images processed into 24 orthomosaics, acquired in 2021-2022 at five garden locations in Munich, Germany. Each orthomosaic (18.9-146.4 M px; 3.2-7.9 mm ground resolution) is manually annotated into eight ground-cover classes (grass, herb, litter, soil, stone, straw, wood, and woodchip). We evaluated deep-learning segmentation models, including UNet and DeepLabV3+. DeepLabV3+ performed best (overall accuracy = 93.2%; Intersection over Union = 69.4), reliably distinguishing these visually complex classes. This dataset is intended to support research on urban biodiversity, habitat modelling, garden management, and remote sensing, and can be integrated with other fine-scale datasets to advance sustainable urban green planning.