Data in Brief (Feb 2024)

AmodalAppleSize_RGB-D dataset: RGB-D images of apple trees annotated with modal and amodal segmentation masks for fruit detection, visibility and size estimation

  • Jordi Gené-Mola,
  • Mar Ferrer-Ferrer,
  • Jochen Hemming,
  • Pieter van Dalfsen,
  • Dirk de Hoog,
  • Ricardo Sanz-Cortiella,
  • Joan R. Rosell-Polo,
  • Josep-Ramon Morros,
  • Verónica Vilaplana,
  • Javier Ruiz-Hidalgo,
  • Eduard Gregorio

Journal volume & issue: Vol. 52, p. 110000

Abstract


The present dataset comprises a collection of RGB-D apple tree images that can be used to train and test computer vision-based fruit detection and sizing methods. It encompasses two distinct sub-sets acquired in a Fuji and an Elstar apple orchard. The Fuji apple orchard sub-set consists of 3925 RGB-D images containing a total of 15,335 apples annotated with both modal and amodal apple segmentation masks. Modal masks denote the visible portions of the apples, whereas amodal masks encompass both visible and occluded apple regions. Notably, this dataset is the first public resource to incorporate on-tree fruit amodal masks, addressing a critical gap in existing datasets and enabling the development of robust automatic fruit sizing methods and accurate fruit visibility estimation, particularly in the presence of partial occlusions. In addition to the fruit segmentation masks, the dataset also includes the fruit size (calliper) ground truth for each annotated apple. The second sub-set comprises 2731 RGB-D images capturing five Elstar apple trees at four distinct growth stages. This sub-set includes the mean fruit diameter of each tree at every growth stage and serves as a valuable resource for evaluating fruit sizing methods trained with the first sub-set. These data were employed in the research paper titled “Looking behind occlusions: a study on amodal segmentation for robust on-tree apple fruit size estimation” [1].
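The modal/amodal distinction described above directly supports a simple per-fruit visibility measure: the fraction of the full (amodal) fruit region that is actually visible (modal). The sketch below is illustrative only; the function and variable names are assumptions and do not reflect the dataset's file structure or the methods of the associated paper. It assumes the two masks are aligned binary arrays for the same apple.

```python
import numpy as np

def fruit_visibility(modal_mask: np.ndarray, amodal_mask: np.ndarray) -> float:
    """Visibility = area of the visible (modal) mask divided by the area
    of the full visible+occluded (amodal) mask for the same fruit."""
    modal_area = np.count_nonzero(modal_mask)
    amodal_area = np.count_nonzero(amodal_mask)
    if amodal_area == 0:
        return 0.0
    return modal_area / amodal_area

# Toy 5x5 example: the amodal mask covers the whole fruit extent,
# while one column of the modal mask is occluded (e.g. by a leaf).
amodal = np.zeros((5, 5), dtype=bool)
amodal[1:4, 1:4] = True          # full (visible + occluded) fruit region
modal = amodal.copy()
modal[:, 3] = False              # occluded part removed from the modal mask
print(f"visibility = {fruit_visibility(modal, amodal):.2f}")  # ~0.67
```

Under this reading, a visibility of 1.0 corresponds to a fully visible apple, and lower values indicate increasing occlusion, which is the regime where amodal masks are intended to improve size estimation.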

Keywords