Data in Brief (Jun 2020)

Fuji-SfM dataset: A collection of annotated images and point clouds for Fuji apple detection and location using structure-from-motion photogrammetry

  • Jordi Gené-Mola,
  • Ricardo Sanz-Cortiella,
  • Joan R. Rosell-Polo,
  • Josep-Ramon Morros,
  • Javier Ruiz-Hidalgo,
  • Verónica Vilaplana,
  • Eduard Gregorio

Journal volume & issue
Vol. 30
p. 105591

Abstract

The present dataset contains colour images acquired in a commercial Fuji apple orchard (Malus domestica Borkh. cv. Fuji) to reconstruct 3D models of 11 trees using structure-from-motion (SfM) photogrammetry. The data provided in this article are related to the research article entitled “Fruit detection and 3D location using instance segmentation neural networks and structure-from-motion photogrammetry” [1]. The Fuji-SfM dataset includes: (1) a set of 288 colour images and the corresponding annotations (apple segmentation masks) for training instance segmentation neural networks such as Mask R-CNN; (2) a set of 582 images defining a motion sequence of the scene, which was used to generate the 3D model of 11 Fuji apple trees containing 1455 apples by means of SfM; (3) the 3D point cloud of the scanned scene with the corresponding apple position ground truth in global coordinates. As such, this is the first fruit detection dataset that contains images acquired in a motion sequence to build the 3D model of the scanned trees with SfM and that includes the corresponding 2D and 3D apple location annotations. These data allow the development, training, and testing of fruit detection algorithms based on RGB images, on coloured point clouds, or on a combination of both types of data.
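The abstract describes three parts of the dataset: training images with apple segmentation masks, a motion-sequence image set for SfM reconstruction, and a 3D point cloud with apple positions in global coordinates. The sketch below illustrates how such data could be loaded in Python; all folder and file names (DATASET_ROOT, images/, annotations/, sfm_sequence/, point_cloud.ply, apple_locations_gt.csv) are hypothetical placeholders, not the dataset's documented layout.

```python
# Minimal loading sketch for a dataset organised like Fuji-SfM.
# File names and formats are assumptions for illustration only;
# consult the dataset documentation for the actual structure.
from pathlib import Path

import numpy as np
from PIL import Image
import open3d as o3d  # point-cloud I/O; any PLY reader would also work

DATASET_ROOT = Path("Fuji-SfM_dataset")  # hypothetical root folder

# (1) Training image and its apple segmentation mask (e.g. for Mask R-CNN)
image = np.array(Image.open(DATASET_ROOT / "images" / "img_0001.jpg"))
mask = np.array(Image.open(DATASET_ROOT / "annotations" / "img_0001_mask.png"))

# (2) Motion-sequence images used to build the SfM reconstruction
sfm_images = sorted((DATASET_ROOT / "sfm_sequence").glob("*.jpg"))

# (3) 3D point cloud of the scene and apple positions in global coordinates
cloud = o3d.io.read_point_cloud(str(DATASET_ROOT / "point_cloud.ply"))
apple_xyz = np.loadtxt(DATASET_ROOT / "apple_locations_gt.csv", delimiter=",")

print(image.shape, mask.shape, len(sfm_images),
      np.asarray(cloud.points).shape, apple_xyz.shape)
```

Under these assumptions, the image/mask pairs feed 2D detection training, while the point cloud and apple coordinates support evaluation of 3D fruit location.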

Keywords