Scientific Reports (Jun 2024)

Bellybutton: accessible and customizable deep-learning image segmentation

  • Sam Dillavou,
  • Jesse M. Hanlan,
  • Anthony T. Chieco,
  • Hongyi Xiao,
  • Sage Fulco,
  • Kevin T. Turner,
  • Douglas J. Durian

DOI
https://doi.org/10.1038/s41598-024-63906-y
Journal volume & issue
Vol. 14, no. 1
pp. 1–7

Abstract

The conversion of raw images into quantifiable data can be a major hurdle and time-sink in experimental research, and typically involves identifying region(s) of interest, a process known as segmentation. Machine learning tools for image segmentation are often specific to a set of tasks, such as tracking cells, or require substantial compute or coding knowledge to train and use. Here we introduce an easy-to-use (no coding required) image segmentation method, using a 15-layer convolutional neural network that can be trained on a laptop: Bellybutton. The algorithm trains on user-provided segmentations of example images, but, as we show, just one or even a sub-selection of one training image can be sufficient in some cases. We detail the machine learning method and give three use cases where Bellybutton correctly segments images despite substantial lighting, shape, size, focus, and/or structure variation across the region(s) of interest. Instructions for easy download and use, further details, and the datasets used in this paper are available at pypi.org/project/Bellybuttonseg.
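
To make the train-on-one-example idea concrete, below is a minimal illustrative sketch in PyTorch: a tiny convolutional network is fit to a single user-labeled image and mask, then applied to a new, unlabeled image. The architecture, layer sizes, and training loop are placeholder assumptions for illustration only; they are not the Bellybutton network or its interface, which requires no coding and is installed from the PyPI package linked above.

    # Illustrative sketch only: a small CNN trained on one labeled example image.
    # This is NOT the Bellybutton architecture or API; sizes and loop are assumptions.
    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 1),  # one logit per pixel: inside vs. outside a region
            )

        def forward(self, x):
            return self.net(x)

    # One grayscale training image and its hand-drawn binary mask (dummy data here).
    image = torch.rand(1, 1, 128, 128)                 # (batch, channel, height, width)
    mask = (torch.rand(1, 1, 128, 128) > 0.5).float()

    model = TinySegNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(100):                            # fit the single labeled example
        opt.zero_grad()
        loss = loss_fn(model(image), mask)
        loss.backward()
        opt.step()

    # Apply the trained network to a new, unlabeled image.
    new_image = torch.rand(1, 1, 128, 128)
    predicted_mask = torch.sigmoid(model(new_image)) > 0.5

In practice, Bellybutton users supply example images and their segmentations rather than writing training code; the sketch above only illustrates the general principle that a small network trained on a laptop from very little labeled data can produce per-pixel segmentation masks.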