Scientific Data (Jul 2024)

An UltraMNIST classification benchmark to train CNNs for very large images

  • Deepak K. Gupta,
  • Udbhav Bamba,
  • Abhishek Thakur,
  • Akash Gupta,
  • Rohit Agarwal,
  • Suraj Sharan,
  • Ertugrul Demir,
  • Krishna Agarwal,
  • Dilip K. Prasad

DOI
https://doi.org/10.1038/s41597-024-03587-4
Journal volume & issue
Vol. 11, no. 1
pp. 1–14

Abstract


Current convolutional neural networks (CNNs) are not designed for large scientific images with rich multi-scale features, such as those found in satellite imagery and microscopy. A new phase of CNN development, aimed specifically at large images, is awaited. However, the application-independent, high-quality, and challenging datasets needed for such development are still missing. We present the ‘UltraMNIST dataset’ and associated benchmarks for this new research problem of ‘training CNNs for large images’. The dataset is simple, representative of wide-ranging challenges in scientific data, and easily customizable for different levels of complexity, smallest and largest feature sizes, and image sizes. Two variants of the problem are discussed: a standard version that facilitates the development of novel CNN methods making effective use of the best available GPU resources, and a budget-aware version that promotes the development of methods that work under constrained GPU memory. Several baselines are presented and the effect of reduced resolution is studied. The presented benchmark dataset and baselines will hopefully trigger the development of new CNN methods for large scientific images.
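To make the benchmark setup concrete, the following is a minimal sketch of how an UltraMNIST-style sample could be composed: several small digit patches are placed at widely varying scales on one large canvas, and the class label is the sum of the digit values. This is an illustrative approximation, not the authors' generation code; random blobs stand in for real MNIST digits, and the canvas here is far smaller than the very large images the dataset targets.

```python
import numpy as np

def make_ultramnist_style_sample(rng, canvas_size=512, n_digits=4):
    """Compose an UltraMNIST-style sample (illustrative sketch).

    Places `n_digits` patches at widely varying scales on one large
    canvas. Random 28x28 blobs stand in for real MNIST digit images.
    """
    canvas = np.zeros((canvas_size, canvas_size), dtype=np.float32)
    digit_values = rng.integers(0, 10, size=n_digits)
    for _ in range(n_digits):
        # Stand-in for a 28x28 MNIST digit image.
        patch = rng.random((28, 28)).astype(np.float32)
        # Integer upscaling factor: features range from small to large.
        scale = int(rng.integers(1, canvas_size // 56))
        big = np.kron(patch, np.ones((scale, scale), dtype=np.float32))
        h, w = big.shape
        # Random placement; overlaps are merged by taking the maximum.
        y = int(rng.integers(0, canvas_size - h + 1))
        x = int(rng.integers(0, canvas_size - w + 1))
        canvas[y:y + h, x:x + w] = np.maximum(canvas[y:y + h, x:x + w], big)
    # Class label is the sum of the digit values (the published dataset
    # constrains this sum to a fixed range).
    label = int(digit_values.sum())
    return canvas, label

rng = np.random.default_rng(0)
image, label = make_ultramnist_style_sample(rng)
```

Varying `canvas_size` and the scale range is what makes the complexity, smallest/largest feature sizes, and image sizes customizable, as described in the abstract.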