Biological Imaging (Jan 2024)

Exploring self-supervised learning biases for microscopy image representation

  • Ihab Bendidi,
  • Adrien Bardes,
  • Ethan Cohen,
  • Alexis Lamiable,
  • Guillaume Bollot,
  • Auguste Genovesio

DOI
https://doi.org/10.1017/S2633903X2400014X
Journal volume & issue
Vol. 4

Abstract


Self-supervised representation learning (SSRL) in computer vision relies heavily on simple image transformations such as random rotations, crops, or illumination changes to learn meaningful and invariant features. Despite their acknowledged importance, the impact of transformation choice has not been comprehensively explored in the literature. Our study delves into this relationship, specifically focusing on microscopy imaging with subtle cell phenotype differences. We reveal that transformation design acts as a form of either unwanted or beneficial supervision, impacting feature clustering and representation relevance. Importantly, these effects vary based on the class labels of a supervised dataset. In microscopy images, transformation design significantly influences the representation, introducing imperceptible yet strong biases. We demonstrate that strategic transformation selection, based on desired feature invariance, drastically improves classification performance and representation quality, even with limited training samples.
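To make the notion of "transformation design as implicit supervision" concrete, below is a minimal illustrative sketch, not taken from the paper, assuming a standard PyTorch/torchvision SSL setup (e.g., SimCLR- or VICReg-style view generation). The two hypothetical pipelines encode different invariances: one discards illumination cues via intensity jitter, the other preserves them, which matters when staining intensity itself separates cell phenotypes.

```python
# Illustrative sketch only: two augmentation pipelines encoding different
# invariances for SSL view generation. Names and parameter values are
# assumptions for demonstration, not the paper's actual configuration.
import torchvision.transforms as T

# Pipeline A: invariance to geometry AND illumination.
# ColorJitter discards brightness/contrast cues from the representation.
geometry_and_intensity_invariant = T.Compose([
    T.RandomResizedCrop(224, scale=(0.5, 1.0)),
    T.RandomHorizontalFlip(),
    T.RandomVerticalFlip(),  # microscopy images have no canonical orientation
    T.ColorJitter(brightness=0.4, contrast=0.4),
    T.ToTensor(),
])

# Pipeline B: invariance to geometry only, preserving intensity information
# (relevant when staining intensity distinguishes subtle phenotypes).
geometry_invariant_only = T.Compose([
    T.RandomResizedCrop(224, scale=(0.5, 1.0)),
    T.RandomHorizontalFlip(),
    T.RandomVerticalFlip(),
    T.ToTensor(),
])

def make_views(image, transform):
    """Produce the two stochastic views fed to a joint-embedding SSL objective."""
    return transform(image), transform(image)
```

Choosing between pipelines like A and B is the kind of design decision the abstract argues biases the learned representation: features the transformations render invariant are effectively suppressed, whether or not that suppression is desirable for the downstream phenotype classes.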

Keywords