Scientific Reports (Jun 2023)

In-orbit demonstration of a re-trainable machine learning payload for processing optical imagery

  • Gonzalo Mateo-Garcia,
  • Josh Veitch-Michaelis,
  • Cormac Purcell,
  • Nicolas Longepe,
  • Simon Reid,
  • Alice Anlind,
  • Fredrik Bruhn,
  • James Parr,
  • Pierre Philippe Mathieu

DOI
https://doi.org/10.1038/s41598-023-34436-w
Journal volume & issue
Vol. 13, no. 1
pp. 1–14

Abstract


Cognitive cloud computing in space (3CS) describes a new frontier of space innovation powered by Artificial Intelligence, enabling an explosion of new applications in observing our planet and in deep space exploration. In this framework, machine learning (ML) payloads, isolated software capable of extracting high-level information from onboard sensors, are key to accomplishing this vision. In this work we demonstrate, in a satellite deployed in orbit, an ML payload called ‘WorldFloods’ that is able to send compressed flood maps derived from sensed images. In particular, we perform a set of experiments to: (1) compare different segmentation models on processing variables critical for onboard deployment, (2) show that we can produce, onboard, vectorised polygons delineating the detected flood water from a full Sentinel-2 tile, (3) retrain the model with a few images of the onboard sensor downlinked to Earth, and (4) demonstrate that this new model can be uplinked to the satellite and run on new images acquired by its camera. Overall, our work demonstrates that ML-based models deployed in orbit can be updated if new information is available, paving the way for agile integration of onboard and on-ground processing and “on the fly” continuous learning.
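The abstract does not spell out how the onboard vectorisation step works, but a minimal sketch of the idea, turning a binary flood mask from a segmentation model into compact GeoJSON polygons suitable for a small downlink, could look like the following. It assumes a Python rasterio/shapely stack; the helper name `flood_mask_to_polygons`, the simplification tolerance, and the toy 10 m transform are illustrative assumptions, not details taken from the paper.

```python
import json
import numpy as np
import rasterio.features
from rasterio.transform import from_origin
from shapely.geometry import shape, mapping


def flood_mask_to_polygons(mask, transform, simplify_tol=0.0):
    """Vectorise a binary flood mask into simplified GeoJSON polygons.

    mask: 2-D uint8 array where 1 = flood water, 0 = background
          (e.g. an argmax over a segmentation model's per-pixel output).
    transform: affine transform mapping pixel indices to map coordinates.
    simplify_tol: Douglas-Peucker tolerance (in CRS units) applied before
                  downlink to shrink the payload; 0 keeps full detail.
    """
    features = []
    for geom, value in rasterio.features.shapes(mask, transform=transform):
        if value != 1:  # keep only the flood-water class
            continue
        poly = shape(geom)
        if simplify_tol > 0:
            poly = poly.simplify(simplify_tol, preserve_topology=True)
        features.append({
            "type": "Feature",
            "geometry": mapping(poly),
            "properties": {"class": "flood_water"},
        })
    return {"type": "FeatureCollection", "features": features}


if __name__ == "__main__":
    # Toy 10 m resolution mask standing in for a model prediction on a tile chip.
    mask = np.zeros((256, 256), dtype=np.uint8)
    mask[60:180, 40:200] = 1
    transform = from_origin(west=300000, north=5000000, xsize=10, ysize=10)
    geojson = flood_mask_to_polygons(mask, transform, simplify_tol=20)
    print(json.dumps(geojson)[:200], "...")
```

The simplification tolerance is the main knob in such a sketch: larger values cut the number of vertices (and hence the downlink size) at the cost of boundary fidelity in the delineated flood extent.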