Journal of Imaging (Mar 2024)

Multi-Modal Convolutional Parameterisation Network for Guided Image Inverse Problems

  • Mikolaj Czerkawski,
  • Priti Upadhyay,
  • Christopher Davison,
  • Robert Atkinson,
  • Craig Michie,
  • Ivan Andonovic,
  • Malcolm Macdonald,
  • Javier Cardona,
  • Christos Tachtatzis

DOI: https://doi.org/10.3390/jimaging10030069
Journal volume & issue: Vol. 10, no. 3, p. 69

Abstract


Several image inverse tasks, such as inpainting and super-resolution, can be solved using deep internal learning, a paradigm in which a deep neural network finds a solution by learning from the sample itself rather than from a dataset. For example, Deep Image Prior is a technique that fits a convolutional neural network to reproduce the known parts of the image (such as the non-inpainted regions or a low-resolution version of the image). However, this approach is not well suited to samples composed of multiple modalities. In some domains, such as satellite image processing, accommodating multi-modal representations could be beneficial or even essential. In this work, the Multi-Modal Convolutional Parameterisation Network (MCPN) is proposed, in which a convolutional neural network approximates the information shared between multiple modalities by combining a core shared network with modality-specific head networks. The results demonstrate that these approaches can significantly outperform the single-mode application of a convolutional parameterisation network on the guided image inverse problems of inpainting and super-resolution.
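To make the architecture described above concrete, the following is a minimal sketch of the core-plus-heads idea, not the authors' reference implementation: a shared convolutional body maps a fixed random input to a common representation, and small modality-specific heads decode it into each modality, with the network fitted only to the known pixels of each modality in a Deep-Image-Prior-style internal-learning loop. All module names, channel widths, and hyper-parameters below (e.g. `CoreNetwork`, `ModalityHead`, the loss weighting) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class CoreNetwork(nn.Module):
    """Shared convolutional body, analogous to a Deep Image Prior generator."""
    def __init__(self, in_ch=32, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
    def forward(self, z):
        return self.body(z)

class ModalityHead(nn.Module):
    """Small head that decodes the shared features into one modality."""
    def __init__(self, hidden=64, out_ch=3):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, out_ch, 1), nn.Sigmoid(),
        )
    def forward(self, features):
        return self.head(features)

class MCPNSketch(nn.Module):
    """Core shared network plus one head per modality (e.g. optical and SAR bands)."""
    def __init__(self, modal_channels=(3, 1), in_ch=32, hidden=64):
        super().__init__()
        self.core = CoreNetwork(in_ch, hidden)
        self.heads = nn.ModuleList(ModalityHead(hidden, c) for c in modal_channels)
    def forward(self, z):
        shared = self.core(z)
        return [head(shared) for head in self.heads]

def fit(targets, masks, steps=2000, lr=1e-3):
    """Internal learning: fit only to the known pixels of each modality,
    so missing regions are filled by the network's inductive bias rather
    than by a training dataset."""
    z = torch.randn(1, 32, *targets[0].shape[-2:])   # fixed random input
    net = MCPNSketch(modal_channels=[t.shape[1] for t in targets])
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        outputs = net(z)
        loss = sum(((o - t) * m).pow(2).mean()       # loss on known pixels only
                   for o, t, m in zip(outputs, targets, masks))
        loss.backward()
        opt.step()
    return net(z)                                     # reconstructed modalities
```

In this sketch, guidance between modalities arises because all heads share the same core features, so well-observed modalities constrain the reconstruction of modalities with missing or low-resolution data; the actual network design and training schedule used in the paper may differ.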

Keywords