Advanced Science (Nov 2024)

Semantic–Electromagnetic Inversion With Pretrained Multimodal Generative Model

  • Yanjin Chen,
  • Hongrui Zhang,
  • Jie Ma,
  • Tie Jun Cui,
  • Philipp del Hougne,
  • Lianlin Li

DOI
https://doi.org/10.1002/advs.202406793
Journal volume & issue
Vol. 11, no. 42

Abstract


Across diverse domains of science and technology, electromagnetic (EM) inversion problems benefit from the ability to account for multimodal prior information to regularize their inherent ill‐posedness. Indeed, besides priors that are formulated mathematically or learned from quantitative data, valuable prior information may be available in the form of text or images. Besides handling semantic multimodality, it is furthermore important to minimize the cost of adapting to a new physical measurement operator and to limit the requirements for costly labeled data. Here, these challenges are tackled with a frugal and multimodal semantic–EM inversion technique. The key ingredient is a multimodal generator of reconstruction results that can be pretrained, being agnostic to the physical measurement operator. The generator is fed by a multimodal foundation model encoding the multimodal semantic prior and a physical adapter encoding the measured data. For a new physical setting, only the lightweight physical adapter is retrained. The authors' architecture also enables a flexible iterative step‐by‐step solution to the inverse problem in which each step can be semantically controlled. The feasibility and benefits of this methodology are demonstrated for three EM inverse problems: a canonical two‐dimensional inverse‐scattering problem studied numerically, as well as three‐dimensional and four‐dimensional compressive microwave meta‐imaging experiments.
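The adaptation strategy described in the abstract — a pretrained, operator-agnostic generator that stays frozen while only a lightweight physical adapter is retrained for each new measurement setup — can be illustrated with a minimal training-loop sketch. This is not the authors' implementation; the module names, dimensions, and data below are placeholder assumptions chosen only to show the frozen-generator / trainable-adapter pattern.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the paper's components (names/sizes are assumptions):
# Generator  -- pretrained reconstruction network, agnostic to the measurement
#               operator, kept frozen during adaptation.
# PhysicalAdapter -- lightweight module that encodes measured data into the
#               generator's input space; the only part retrained per setup.
class Generator(nn.Module):
    def __init__(self, latent_dim=32, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, out_dim)
        )

    def forward(self, z):
        return self.net(z)

class PhysicalAdapter(nn.Module):
    def __init__(self, meas_dim=16, latent_dim=32):
        super().__init__()
        self.net = nn.Linear(meas_dim, latent_dim)

    def forward(self, y):
        return self.net(y)

generator = Generator()
adapter = PhysicalAdapter()

# Freeze the pretrained generator: adapting to a new physical measurement
# operator updates only the adapter's (few) parameters.
for p in generator.parameters():
    p.requires_grad = False

opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)
y = torch.randn(8, 16)        # toy stand-in for measured data
target = torch.randn(8, 64)   # toy stand-in for reconstruction targets

for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(generator(adapter(y)), target)
    loss.backward()
    opt.step()
```

Because gradients never reach the generator, the expensive pretrained component is reused across physical settings, which is the "frugal" aspect the abstract emphasizes.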

Keywords