Scientific Reports (Nov 2024)
Composite Bayesian optimization in function spaces using NEON—Neural Epistemic Operator Networks
Abstract
Operator learning is a rising field of scientific computing where inputs or outputs of a machine learning model are functions defined in infinite-dimensional spaces. In this paper, we introduce NEON (Neural Epistemic Operator Networks), an architecture for generating predictions with uncertainty using a single operator network backbone, which has orders of magnitude fewer trainable parameters than deep ensembles of comparable performance. We showcase the utility of this method for sequential decision-making by examining the problem of composite Bayesian optimization (BO), where we aim to optimize a function $$f = g \circ h$$, where $$h: X \rightarrow C(\mathscr{Y}, \mathbb{R}^{d_s})$$ is an unknown map which outputs elements of a function space, and $$g: C(\mathscr{Y}, \mathbb{R}^{d_s}) \rightarrow \mathbb{R}$$ is a known and cheap-to-compute functional. By comparing our approach to other state-of-the-art methods on toy and real-world scenarios, we demonstrate that NEON achieves state-of-the-art performance while requiring orders of magnitude fewer trainable parameters.
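To make the composite structure concrete, the following is a minimal sketch of a toy instance of the objective $$f = g \circ h$$. All specifics here (the choice of $$h$$, the target profile, the grid) are illustrative assumptions, not taken from the paper: $$h$$ maps a scalar design variable to a function on a discretized domain $$\mathscr{Y}$$, and $$g$$ is a known, cheap functional of that function.

```python
import numpy as np

# Discretization of the domain Y on which the output functions live.
y_grid = np.linspace(0.0, 1.0, 101)
dy = y_grid[1] - y_grid[0]

def h(x):
    """Unknown, expensive map X -> C(Y, R); here a toy stand-in that
    returns the output function by its values on the fixed grid."""
    return np.sin(2.0 * np.pi * x * y_grid) * np.exp(-x * y_grid)

def g(s):
    """Known, cheap-to-compute functional: squared L2 distance of the
    function s to a fixed target profile, via a Riemann sum."""
    target = np.exp(-y_grid)
    return float(np.sum((s - target) ** 2) * dy)

def f(x):
    """Composite objective f = g(h(x)); only h is treated as a
    black box by the surrogate model in composite BO."""
    return g(h(x))

print(f(0.5) > 0.0)
```

In composite BO the surrogate (here, a NEON model) is fit to the intermediate function-valued outputs of `h`, and `g` is applied analytically to the surrogate's predictions rather than modeled as part of the black box.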