Scientific Reports (Mar 2023)

Exact Gaussian processes for massive datasets via non-stationary sparsity-discovering kernels

  • Marcus M. Noack,
  • Harinarayan Krishnan,
  • Mark D. Risser,
  • Kristofer G. Reyes

DOI
https://doi.org/10.1038/s41598-023-30062-8
Journal volume & issue
Vol. 13, no. 1
pp. 1–15

Abstract

A Gaussian Process (GP) is a prominent mathematical framework for stochastic function approximation in science and engineering applications. Its success is largely attributed to the GP’s analytical tractability, robustness, and natural inclusion of uncertainty quantification. Unfortunately, the use of exact GPs is prohibitively expensive for large datasets due to their unfavorable numerical complexity of $$O(N^3)$$ in computation and $$O(N^2)$$ in storage. All existing methods addressing this issue utilize some form of approximation, usually considering subsets of the full dataset or finding representative pseudo-points that render the covariance matrix well-structured and sparse. These approximate methods can lead to inaccuracies in function approximations and often limit the user’s flexibility in designing expressive kernels. Instead of inducing sparsity via data-point geometry and structure, we propose to take advantage of naturally occurring sparsity by allowing the kernel to discover, rather than induce, sparse structure. The premise of this paper is that the datasets and physical processes modeled by GPs often exhibit natural or implicit sparsities, but commonly used kernels do not allow us to exploit such sparsity. The core concept of exact and, at the same time, sparse GPs relies on kernel definitions that provide enough flexibility to learn and encode not only non-zero but also zero covariances. This principle of ultra-flexible, compactly-supported, and non-stationary kernels, combined with high-performance computing (HPC) and constrained optimization, lets us scale exact GPs well beyond 5 million data points.
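
To make the core idea concrete, below is a minimal sketch (NumPy/SciPy, not the authors' implementation) of how a compactly-supported kernel yields an exactly sparse covariance matrix: a standard Wendland base function is zero beyond an input-dependent support radius, so those covariances are exactly zero and never need to be stored. The names `wendland_c2`, `nonstationary_sparse_K`, and `radius_fn` are illustrative assumptions; the toy constant radius stands in for the flexible, learned radius the abstract describes.

```python
# A minimal sketch, NOT the authors' implementation: a compactly-supported
# kernel sets covariances to exactly zero beyond a support radius, so the
# covariance matrix of an exact GP can be assembled and stored sparsely.
import numpy as np
from scipy import sparse

def wendland_c2(r):
    # Standard compactly-supported Wendland C^2 function: zero for r >= 1.
    r = np.clip(r, 0.0, 1.0)
    return (1.0 - r) ** 4 * (4.0 * r + 1.0)

def nonstationary_sparse_K(X, radius_fn, signal_var=1.0):
    # Assemble K[i, j] = signal_var * phi(||x_i - x_j|| / rho(x_i, x_j)).
    # radius_fn must be symmetric in its arguments for K to be symmetric;
    # guaranteeing positive semi-definiteness for general non-stationary
    # radii requires the constructions in the paper and is not shown here.
    N = len(X)
    rows, cols, vals = [], [], []
    for i in range(N):
        for j in range(i, N):
            rho = radius_fn(X[i], X[j])          # input-dependent support radius
            r = np.linalg.norm(X[i] - X[j]) / rho
            if r < 1.0:                          # zero covariances are never stored
                v = signal_var * wendland_c2(r)
                rows.append(i); cols.append(j); vals.append(v)
                if i != j:
                    rows.append(j); cols.append(i); vals.append(v)
    return sparse.csr_matrix((vals, (rows, cols)), shape=(N, N))

# Toy usage: a constant radius (which would make the kernel stationary);
# in the paper the analogous quantity is flexible and learned from data.
rng = np.random.default_rng(0)
X = rng.random((500, 2))
K = nonstationary_sparse_K(X, radius_fn=lambda a, b: 0.1)
print(f"stored non-zeros: {K.nnz} of {500 * 500} entries")
```

Note that this sketch only illustrates where the sparsity comes from; the paper's contribution lies in making such support radii flexible and learnable, under constrained optimization that preserves a valid covariance, and in the HPC machinery needed to exploit the resulting sparsity at the scale of millions of points.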