Physical Review Research (Sep 2022)
Selecting simple, transferable models with the supremum principle
Abstract
We consider how mathematical models enable predictions for conditions that are qualitatively different from the training data. We propose techniques based on information topology to find models that can apply their learning in regimes for which there is no data. The first step is to use the manifold boundary approximation method to construct simple, reduced models of target phenomena in a data-driven way. We consider the set of all such reduced models and use the topological relationships among them to reason about model selection for new, unobserved phenomena. Given minimal models for several target behaviors, we introduce the supremum principle as a criterion for selecting a new, transferable model. The supremal model, i.e., the least upper bound, is the simplest model that reduces to each of the target behaviors. We illustrate how to discover supremal models with several examples; in each case, the supremal model unifies causal mechanisms to transfer successfully to new target domains. We use these examples to motivate a general algorithm that has formal connections to theories of analogical reasoning in cognitive psychology.
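The least-upper-bound idea in the abstract can be made concrete with a toy sketch (this is an illustrative assumption, not the paper's algorithm): suppose each reduced model is identified by the set of causal mechanisms it retains, so that model A reduces to model B exactly when A retains every mechanism of B. Under that superset order, the supremal model for several targets is the union of their mechanism sets — the simplest model that reduces to each target. All mechanism names below are hypothetical.

```python
def reduces_to(model_a, model_b):
    """Model A reduces to model B if A retains every mechanism B retains.

    Models are represented as sets of mechanism labels, so the
    reduction order is simply the superset relation.
    """
    return model_b <= model_a


def supremal_model(target_models):
    """Least upper bound of the targets in the reduction order.

    In this set lattice the supremum is the union of the targets'
    mechanisms: any model that reduces to every target must contain
    each target's mechanisms, and the union is the smallest such set.
    """
    sup = set()
    for model in target_models:
        sup |= model
    return sup


# Two hypothetical minimal models for different target behaviors.
model_oscillation = {"feedback", "delay"}
model_adaptation = {"feedback", "degradation"}

sup = supremal_model([model_oscillation, model_adaptation])
assert reduces_to(sup, model_oscillation)
assert reduces_to(sup, model_adaptation)
print(sorted(sup))  # mechanisms of the unified, transferable model
```

In the paper's actual setting the reduced models come from the manifold boundary approximation method and the order is given by topological relationships among model manifolds; the set-lattice above only mimics the ordering logic, where "simplest upper bound" means no mechanism is included unless some target requires it.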