Frontiers in Neurorobotics (Nov 2012)

What is value – accumulated reward or evidence?

  • Karl Friston,
  • Rick Adams,
  • Read Montague

DOI
https://doi.org/10.3389/fnbot.2012.00011
Journal volume & issue
Vol. 6

Abstract

Why are you reading this abstract? In some sense, your answer will cast the exercise as valuable – but what is value? In what follows, we suggest that value is evidence or, more exactly, log Bayesian evidence. This implies that a sufficient explanation for valuable behaviour is the accumulation of evidence for internal models of our world. This contrasts with normative models of optimal control and reinforcement learning, which assume the existence of a value function that explains behaviour, where (somewhat tautologically) behaviour maximises value. In this paper, we consider an alternative formulation – active inference – that replaces policies in normative models with prior beliefs about (future) states agents should occupy. This enables optimal behaviour to be cast purely in terms of inference: where agents sample their sensorium to maximise the evidence for their generative model of hidden states in the world – and minimise their uncertainty about those states. Crucially, this formulation resolves the tautology inherent in normative models and allows one to consider how prior beliefs are themselves optimised in a hierarchical setting. We illustrate these points by showing that any optimal policy can be specified with prior beliefs in the context of Bayesian inference. We then show how these prior beliefs are themselves prescribed by an imperative to minimise uncertainty. This formulation explains the saccadic eye movements required to read this text and defines the value of the visual sensations you are soliciting.
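The idea sketched in the abstract can be made concrete with a toy example (not the authors' implementation): value is scored as log model evidence, and the next saccade is chosen as the fixation location expected to leave the least posterior uncertainty about a hidden state. The discrete state space, the candidate locations, and the binary outcomes below are illustrative assumptions, not details from the paper.

```python
# Minimal sketch, assuming a discrete hidden state, a few candidate fixation
# locations, and a binary outcome at each location. All specifics are made up
# for illustration; only the logic (value = log evidence, act to minimise
# expected uncertainty) follows the abstract.
import numpy as np

rng = np.random.default_rng(0)

K = 4           # hypothetical hidden states (e.g. which word is on the page)
LOCATIONS = 3   # candidate fixation locations
# likelihood[l, s, o]: probability of outcome o when fixating location l in state s
likelihood = rng.dirichlet(np.ones(2), size=(LOCATIONS, K))

prior = np.full(K, 1.0 / K)   # current beliefs about the hidden state


def log_evidence(observation, location, beliefs):
    """Value of an observation = log marginal likelihood under the agent's model."""
    return np.log(likelihood[location, :, observation] @ beliefs)


def posterior(observation, location, beliefs):
    """Bayesian belief update after fixating `location` and seeing `observation`."""
    unnormalised = likelihood[location, :, observation] * beliefs
    return unnormalised / unnormalised.sum()


def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))


def expected_posterior_entropy(location, beliefs):
    """Uncertainty the agent expects to be left with after sampling `location`."""
    h = 0.0
    for o in range(likelihood.shape[2]):
        p_o = likelihood[location, :, o] @ beliefs   # predictive probability of outcome o
        h += p_o * entropy(posterior(o, location, beliefs))
    return h


# Active sampling: fixate the location expected to leave the least uncertainty,
# i.e. whose observations are expected to carry the most evidence about the state.
best = min(range(LOCATIONS), key=lambda l: expected_posterior_entropy(l, prior))
print("fixate location", best)
```

In this sketch the "prior beliefs about states the agent should occupy" reduce to a preference for informative fixations; no separate value function is needed, since choosing the most informative location is the same as maximising the evidence the agent expects to accumulate for its model.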

Keywords