IEEE Access (Jan 2018)

On the Practical Art of State Definitions for Markov Decision Process Construction

  • William T. Scherer,
  • Stephen Adams,
  • Peter A. Beling

DOI
https://doi.org/10.1109/ACCESS.2018.2819940
Journal volume & issue
Vol. 6
pp. 21115–21128

Abstract


Many problems faced by decision makers today involve the management of large-scale, complex systems that can be modeled as state-based control problems, specifically discrete Markov decision processes (MDPs). Typical examples include transportation systems, defense systems, healthcare networks, financial organizations, and general infrastructure problems. In all of these problems, decision makers have difficulty forecasting the future state of their system and capturing the dynamics of the states over time. In this paper, we discuss, via numerous examples, practical experiences in trying to build such models. Much of the literature addresses theoretical issues of solution convergence and algorithm performance; unfortunately, much of this research does not help with the practical business of building an actual MDP model. Thus, numerous books begin with a statement of the form: "given the state space S...". A critical question for the practitioner is how to create this state space "S." We focus on this first step in the MDP modeling process, an often neglected and difficult one, and we discuss the practical implications and issues associated with the state definition, illustrating them with numerous examples. This paper is not meant to be a survey of "state-based" or MDP applications, but rather an overview of experiences building many of these models in diverse applications.
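
As a minimal illustration of the modeling step the abstract emphasizes, the sketch below (not from the paper; the machine-maintenance states, transition probabilities, and rewards are all assumed purely for illustration) shows how committing to a state definition S fixes everything that follows: once the analyst discretizes the system into states, the transitions, rewards, and a standard solver such as value iteration can be written down directly.

    # Illustrative sketch only: the three-state machine-maintenance model and
    # all numbers below are assumptions, not taken from the article.

    # One practical choice of S: discretize machine condition into three states.
    S = ["good", "worn", "failed"]
    A = ["operate", "repair"]

    # P[a][s][t]: probability of moving from state s to state t under action a.
    P = {
        "operate": {
            "good":   {"good": 0.8, "worn": 0.2, "failed": 0.0},
            "worn":   {"good": 0.0, "worn": 0.6, "failed": 0.4},
            "failed": {"good": 0.0, "worn": 0.0, "failed": 1.0},
        },
        "repair": {
            "good":   {"good": 1.0, "worn": 0.0, "failed": 0.0},
            "worn":   {"good": 1.0, "worn": 0.0, "failed": 0.0},
            "failed": {"good": 1.0, "worn": 0.0, "failed": 0.0},
        },
    }

    # R[a][s]: immediate reward for taking action a in state s.
    R = {
        "operate": {"good": 10.0, "worn": 5.0, "failed": -50.0},
        "repair":  {"good": -20.0, "worn": -20.0, "failed": -20.0},
    }

    # Standard value iteration; converges for discount factor gamma < 1.
    gamma = 0.9
    V = {s: 0.0 for s in S}
    for _ in range(500):
        V = {
            s: max(R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in S) for a in A)
            for s in S
        }

    # Greedy policy with respect to the converged value function.
    policy = {
        s: max(A, key=lambda a: R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in S))
        for s in S
    }
    print(policy)  # e.g. {'good': 'operate', 'worn': 'repair', 'failed': 'repair'}

Note that the solver itself is routine; the hard modeling judgment, and the paper's subject, is the first line defining S. A coarser or finer discretization of "machine condition" would change the transition structure, the data needed to estimate it, and the policies that can even be expressed.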

Keywords