IEEE Access (Jan 2018)
On the Practical Art of State Definitions for Markov Decision Process Construction
Abstract
Many problems faced by decision makers today involve the management of large-scale, complex systems that can be modeled as state-based control problems, specifically discrete Markov decision processes (MDPs). Typical examples include transportation systems, defense systems, healthcare networks, financial organizations, and general infrastructure problems. In all of these problems, decision makers have difficulty forecasting the future state of their systems and capturing the dynamics of those states over time. In this paper, we discuss practical experiences in trying to build such models. Much of the literature addresses theoretical issues of solution convergence and algorithm performance; unfortunately, much of this research does not help with the practical business of building an actual MDP model. Thus, numerous books begin with a statement of the form: "given the state space S...". A critical question for the practitioner is how to create this state space S. We focus on this first step of the MDP modeling process, an often neglected and difficult one, and we discuss the practical implications and issues associated with the state definition, illustrating them with numerous examples. This paper is not meant to be a survey of "state-based" or MDP applications, but rather an overview of experiences building many such models across diverse applications.
Keywords