Austrian Journal of Statistics (Apr 2016)
Dependent Samples in Empirical Estimation of Stochastic Programming Problems
Abstract
Stochastic optimization models are built under the assumption that the underlying probability measure is completely known. In practice, however, this is rarely the case: empirical approximations or estimates are used instead. The question then arises whether such inaccuracy perturbs the resulting solutions and optimal values. We measure the "distance" between probability distributions by suitable metrics on the space of probability measures. It is known that, under certain assumptions, stability of the stochastic optimization model is assured with respect to the selected metric and, moreover, that the empirical estimate of the unknown distribution has suitable convergence properties, including a sufficient rate of convergence. For the Kolmogorov metric, the convergence rate is known when the random sample is independent and the probability measure is "continuous" (i.e., has a continuous distribution function). For the Wasserstein (Mallows) metric with a uniform distribution and an independent random sample, the rate of convergence coincides with that of the classical uniform empirical process, and its limiting distribution is known; for other distributions, other metrics, and the multidimensional case, the convergence properties and the rate of convergence have to be estimated, e.g., by simulation. In this contribution we present numerical results for independent and dependent samples and give a backward interpretation of the results in terms of stochastic programming.
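The simulation approach mentioned above can be illustrated for the one-dimensional i.i.d. case. The sketch below (not the authors' code; function names and the midpoint quadrature for the Wasserstein integral are our own choices) draws independent uniform samples, computes the Kolmogorov distance sup|F_n − F| and an approximation of the Wasserstein (Mallows) distance W1 between the empirical and the true distribution, and rescales both by √n; for i.i.d. uniform samples both scaled quantities are expected to stabilize as n grows, reflecting the n^(−1/2) rate.

```python
import numpy as np

def kolmogorov_distance(sample):
    """Sup-distance between the empirical CDF of `sample` and the U(0,1) CDF."""
    x = np.sort(sample)
    n = len(x)
    i = np.arange(1, n + 1)
    # The empirical CDF jumps from (i-1)/n to i/n at each sorted point x_(i),
    # while the true CDF there equals x_(i); check both sides of every jump.
    return max(np.max(i / n - x), np.max(x - (i - 1) / n))

def wasserstein_distance(sample):
    """Approximate W1 distance between the empirical measure and U(0,1),
    via the L1 distance between quantile functions on a midpoint grid."""
    x = np.sort(sample)
    n = len(x)
    # Compare the i-th order statistic with the (i-0.5)/n quantile of U(0,1);
    # this midpoint rule approximates the integral of |F_n^{-1}(u) - u|.
    grid = (np.arange(n) + 0.5) / n
    return np.mean(np.abs(x - grid))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reps = 200  # Monte Carlo repetitions per sample size
    for n in (100, 1000, 10000):
        dk = np.mean([kolmogorov_distance(rng.uniform(size=n)) for _ in range(reps)])
        dw = np.mean([wasserstein_distance(rng.uniform(size=n)) for _ in range(reps)])
        print(f"n={n:6d}  sqrt(n)*E[D_K]={np.sqrt(n) * dk:.3f}"
              f"  sqrt(n)*E[W_1]={np.sqrt(n) * dw:.3f}")
```

For dependent samples (e.g., generated by a mixing process), the same script can be rerun with a correlated generator in place of `rng.uniform`; whether and how the scaled distances still stabilize is precisely the kind of question the paper addresses numerically.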