Matematika i Matematičeskoe Modelirovanie (Jul 2017)

Comparison of Classical and Robust Estimates of Threshold Auto-regression Parameters

  • V. B. Goryainov

DOI
https://doi.org/10.24108/mathm.0317.0000072
Journal volume & issue
Vol. 0, no. 3
pp. 91 – 104

Abstract


The study object is the first-order threshold autoregression model with a single threshold located at zero. The model describes a stochastic time series with discrete time by means of a piecewise linear equation composed of two classical linear first-order autoregressive equations. One of these equations is used to compute the current value of the series; the control variable that selects between the two equations is the sign of the previous value of the same series.

The first-order threshold autoregressive model with a single threshold depends on two real parameters, which coincide with the coefficients of the piecewise linear threshold equation. These parameters are assumed to be unknown. The paper studies the least squares estimate, the least moduli (least absolute deviations) estimate, and M-estimates of these parameters. The aim of the paper is a comparative study of the accuracy of these estimates for the main probability distributions of the innovation (updating) process of the threshold autoregressive equation. These distributions are the normal, contaminated normal, logistic, and double-exponential (Laplace) distributions, Student's distribution with various numbers of degrees of freedom, and the Cauchy distribution.

As a measure of the accuracy of each estimate, its variance was chosen, since it measures the scatter of the estimate around the estimated parameter. Of two estimates, the one with the smaller variance was considered the better. The variance was estimated by computer simulation. The least moduli estimate was computed by an iteratively reweighted least squares method, and the M-estimates by the deformable polyhedron method (the Nelder-Mead method). The least squares estimate was calculated from an explicit analytic expression.

It turned out that the least squares estimate is the best only when the innovation process is normally distributed. For the logistic distribution and for Student's distribution with a large number of degrees of freedom, the M-estimate with the Huber rho-function outperforms the least squares estimate.

For the Laplace distribution, the least squares estimate is the worst and the least moduli estimate is the best among all the estimates.

For the Cauchy distribution, the least squares estimate has incomparably low efficiency relative to the remaining estimates.

As the number of degrees of freedom of Student's distribution decreases, the least squares estimate first loses only to the M-estimate with the Huber rho-function, then to both M-estimates, and finally to the least moduli estimate as well.

If the innovation process has a contaminated normal distribution, the M-estimate is slightly inferior to the least squares estimate only when there is no contamination at all. As the share and level of contamination increase, the relative efficiency of the M-estimate with respect to the least squares estimate grows and exceeds unity for contamination levels typical in practice.
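The following is a minimal simulation sketch, not the author's code, illustrating the kind of comparison described in the abstract: a TAR(1) series with a zero threshold is simulated, and the least squares estimate (explicit formula), the least moduli estimate (iteratively reweighted least squares), and a Huber M-estimate (Nelder-Mead) are compared by their empirical variances. The parameter values, sample size, number of replications, innovation distributions, and the Huber constant are hypothetical choices made only for illustration.

```python
# Sketch of the comparison of TAR(1) parameter estimates (illustrative assumptions).
# Model: X_t = a1 * X_{t-1} + e_t if X_{t-1} <= 0, else X_t = a2 * X_{t-1} + e_t.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate_tar1(a1, a2, n, innov):
    """Simulate a first-order threshold autoregression with threshold at zero."""
    x = np.zeros(n + 1)
    e = innov(n + 1)
    for t in range(1, n + 1):
        coef = a1 if x[t - 1] <= 0 else a2
        x[t] = coef * x[t - 1] + e[t]
    return x

def regimes(x):
    """Previous values, current values, and the regime indicator sign(X_{t-1})."""
    xp, xc = x[:-1], x[1:]
    return xp, xc, xp <= 0

def ls_estimate(x):
    """Least squares estimate: explicit formula, computed regime by regime."""
    xp, xc, neg = regimes(x)
    a1 = np.sum(xc[neg] * xp[neg]) / np.sum(xp[neg] ** 2)
    a2 = np.sum(xc[~neg] * xp[~neg]) / np.sum(xp[~neg] ** 2)
    return np.array([a1, a2])

def lad_estimate(x, iters=50, eps=1e-8):
    """Least moduli estimate via iteratively reweighted least squares."""
    xp, xc, neg = regimes(x)
    a = ls_estimate(x)                          # start from the LS estimate
    for _ in range(iters):
        res = xc - np.where(neg, a[0], a[1]) * xp
        w = 1.0 / np.maximum(np.abs(res), eps)  # weights 1/|residual|
        a1 = np.sum(w[neg] * xc[neg] * xp[neg]) / np.sum(w[neg] * xp[neg] ** 2)
        a2 = np.sum(w[~neg] * xc[~neg] * xp[~neg]) / np.sum(w[~neg] * xp[~neg] ** 2)
        a = np.array([a1, a2])
    return a

def huber_m_estimate(x, k=1.345):
    """M-estimate with the Huber rho-function, minimized by Nelder-Mead."""
    xp, xc, neg = regimes(x)
    def rho(u):
        return np.where(np.abs(u) <= k, 0.5 * u ** 2, k * np.abs(u) - 0.5 * k ** 2)
    def objective(a):
        res = xc - np.where(neg, a[0], a[1]) * xp
        return np.sum(rho(res))
    return minimize(objective, ls_estimate(x), method="Nelder-Mead").x

# Monte Carlo comparison of the empirical variances of the estimates.
a_true, n, reps = (0.5, -0.5), 300, 200
innovations = {
    "normal":     lambda m: rng.standard_normal(m),
    "laplace":    lambda m: rng.laplace(size=m),
    "student_t3": lambda m: rng.standard_t(3, size=m),
}
for name, innov in innovations.items():
    results = {"LS": [], "LAD": [], "Huber": []}
    for _ in range(reps):
        x = simulate_tar1(*a_true, n, innov)
        results["LS"].append(ls_estimate(x))
        results["LAD"].append(lad_estimate(x))
        results["Huber"].append(huber_m_estimate(x))
    for method, vals in results.items():
        var = np.var(np.array(vals), axis=0)
        print(f"{name:10s} {method:6s} var(a1)={var[0]:.5f} var(a2)={var[1]:.5f}")
```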

Keywords