Computation (Nov 2022)

Robust Variable Selection and Regularization in Quantile Regression Based on Adaptive-LASSO and Adaptive E-NET

  • Innocent Mudhombo,
  • Edmore Ranganai

DOI
https://doi.org/10.3390/computation10110203
Journal volume & issue
Vol. 10, no. 11
p. 203

Abstract


Although variable selection and regularization procedures have been extensively considered in the literature for the quantile regression (QR) scenario via penalization, many such procedures fail to deal simultaneously with data aberrations in the design space, namely, high leverage points (X-space outliers) and collinearity challenges. Some high leverage points, referred to as collinearity-influential observations, tend to adversely alter the eigenstructure of the design matrix by inducing or masking collinearity. Therefore, the literature recommends that the problems of collinearity and high leverage points be dealt with simultaneously. In this article, we suggest adaptive LASSO and adaptive E-NET penalized QR (QR-ALASSO and QR-AE-NET) procedures, in which the adaptive weights are based on a QR estimator, as remedies. We extend this methodology to the penalized weighted QR versions, i.e., to the WQR-LASSO and WQR-E-NET procedures we had suggested earlier. In the literature, adaptive weights are based on the RIDGE regression (RR) parameter estimator. Although the use of this estimator may be plausible for the ℓ1 estimator (QR at τ=0.5) under a symmetrical distribution, it may not be so at extreme quantile levels. Therefore, we use a QR-based estimator to derive the adaptive weights. We carried out a comparative study of QR-LASSO, QR-E-NET, and the procedures we suggest here, viz., QR-ALASSO, QR-AE-NET, and their weighted penalized counterparts (WQR-ALASSO and WQR-AE-NET). The simulation study results show that QR-ALASSO, QR-AE-NET, WQR-ALASSO and WQR-AE-NET generally outperform their non-adaptive counterparts. At predictor matrices with collinearity-inducing points under normality, QR-ALASSO and QR-AE-NET outperform the non-adaptive procedures in the unweighted scenarios as follows: in all 16 cases (100%) with respect to correctly selected (shrunk) zero coefficients; in 88% of cases with respect to correctly fitted models; and in 81% of cases with respect to prediction. In the weighted penalized WQR scenarios, WQR-ALASSO and WQR-AE-NET outperform their non-adaptive versions as follows: 75% of the time with respect to both correctly fitted models and correctly shrunk zero coefficients, and 63% of the time with respect to prediction. At predictor matrices with collinearity-masking points under normality, QR-ALASSO and QR-AE-NET, respectively, outperform the non-adaptive procedures in the unweighted scenarios as follows: in prediction, 100% and 88% of the time; with respect to correctly fitted models, 100% and 50% of the time (being equal in the remaining 50%); and with respect to correctly shrunk zero coefficients, 100% of the time. In the weighted scenario, WQR-ALASSO and WQR-AE-NET outperform their respective non-adaptive versions as follows: with respect to prediction, both 63% of the time; with respect to correctly fitted models, 88% of the time; and with respect to correctly shrunk zero coefficients, 100% of the time. At predictor matrices with collinearity-inducing points under the t-distribution, the QR-ALASSO and QR-AE-NET procedures outperform their respective non-adaptive procedures in the unweighted scenarios as follows: in prediction, 100% and 75% of the time; with respect to correctly fitted models, 88% of the time each; and with respect to correctly shrunk zero coefficients, 88% and 100% of the time.
Additionally, comparing WQR-ALASSO and WQR-AE-NET with their unweighted versions, the former outperform the latter in all respective cases with respect to prediction, whilst there is no clear "winner" with respect to the other two measures. Overall, WQR-ALASSO generally outperforms all other models with respect to all measures. At the predictor matrix with collinearity-masking points under the t-distribution, all adaptive versions outperformed their respective non-adaptive versions with respect to all metrics. In the unweighted scenarios, QR-ALASSO and QR-AE-NET dominate their non-adaptive versions as follows: in prediction, 63% and 75% of the time; with respect to correctly fitted models, 100% and 38% of the time (being equal in the remaining 62%); and with respect to correctly shrunk zero coefficients, 100% of the time. In the weighted scenarios, all adaptive versions outperformed their non-adaptive versions 62% of the time in both respective cases with respect to prediction, while it is vice-versa with respect to correctly fitted models and correctly shrunk zero coefficients. In the weighted scenarios, WQR-ALASSO and WQR-AE-NET also dominate their respective non-adaptive versions as follows: with respect to correctly fitted models, 62% of the time, while with respect to correctly shrunk zero coefficients, 100% of the time in both cases. At the design matrix with both collinearity and high leverage points under heavy-tailed distributions (t-distributions with d ∈ (1; 6) degrees of freedom), the dominance of the adaptive procedures over the non-adaptive ones is again evident. In the unweighted scenarios, QR-ALASSO and QR-AE-NET outperform their non-adaptive versions as follows: in prediction, 75% and 62% of the time; with respect to correctly fitted models, 100% and 88% of the time; and with respect to correctly shrunk zero coefficients, 100% of the time in both cases. In the weighted scenarios, WQR-ALASSO and WQR-AE-NET dominate their non-adaptive versions as follows: with respect to prediction, 100% of the time in both cases; and with respect to both correctly fitted models and correctly shrunk zero coefficients, 88% of the time in both cases. Results from applications of the suggested procedures to real-life data sets are broadly in line with the simulation study results.
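To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of adaptive-LASSO penalized quantile regression at level τ, with the adaptive weights taken from an initial unpenalized QR fit rather than from a ridge regression fit, as the abstract describes. The use of CVXPY for the convex program, statsmodels for the pilot fit, and the exponent gamma are illustrative assumptions. Adding a squared-ℓ2 term to the penalty would give the adaptive E-NET (QR-AE-NET) variant, and multiplying the check loss by observation weights that downweight high leverage points would give the weighted (WQR) versions.

# Hedged sketch: adaptive-LASSO penalized quantile regression (QR-ALASSO)
# with QR-based adaptive weights. Illustrative only; function name, gamma,
# and the solver stack are assumptions, not the authors' code.
import numpy as np
import cvxpy as cp
import statsmodels.api as sm

def qr_alasso(X, y, tau=0.5, lam=1.0, gamma=1.0):
    """Adaptive-LASSO penalized QR at quantile level tau."""
    n, p = X.shape

    # Pilot estimate: ordinary (unpenalized) QR, giving QR-based adaptive weights
    pilot = sm.QuantReg(y, sm.add_constant(X)).fit(q=tau)
    beta_pilot = np.asarray(pilot.params)[1:]          # drop the intercept
    w = 1.0 / (np.abs(beta_pilot) ** gamma + 1e-8)     # adaptive weights

    # Convex program: quantile (check) loss plus weighted L1 penalty
    beta = cp.Variable(p)
    b0 = cp.Variable()
    resid = y - X @ beta - b0
    check_loss = cp.sum(tau * cp.pos(resid) + (1 - tau) * cp.pos(-resid))
    penalty = lam * cp.sum(cp.multiply(w, cp.abs(beta)))
    cp.Problem(cp.Minimize(check_loss / n + penalty)).solve()

    return b0.value, beta.value

Coefficients shrunk exactly to zero by the weighted L1 penalty correspond to the "correctly shrunk zero coefficients" counted in the simulation summaries above.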

Keywords