IEEE Access (Jan 2022)
Time Series Prediction via Similarity Search: Exploring Invariances, Distance Measures and Ensemble Functions
Abstract
The rapid advance of scientific research in data mining has led to the adaptation of conventional pattern extraction methods to the context of time series analysis. The forecasting (or prediction) task has been supported mainly by regression algorithms based on artificial neural networks, support vector machines, and $k$-Nearest Neighbors ($k$NN). However, some studies have provided empirical evidence that similarity-based methods, i.e., variations of $k$NN, constitute a promising approach compared with more complex predictive models from both machine learning and statistics. Although the scientific community has made great strides in increasing the visibility of these easy-to-fit and impressively accurate algorithms, previous work has failed to recognize the right invariances needed for this task. We propose a novel extension of $k$NN, namely $k$NN - Time Series Prediction with Invariances ($k$NN-TSPI), which differs from the literature by combining techniques to obtain amplitude and offset invariance, complexity invariance, and treatment of trivial matches. Our predictor enables more meaningful matches between reference queries and data subsequences. In a comprehensive evaluation on real-world datasets, we demonstrate that $k$NN-TSPI is competitive against two conventional similarity-based approaches and, most importantly, against 11 popular predictors. To assist future research and provide a better understanding of the behavior of similarity-based methods, we also explore different settings of $k$NN-TSPI regarding invariances to distortions in time series, distance measures, complexity-invariant distances, and ensemble functions. Results show that $k$NN-TSPI stands out for its robustness and stability, both with respect to the parameter $k$ and to the accuracy of the trends over the projection horizon.
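To make the ideas in the abstract concrete, the sketch below illustrates one common way a similarity-based forecaster can realize the invariances mentioned above: z-normalization of subsequences for amplitude and offset invariance, a complexity-invariant correction factor applied to the Euclidean distance, exclusion of subsequences that overlap the query as trivial matches, and a simple mean of the $k$ nearest continuations as the ensemble function. This is a minimal illustration under those assumptions, not the authors' $k$NN-TSPI implementation; the function names (znorm, cid_distance, knn_similarity_forecast) are hypothetical.

```python
import numpy as np

def znorm(x, eps=1e-8):
    """Remove offset (mean) and amplitude (std) differences from a subsequence."""
    return (x - x.mean()) / (x.std() + eps)

def complexity(x):
    """Complexity estimate: length of the line connecting consecutive points."""
    return np.sqrt(np.sum(np.diff(x) ** 2))

def cid_distance(q, c, eps=1e-8):
    """Euclidean distance scaled by a complexity-invariant correction factor."""
    ed = np.linalg.norm(q - c)
    cq, cc = complexity(q), complexity(c)
    return ed * (max(cq, cc) / (min(cq, cc) + eps))

def knn_similarity_forecast(series, m, h, k=3):
    """Predict h future values by matching the last m observations (the query)
    against earlier subsequences and averaging the continuations of the k
    most similar ones."""
    series = np.asarray(series, dtype=float)
    query = series[-m:]
    q = znorm(query)
    scored = []
    for start in range(len(series) - m - h + 1):
        # Trivial-match treatment (simplified): skip candidates overlapping the query window.
        if start + m > len(series) - m:
            continue
        c = series[start:start + m]
        scored.append((cid_distance(q, znorm(c)), start))
    scored.sort(key=lambda t: t[0])
    forecasts = []
    for _, start in scored[:k]:
        c = series[start:start + m]
        cont = series[start + m:start + m + h]
        # Rescale the continuation from the candidate's scale back to the query's scale.
        cont_z = (cont - c.mean()) / (c.std() + 1e-8)
        forecasts.append(cont_z * query.std() + query.mean())
    # Ensemble function: plain mean of the k rescaled continuations.
    return np.mean(forecasts, axis=0)

# Example usage on a synthetic trend-plus-seasonality series.
if __name__ == "__main__":
    t = np.arange(300)
    y = 10 + 0.02 * t + np.sin(2 * np.pi * t / 12)
    print(knn_similarity_forecast(y, m=24, h=6, k=3))
```

Other distance measures, complexity corrections, and ensemble functions (e.g., median or distance-weighted averaging) can be swapped into this skeleton, which is precisely the design space the paper explores for $k$NN-TSPI.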
Keywords