IEEE Access (Jan 2020)
Principal Tensor Embedding for Unsupervised Tensor Learning
Abstract
Tensors and multiway analysis aim to explore the relationships between the variables used to represent the data and to find a summary of the data with models of reduced dimensionality. However, although this problem has received great attention, dimension reduction of high-order tensors remains a challenge. The aim of this article is to provide a nonlinear dimensionality reduction approach, named principal tensor embedding (PTE), for unsupervised tensor learning, which derives an explicit nonlinear model of the data. As in standard manifold learning (ML), it assumes that multidimensional data lie close to a low-dimensional manifold embedded in a high-dimensional space. On the basis of this assumption, a local parametrization of the data that accurately captures its local geometry is derived. From this mathematical framework, a nonlinear stochastic model of the data that depends on a reduced set of latent variables is obtained. In this way the initial problem of unsupervised learning is reduced to the regression of a nonlinear input-output function, i.e. a supervised learning problem. Extensive experiments on several tensor datasets demonstrate that the proposed ML approach gives competitive performance when compared with other techniques used for data reconstruction and classification.
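The reduction sketched in the abstract — turning unsupervised dimensionality reduction into a supervised regression from latent variables back to the data — can be illustrated with a minimal example. This is not the authors' PTE algorithm, only an assumed toy analogue: PCA stands in for the manifold-learning step, and a polynomial fit stands in for the nonlinear regression.

```python
# Toy analogue of "unsupervised learning reduced to supervised regression"
# (illustrative only; PCA and a polynomial fit are stand-ins, not PTE).
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(-1.0, 1.0, 200))      # true 1-D latent parameter
X = np.stack([t, t**2, t**3], axis=1)         # points on a curve in R^3
X += 0.01 * rng.normal(size=X.shape)          # small ambient noise

# Step 1 (unsupervised): estimate a 1-D latent coordinate as the
# projection onto the top principal axis of the centered data.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
z = Xc @ Vt[0]                                # estimated latent variable

# Step 2 (supervised): regress each ambient coordinate on the latent
# one with a cubic polynomial, i.e. learn an explicit nonlinear model
# z -> x from the (z, X) pairs produced in step 1.
Z = np.vander(z, 4)                           # design matrix [z^3, z^2, z, 1]
W, *_ = np.linalg.lstsq(Z, X, rcond=None)
X_hat = Z @ W                                 # reconstruction from 1 latent dim

rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Because the learned map z → x is an explicit parametric function, new points can be generated or denoised by evaluating it, which is the practical payoff of having an explicit nonlinear model rather than only an embedding.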
Keywords