IEEE Access (Jan 2023)

Meta-Feature Fusion for Few-Shot Time Series Classification

  • Seo-Hyeong Park,
  • Nur Suriza Syazwany,
  • Sang-Chul Lee

DOI
https://doi.org/10.1109/ACCESS.2023.3270493
Journal volume & issue
Vol. 11
pp. 41400 – 41414

Abstract


Deep learning has been widely adopted for end-to-end time-series classification (TSC). However, its effectiveness relies heavily on large-scale data, so deep models are prone to overfitting when only a few labeled samples are available. Few-shot learning (FSL) addresses this issue by learning to generalize to new tasks from few training samples (e.g., one or five samples per class). In FSL, learning good representations is crucial for classifying accurately with discriminative features. In this study, we propose a framework for few-shot TSC that encodes a time series as different types of images (i.e., recurrence plot, Markov transition field, and Gramian angular summation/difference fields) and trains the model on these images using the FSL procedure. The distinct characteristics of each image type enable the model to learn rich information. In addition, we propose temporal-context attention (TCA) and meta-feature fusion (MFF) to maximize the representational ability of these images. TCA incorporates the global context of the feature map and highlights pixels that carry informative relevance to other pixels. After feature extraction, MFF refines each feature with different kernels generated from cross-modality features and fuses the refined features. Finally, test samples are assigned to the nearest class prototype in the embedding space. All experiments are conducted on various N-way K-shot problems. Our framework outperforms state-of-the-art models on 28 standard datasets from the UCR (University of California, Riverside) archive, a widely used benchmark for time-series classification, by margins ranging from 0.34% to 29.4%.
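The pipeline sketched in the abstract can be illustrated compactly. The snippet below is a minimal sketch written independently of the authors' code, using only NumPy: it encodes a 1-D series as Gramian angular summation/difference fields and a recurrence plot, then assigns a query to the nearest class prototype. The `embed` function is a hypothetical placeholder for the paper's CNN with TCA and MFF, and the Markov transition field is omitted for brevity.

```python
# Sketch (not the authors' implementation): time-series-to-image encoding
# plus nearest-prototype classification, as described in the abstract.
import numpy as np

def gramian_angular_fields(x):
    """Return (GASF, GADF) for a 1-D series x, rescaled to [-1, 1]."""
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1    # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))                     # polar angle
    gasf = np.cos(phi[:, None] + phi[None, :])                 # summation field
    gadf = np.sin(phi[:, None] - phi[None, :])                 # difference field
    return gasf, gadf

def recurrence_plot(x, eps=0.1):
    """Binary recurrence plot: 1 where |x_i - x_j| <= eps."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) <= eps).astype(float)

def embed(images):
    """Hypothetical embedding: flatten and average the image channels.
    In the paper this is a CNN with TCA and MFF; here it is only a placeholder."""
    return np.mean([im.ravel() for im in images], axis=0)

def prototype_classify(query_emb, support_embs, support_labels):
    """Assign the query to the class whose prototype (mean support embedding)
    is nearest in Euclidean distance."""
    classes = np.unique(support_labels)
    protos = np.stack([support_embs[support_labels == c].mean(axis=0)
                       for c in classes])
    dists = np.linalg.norm(protos - query_emb, axis=1)
    return classes[np.argmin(dists)]

# Toy 2-way 1-shot episode with synthetic series
rng = np.random.default_rng(0)
series = {0: rng.normal(size=64), 1: np.sin(np.linspace(0, 6, 64))}
support_embs = np.stack([embed([*gramian_angular_fields(s), recurrence_plot(s)])
                         for s in series.values()])
support_labels = np.array(list(series.keys()))
query = np.sin(np.linspace(0, 6, 64)) + 0.05 * rng.normal(size=64)
q_emb = embed([*gramian_angular_fields(query), recurrence_plot(query)])
print(prototype_classify(q_emb, support_embs, support_labels))  # predicted class (likely 1)
```

In the paper, the averaging placeholder is replaced by a learned encoder whose per-image features are refined and fused by MFF before prototype matching; the sketch only fixes the overall episode structure (encode, embed, compare to prototypes).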

Keywords