IEEE Access (Jan 2020)

A Perpetual Learning Algorithm That Incrementally Improves Performance With Deliberation

  • Haiou Qin,
  • Du Zhang

DOI
https://doi.org/10.1109/ACCESS.2020.3009718
Journal volume & issue
Vol. 8
pp. 131425–131438

Abstract

In recent years, several proposals for continual learning (lifelong learning, never-ending learning, perpetual learning) have attracted much attention from researchers in the field of machine learning. This paper describes a perpetual learning algorithm augmented with a deliberation mechanism geared toward incrementally improving performance on all tasks learned so far. The algorithm maintains a prototype library in which each prototype is the model parameter vector of a representative learned task, and it produces the model for a new task as a linear combination of the prototypes. Once a new task model has been learned, deliberation proceeds according to two criteria: the difference in loss and the cosine similarity between the single-task model and its encoding over the prototypes. If the prototypes in the library represent the new task well, the library only needs to be tuned. Otherwise, the algorithm further determines whether replacing a prototype would make the library better suited to all learned tasks; if so, the prototype library is reconstructed. Compared with existing methods, the proposed method has several salient features. First, by selecting the best prototypes to represent all learned tasks, the algorithm achieves better classification and regression performance than, for example, the Efficient Lifelong Learning Algorithm (ELLA) in settings where only the training set of the current task can be buffered. Second, without needing to buffer training sets for multiple tasks, the proposed algorithm shows performance competitive with curriculum learning.
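The two deliberation criteria lend themselves to a short illustration. The Python sketch below is not the paper's implementation: the least-squares encoding, the squared loss, and the thresholds loss_tol and cos_tol are all illustrative assumptions. It only shows how a loss difference and a cosine similarity between the single-task model and its prototype encoding could gate the tune-versus-reconstruct decision.

```python
import numpy as np

def squared_loss(theta, X, y):
    """Mean squared error of a linear model with parameters theta."""
    return float(np.mean((X @ theta - y) ** 2))

def deliberate(theta_task, library, X, y, loss_fn=squared_loss,
               loss_tol=0.05, cos_tol=0.9):
    """Return the action a deliberation step might take for a new task.

    theta_task : (d,) parameter vector trained on the new task alone
    library    : (d, k) matrix whose k columns are prototype vectors
    loss_tol, cos_tol : illustrative thresholds (not from the paper)
    """
    # Encode the single-task model as a linear combination of the prototypes.
    # Plain least squares here; the paper may use a different encoding.
    coeffs, *_ = np.linalg.lstsq(library, theta_task, rcond=None)
    theta_enc = library @ coeffs

    # Criterion 1: difference in loss between encoded and single-task models.
    delta_loss = loss_fn(theta_enc, X, y) - loss_fn(theta_task, X, y)

    # Criterion 2: cosine similarity between the two parameter vectors.
    cos_sim = theta_task @ theta_enc / (
        np.linalg.norm(theta_task) * np.linalg.norm(theta_enc) + 1e-12)

    if delta_loss <= loss_tol and cos_sim >= cos_tol:
        return "tune_library"        # prototypes already represent the task well
    return "consider_replacement"    # swap a prototype, rebuild if it helps

# Toy usage: a 5-dimensional task model and a library of 3 random prototypes.
rng = np.random.default_rng(0)
X, true_theta = rng.normal(size=(50, 5)), rng.normal(size=5)
y = X @ true_theta
library = rng.normal(size=(5, 3))
print(deliberate(true_theta, library, X, y))
```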

Keywords