Applied Sciences (Jul 2023)

Continual Learning Approach for Continuous Data Stream Analysis in Dynamic Environments

  • K. Prasanna,
  • Mudassir Khan,
  • Saeed M. Alshahrani,
  • Ajmeera Kiran,
  • P. Phanindra Kumar Reddy,
  • Mofadal Alymani,
  • J. Chinna Babu

DOI
https://doi.org/10.3390/app13148004
Journal volume & issue
Vol. 13, no. 14
p. 8004

Abstract

Continuous data stream analysis primarily focuses on unanticipated changes in the data distribution over time. Concept drift is defined as a change in the distribution of the signal over the course of a continuous data stream. Drift detection concerns the development of methods and strategies for detecting, interpreting, and adapting to such conceptual changes in data streams. Machine learning approaches can produce poor learning outcomes in a concept-drift environment if the sudden change is not addressed, and learning methodologies for concept drift have become significantly more systematic in recent years. This research introduces a novel approach that uses a fully connected committee machine (FCM) with different activation functions to address conceptual changes in continuous data streams. It explores several continual learning scenarios and investigates the effects of over-learning and weight decay under concept drift. The findings demonstrate the effectiveness of the FCM framework and provide insights into improving machine learning approaches for continuous data stream analysis. We use a layered neural network framework, the FCM, to experiment with different continual learning scenarios on continuous data streams in the presence of a change in the data distribution. Sigmoidal and ReLU (rectified linear unit) activation functions are considered for learning regression in the layered network. As the layered framework is trained on the input data stream, the regression scheme changes continuously in all scenarios. The FCM is trained with M hidden units on dynamically generated inputs to perform the continual learning tasks. In this method, we run Monte Carlo simulations with the same number of hidden units on both sides, K and M, to track the evolution of the overlaps between the hidden units and to compute the generalization error. The same setup is applied to over-learning as a mechanism of forgetting, integrating weight decay and examining its effects when concept drift is presented.
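To make the described setup concrete, the following is a minimal sketch of a teacher-student soft committee machine trained online under an abrupt concept drift, with a Monte Carlo estimate of the generalization error and optional weight decay. All specifics here (dimensions, learning rate, drift schedule, and names such as `committee` and `gen_error`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100            # input dimension (assumed)
M = 3              # teacher hidden units
K = 3              # student hidden units (K = M, matching the abstract)
ETA = 0.05         # learning rate (assumed)
LAMBDA = 1e-4      # weight decay strength (assumed)
STEPS = 60_000
DRIFT_STEP = 30_000  # point where the teacher's concept changes abruptly

def g(x):
    """Sigmoidal activation; swap in np.maximum(x, 0) for the ReLU variant."""
    return np.tanh(x)

def committee(W, x):
    """Soft committee machine: unweighted sum of hidden-unit activations."""
    return g(W @ x / np.sqrt(N)).sum()

def gen_error(W_student, W_teacher, n_mc=2000):
    """Monte Carlo estimate of the generalization error on fresh inputs."""
    X = rng.standard_normal((n_mc, N))
    errs = [(committee(W_student, x) - committee(W_teacher, x)) ** 2
            for x in X]
    return 0.5 * np.mean(errs)

# Teacher (concept) before and after the drift, and the student.
B = rng.standard_normal((M, N))
B_drifted = rng.standard_normal((M, N))
W = 0.1 * rng.standard_normal((K, N))

for t in range(STEPS):
    teacher = B if t < DRIFT_STEP else B_drifted  # abrupt concept drift
    x = rng.standard_normal(N)                    # dynamically generated input
    y = committee(teacher, x)
    # Online SGD on the squared error, plus weight decay.
    h = W @ x / np.sqrt(N)
    delta = committee(W, x) - y
    grad = delta * (1 - np.tanh(h) ** 2)[:, None] * x[None, :] / np.sqrt(N)
    W -= ETA * (grad + LAMBDA * W)
    if t % 10_000 == 0:
        R = W @ teacher.T / N  # student-teacher overlap order parameters
        print(f"step {t:>6}: eg = {gen_error(W, teacher):.4f}, "
              f"mean diag overlap = {np.mean(np.diag(R)):.3f}")
```

Under these assumptions, the generalization error drops as the student aligns with the teacher, jumps at the drift point, and then recovers as the student re-learns the new concept; the weight decay term is the knob one would vary to study forgetting versus adaptation.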

Keywords