IEEE Access (Jan 2020)

An Accelerated Edge Cloud System for Energy Data Stream Processing Based on Adaptive Incremental Deep Learning Scheme

  • Seong-Hwan Kim,
  • Changha Lee,
  • Chan-Hyun Youn

DOI
https://doi.org/10.1109/ACCESS.2020.3033771
Journal volume & issue
Vol. 8
pp. 195341 – 195358

Abstract

As smart metering technology evolves, power suppliers can make low-cost, low-risk estimates of customer-side power consumption by analyzing energy demand data collected in real time. With advances in network infrastructure, smart sensors, and various monitoring technologies, a standardized energy metering infrastructure, called the advanced metering infrastructure (AMI), has been introduced and deployed to urban households, enabling suppliers to develop efficient power generation plans. Compared with traditional stochastic approaches to time-series data analysis, deep-learning methods have shown superior accuracy in many prediction applications. Because smart meters and infrastructure monitors produce a series of measurements over time, a large amount of data accumulates into a large data stream, and much time elapses between data generation and the deployment of a trained deep-learning model. In this article, we propose an accelerated computing system that accounts for time-variant properties to accurately predict energy demand from AMI stream data. The proposed system is a real-time training/inference system that deploys AMI data over a distributed edge cloud. It comprises two core components: an adaptive incremental learning solver and deep-learning acceleration with FPGA-GPU resource scheduling. The adaptive incremental learning scheme adjusts the batch size and number of epochs in each training iteration to reduce the time delay of the latest trained model, while trying to prevent biased training caused by the sub-optimal optimizer of incremental learning. In addition, a resource scheduling scheme manages heterogeneous accelerator resources for accelerated deep-learning processing while minimizing the computational cost. The experimental results demonstrate that our method adapts the batch size and epoch count effectively for incremental learning while guaranteeing low inference error, a high model score, and queue stability with cost-efficient processing.
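To make the batch/epoch trade-off concrete, the following is a minimal Python sketch, not the authors' algorithm: it picks a batch size and epoch count for one incremental training iteration so that the estimated training time stays within a latency budget, draining as much of the stream queue as possible first. All names and the linear time model (`batch * epochs * time_per_sample_epoch`) are illustrative assumptions.

```python
def adapt_batch_epoch(queue_len, time_budget, time_per_sample_epoch,
                      min_batch=32, max_batch=1024, max_epochs=5):
    """Illustrative heuristic: choose (batch_size, epochs) whose estimated
    training time batch_size * epochs * time_per_sample_epoch fits within
    time_budget. Prefers a larger batch (drains the queue faster, reducing
    model staleness), then spends any remaining budget on extra epochs to
    reduce biased training from too few optimizer passes."""
    # Start from the largest batch the queue can supply.
    batch = max(min_batch, min(queue_len, max_batch))
    # Halve the batch until a single epoch fits the latency budget.
    while batch > min_batch and batch * time_per_sample_epoch > time_budget:
        batch //= 2
    # Spend leftover budget on additional epochs, capped at max_epochs.
    epochs = max(1, min(max_epochs,
                        int(time_budget // (batch * time_per_sample_epoch))))
    return batch, epochs


# Example: a long backlog with a tight 1-second budget shrinks the batch;
# a short backlog leaves room for several epochs.
print(adapt_batch_epoch(5000, 1.0, 0.001))  # → (512, 1)
print(adapt_batch_epoch(100, 1.0, 0.001))
```

In a real deployment the time model would come from profiling the accelerator (FPGA or GPU) rather than a fixed per-sample constant, and the controller would also react to queue growth between iterations.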

Keywords