IEEE Access (Jan 2024)

Solar Irradiance Forecasting Using a Hybrid Quantum Neural Network: A Comparison on GPU-Based Workflow Development Platforms

  • Ying-Yi Hong,
  • Dylan Josh Domingo Lopez,
  • Yun-Yuan Wang

DOI: https://doi.org/10.1109/ACCESS.2024.3472053
Journal volume & issue: Vol. 12, pp. 145079–145094

Abstract

Modern renewable power operations can be enhanced by integrating deep neural networks, particularly for forecasting solar irradiance. Recent advances in quantum computing have shown potential to improve classical deep neural networks. However, current limitations of quantum hardware, such as susceptibility to noise and decoherence, threaten its practicality. Hybrid quantum neural networks (HQNNs) have been found to mitigate these issues, especially when integrated with graphics processing unit (GPU)-based pipelines. This paper presents a comparative study of software platforms for developing HQNNs, using multi-location, very short-term solar irradiance forecasting as a case study. A classical benchmark model is first designed based on statistical analysis of a 10-minute-resolution solar irradiance dataset, with its parameters further optimized using Bayesian optimization. The experimental design includes a loss comparison between classical neural networks and HQNNs across different seasons, and a performance comparison among PennyLane, TorchQuantum, and CUDA Quantum (CUDA-Q) as HQNN development platforms. Experimental results show that HQNNs achieve up to a 92.30% improvement in testing loss over classical neural networks. Among the HQNN development platforms, PennyLane reduces testing loss by 81.54% relative to the classical models, TorchQuantum by 90.34%, and CUDA-Q by 92.30%. Using hardware-acceleration libraries for GPU-based state-vector simulation yields an approximate 275% speedup in average latency per epoch, a 218% speedup in inference time, and a 10.20% improvement in testing loss compared with CPU-based simulation. CUDA-Q trains 2.7 times faster and infers 2.9 times faster than PennyLane, and is 32.3 times faster in training and 31 times faster in inference than TorchQuantum.
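To make the HQNN setup concrete, the following is a minimal sketch of the kind of model the abstract describes: a classical network with a variational quantum circuit inserted as one layer, trained end to end and simulated on a GPU-backed state-vector simulator. It uses PennyLane (one of the three platforms compared) with its cuQuantum-backed "lightning.gpu" device; the qubit count, circuit depth, the eight lagged-irradiance input features, and the CPU fallback are illustrative assumptions, not the paper's tuned architecture.

```python
# Hedged sketch of an HQNN layer for irradiance forecasting (assumed
# hyperparameters, not the paper's optimized configuration).
import torch
import pennylane as qml

n_qubits = 4
n_vqc_layers = 2

# "lightning.gpu" is PennyLane's cuQuantum-backed state-vector simulator;
# fall back to the CPU simulator if no compatible GPU is available.
try:
    dev = qml.device("lightning.gpu", wires=n_qubits)
except Exception:
    dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def quantum_circuit(inputs, weights):
    # Encode classical features as rotation angles, apply entangling
    # layers, and read out Pauli-Z expectation values as the layer output.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (n_vqc_layers, n_qubits)}

# Hybrid model: classical encoder -> quantum layer -> classical head,
# trainable with an ordinary PyTorch optimizer and loss.
model = torch.nn.Sequential(
    torch.nn.Linear(8, n_qubits),   # 8 lagged irradiance features (assumed)
    torch.nn.Tanh(),                # bound the embedding angles
    qml.qnn.TorchLayer(quantum_circuit, weight_shapes),
    torch.nn.Linear(n_qubits, 1),   # next-step irradiance forecast
)

x = torch.randn(16, 8)              # dummy batch of 16 samples
print(model(x).shape)               # torch.Size([16, 1])
```

TorchQuantum and CUDA-Q versions would follow the same encoder/circuit/head pattern; swapping the simulator backend (CPU vs. GPU state vector) is what drives the latency differences the abstract reports.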

Keywords