Machine Learning: Science and Technology (Jan 2024)

Assessing the quality of random number generators through neural networks

  • José Luis Crespo,
  • Javier González-Villa,
  • Jaime Gutiérrez,
  • Angel Valle

DOI
https://doi.org/10.1088/2632-2153/ad56fb
Journal volume & issue
Vol. 5, no. 2
p. 025072

Abstract


In this paper we address the use of Neural Networks (NNs) to assess the quality, and hence the safety, of several Random Number Generators (RNGs). We focus on the vulnerability of classical Pseudo Random Number Generators (PRNGs), such as Linear Congruential Generators (LCGs) and the RC4 algorithm, and extend our analysis to non-conventional data sources, such as Quantum Random Number Generators (QRNGs) based on Vertical-Cavity Surface-Emitting Lasers (VCSELs). Among the results obtained, we classify the generators according to the capability of the NN to distinguish between the RNG and a Golden Standard RNG (GSRNG). We show that sequences from simple PRNGs such as LCGs and RC4 can be distinguished from the GSRNG. We also show that sequences from an LCG on elliptic curves and from the VCSEL-based QRNG cannot be distinguished from the GSRNG, even with the largest long short-term memory (LSTM) or convolutional neural networks (CNNs) that we have considered. We underline the fundamental role of design decisions in enhancing the safety of RNGs. The influence of network architecture design and of the associated hyper-parameter variations is also explored. We show that longer sequence lengths and CNNs are more effective for discriminating RNGs against the GSRNG. Moreover, in the prediction domain, the proposed model is able to distinguish between the raw data of our QRNG and data from the GSRNG, exhibiting a cross-entropy error of 0.52 on the test dataset used. All these findings reveal the potential of NNs to enhance the security of RNGs, while highlighting the robustness of certain QRNGs, in particular the VCSEL-based variants, for high-quality random number generation applications.
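
The abstract describes a distinguishing experiment: a neural network is trained to decide whether a fixed-length sequence came from the candidate RNG or from a golden standard reference. The sketch below illustrates that idea only; it is not the authors' code. A small 1D CNN is trained to separate byte sequences produced by a weak LCG from sequences produced by NumPy's default PCG64 generator, which here merely stands in for the GSRNG. Sequence length, architecture, hyper-parameters, and the choice of generators are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): train a tiny 1D CNN to
# distinguish byte sequences from a classical LCG (label 1) from sequences
# produced by a reference generator standing in for the GSRNG (label 0).
# All sizes and hyper-parameters are assumptions made for illustration.

import numpy as np
import torch
import torch.nn as nn

SEQ_LEN = 256            # bytes per example (assumed)
N_TRAIN, N_TEST = 8000, 2000

def lcg_bytes(n_seq, seq_len, seed=12345):
    """Low-order bytes of a classical LCG (Numerical Recipes constants)."""
    a, c, m = 1664525, 1013904223, 2**32
    state = seed
    out = np.empty((n_seq, seq_len), dtype=np.uint8)
    for i in range(n_seq):
        for j in range(seq_len):
            state = (a * state + c) % m
            out[i, j] = state & 0xFF   # deliberately keep the weakest bits
    return out

def reference_bytes(n_seq, seq_len, seed=0):
    """Stand-in for the GSRNG: NumPy's default PCG64 bit generator."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 256, size=(n_seq, seq_len), dtype=np.uint8)

def make_split():
    """Build shuffled train/test tensors of shape (N, 1, SEQ_LEN) in [0, 1]."""
    n = N_TRAIN + N_TEST
    x = np.concatenate([lcg_bytes(n // 2, SEQ_LEN),
                        reference_bytes(n // 2, SEQ_LEN)])
    y = np.concatenate([np.ones(n // 2), np.zeros(n // 2)])
    x = torch.tensor(x, dtype=torch.float32).unsqueeze(1) / 255.0
    y = torch.tensor(y, dtype=torch.float32)
    perm = torch.randperm(n)
    x, y = x[perm], y[perm]
    return (x[:N_TRAIN], y[:N_TRAIN]), (x[N_TRAIN:], y[N_TRAIN:])

class Distinguisher(nn.Module):
    """Small 1D CNN; output logit > 0 means 'came from the candidate RNG'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=8), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 32, kernel_size=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(1)

def train_and_evaluate():
    (x_tr, y_tr), (x_te, y_te) = make_split()
    model = Distinguisher()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for epoch in range(10):
        perm = torch.randperm(len(x_tr))
        for i in range(0, len(x_tr), 128):
            idx = perm[i:i + 128]
            opt.zero_grad()
            loss = loss_fn(model(x_tr[idx]), y_tr[idx])
            loss.backward()
            opt.step()
    with torch.no_grad():
        logits = model(x_te)
        test_ce = loss_fn(logits, y_te).item()
        acc = ((logits > 0).float() == y_te).float().mean().item()
    # Cross-entropy near ln(2) ~= 0.693 (accuracy ~= 0.5) means the network
    # cannot tell the candidate from the reference; a clearly lower value
    # indicates an exploitable statistical weakness in the candidate RNG.
    print(f"test cross-entropy = {test_ce:.3f}, accuracy = {acc:.3f}")

if __name__ == "__main__":
    train_and_evaluate()
```

In this framing, a test cross-entropy well below ln(2) (about 0.693) signals that the network can distinguish the candidate from the reference, which is the criterion the abstract uses to separate weak generators (LCG, RC4) from the elliptic-curve LCG and the VCSEL-based QRNG.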

Keywords