IEEE Access (Jan 2022)

RLOps: Development Life-Cycle of Reinforcement Learning Aided Open RAN

  • Peizheng Li,
  • Jonathan Thomas,
  • Xiaoyang Wang,
  • Ahmed Khalil,
  • Abdelrahim Ahmad,
  • Rui Inacio,
  • Shipra Kapoor,
  • Arjun Parekh,
  • Angela Doufexi,
  • Arman Shojaeifard,
  • Robert J. Piechocki

DOI
https://doi.org/10.1109/ACCESS.2022.3217511
Journal volume & issue
Vol. 10, pp. 113808–113826

Abstract


Radio access network (RAN) technologies continue to evolve, with Open RAN gaining the most recent momentum. In the O-RAN specifications, the RAN intelligent controllers (RICs) are software-defined orchestration and automation functions for the intelligent management of the RAN. This article introduces principles for machine learning (ML), and in particular reinforcement learning (RL), applications in the O-RAN stack. Furthermore, we review the state-of-the-art research in wireless networks and cast it onto the RAN framework and the hierarchy of the O-RAN architecture. We provide a taxonomy of the challenges faced by ML/RL models throughout the development life-cycle: from system specification to production deployment (data acquisition, model design, testing and management, etc.). To address these challenges, we integrate a set of existing MLOps principles with the unique characteristics that arise when RL agents are considered. This paper discusses a systematic model development, testing and validation life-cycle, termed RLOps. We discuss the fundamental parts of RLOps, which include model specification, development, production-environment serving, operations monitoring, and safety/security. Based on these principles, we propose best practices for RLOps to achieve an automated and reproducible model development process. Finally, a holistic data analytics platform rooted in the O-RAN deployment is designed and implemented to embody the aforementioned RLOps principles and best practices.
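The RLOps life-cycle summarized in the abstract can be pictured as a staged pipeline. The sketch below is a minimal, illustrative Python rendering of those stages; it is not taken from the paper, and all class, function, and handler names are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable, Dict, List


class RLOpsStage(Enum):
    """Life-cycle stages named in the abstract (illustrative ordering)."""
    SPECIFICATION = auto()    # system/model specification
    DEVELOPMENT = auto()      # agent design, training, testing, validation
    SERVING = auto()          # deployment into the production (O-RAN) environment
    MONITORING = auto()       # operations monitoring of the deployed agent
    SAFETY_SECURITY = auto()  # safety/security checks spanning the cycle


@dataclass
class RLOpsPipeline:
    """Hypothetical pipeline that runs registered callbacks per stage."""
    handlers: Dict[RLOpsStage, List[Callable[[], None]]] = field(default_factory=dict)

    def register(self, stage: RLOpsStage, handler: Callable[[], None]) -> None:
        self.handlers.setdefault(stage, []).append(handler)

    def run(self) -> None:
        # Iterate stages in declaration order; a real RLOps deployment would
        # loop back on validation or monitoring failures rather than run once.
        for stage in RLOpsStage:
            for handler in self.handlers.get(stage, []):
                handler()


if __name__ == "__main__":
    pipeline = RLOpsPipeline()
    pipeline.register(RLOpsStage.SPECIFICATION,
                      lambda: print("specify RL task and O-RAN interfaces"))
    pipeline.register(RLOpsStage.MONITORING,
                      lambda: print("track deployed-agent KPIs"))
    pipeline.run()
```

The linear `run()` loop is a simplification: the paper's point is that RL agents add feedback paths (e.g., retraining triggered by monitoring) on top of standard MLOps stages.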
