IEEE Access (Jan 2024)
Fine-Tuning Quadcopter Control Parameters via Deep Actor-Critic Learning Framework: An Exploration of Nonlinear Stability Analysis and Intelligent Gain Tuning
Abstract
Quadcopters exhibit underactuated, nonlinear, and strongly coupled dynamics, which makes their control a challenging task. Nevertheless, PID controllers have performed remarkably well for such systems in a variety of scenarios, including obstacle avoidance, trajectory tracking, and path planning. In the cascaded control structure commonly adopted for quadcopters, the outer loop handles mission-level objectives such as position tracking, while the inner loop is responsible for attitude stabilization and control. However, optimizing the controller gains of a nonlinear system with heuristic or rule-based approaches remains a laborious, time-consuming, and challenging task. This study implements an optimal gain self-tuning framework for the altitude, attitude, and position controllers of a six-degrees-of-freedom nonlinear quadcopter model using a deep reinforcement learning algorithm with continuous observation and action spaces. The state equations are derived using the Lagrange method, while the aerodynamic coefficients are computed numerically using blade element momentum theory. In addition, the asymptotic stability of the cascaded closed-loop nonlinear system is investigated using Lyapunov theory. The proposed technique is validated through simulations, which demonstrate that the quadcopter closely follows a specified trajectory when the optimized gains are applied. Most notably, these optimal gains satisfy the constraints derived from the Lyapunov stability analysis, suggesting that reinforcement learning is a powerful tool for accommodating the uncertainties present in complex nonlinear systems.
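The gain self-tuning idea summarized above can be sketched in miniature. The snippet below is not the paper's deep actor-critic framework: a simple hill-climbing search stands in for the RL agent, the plant is reduced to a one-axis altitude model (m·z̈ = u − m·g), and all gains, limits, and the episode cost are illustrative assumptions chosen for brevity. It only shows the closed loop "simulate episode → score tracking error → update gains" that the learned tuner automates.

```python
# Toy PID gain tuning for a 1-DOF altitude model (m*z'' = u - m*g).
# Hypothetical stand-in for the paper's deep actor-critic tuner:
# a random-search hill climber adjusts (kp, ki, kd) to minimize
# integrated squared tracking error over a simulated episode.
import random

M, G = 1.0, 9.81        # mass (kg), gravity (m/s^2) -- assumed values
DT, STEPS = 0.01, 500   # Euler integration step and episode length
Z_REF = 1.0             # desired altitude (m)

def episode_cost(kp, ki, kd):
    """Simulate one altitude-hold episode; return integrated squared error."""
    z, vz, integ, prev_err, cost = 0.0, 0.0, 0.0, Z_REF, 0.0
    for _ in range(STEPS):
        err = Z_REF - z
        integ += err * DT
        deriv = (err - prev_err) / DT
        u = M * G + kp * err + ki * integ + kd * deriv  # thrust command
        vz += ((u - M * G) / M) * DT                    # Euler integration
        z += vz * DT
        prev_err = err
        cost += err * err * DT
    return cost

def tune(iters=200, seed=0):
    """Hill-climbing stand-in for the RL agent's gain search."""
    rng = random.Random(seed)
    gains = [1.0, 0.1, 0.5]  # initial (kp, ki, kd), chosen arbitrarily
    best = episode_cost(*gains)
    for _ in range(iters):
        # Perturb gains; keep them non-negative (cf. the Lyapunov-derived
        # positivity constraints on the controller gains in the paper).
        cand = [max(0.0, g + rng.gauss(0, 0.3)) for g in gains]
        c = episode_cost(*cand)
        if c < best:  # accept only candidates that track better
            gains, best = cand, c
    return gains, best

if __name__ == "__main__":
    (kp, ki, kd), cost = tune()
    print(f"tuned gains kp={kp:.2f} ki={ki:.2f} kd={kd:.2f}, cost={cost:.4f}")
```

In the full framework this outer search is replaced by a deep actor-critic agent acting over continuous spaces, and the scalar plant by the six-degrees-of-freedom quadcopter model, but the structure of the tuning loop is the same.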
Keywords