EURASIP Journal on Wireless Communications and Networking (Jan 2010)
Multiagent Q-Learning for Aloha-Like Spectrum Access in Cognitive Radio Systems
Abstract
An Aloha-like spectrum access scheme without negotiation is considered for multiuser, multichannel cognitive radio systems. To avoid the collisions caused by the lack of coordination, each secondary user learns how to select channels according to its experience. Multiagent reinforcement learning (MARL) is applied for the secondary users to learn good channel selection strategies. Specifically, the framework of Q-learning is extended from the single-user case to the multiagent case by considering other secondary users as part of the environment. The dynamics of the Q-learning are illustrated using a Metrick-Polak plot, which shows the traces of Q-values in the two-user case. For both the complete and partial observation cases, rigorous proofs of the convergence of multiagent Q-learning without communications, under certain conditions, are provided using the Robbins-Monro algorithm and contraction mapping, respectively. The learning performance (speed and gain in utility) is evaluated by numerical simulations.
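To make the setting concrete, the following is a minimal sketch of the kind of stateless, per-channel Q-learning described in the abstract, where each secondary user updates its own Q-values and treats the other users simply as part of the environment. The parameter values, the epsilon-greedy exploration rule, and the reward convention (1 for a collision-free transmission, 0 otherwise) are illustrative assumptions, not details taken from the paper.

```python
import random

# Illustrative parameters (assumed, not from the paper)
NUM_USERS = 2        # secondary users
NUM_CHANNELS = 3     # available channels
ALPHA = 0.1          # learning rate
EPSILON = 0.1        # exploration probability
STEPS = 5000

# Each secondary user keeps its own Q-value per channel; there is no
# message exchange between users, matching the "without negotiation" setup.
Q = [[0.0] * NUM_CHANNELS for _ in range(NUM_USERS)]

def choose_channel(q_row):
    """Epsilon-greedy channel selection for one user."""
    if random.random() < EPSILON:
        return random.randrange(NUM_CHANNELS)
    best = max(q_row)
    return random.choice([c for c, v in enumerate(q_row) if v == best])

for _ in range(STEPS):
    choices = [choose_channel(Q[u]) for u in range(NUM_USERS)]
    for u, ch in enumerate(choices):
        # Reward 1 if this user transmitted alone on its channel, 0 on collision.
        reward = 1.0 if choices.count(ch) == 1 else 0.0
        # Stateless Q-learning update: move the estimate toward the observed reward.
        Q[u][ch] += ALPHA * (reward - Q[u][ch])

print(Q)  # typically, the users learn to settle on different channels
```

In this toy run, the per-user Q-tables tend to converge so that each user favors a distinct channel, which is the coordination-free collision avoidance behavior the abstract refers to.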