Entropy (Jul 2017)

Competitive Sharing of Spectrum: Reservation Obfuscation and Verification Strategies

  • Andrey Garnaev
  • Wade Trappe

DOI: https://doi.org/10.3390/e19070363
Journal volume & issue: Vol. 19, no. 7, p. 363

Abstract

Sharing of radio spectrum between different types of wireless systems (e.g., different service providers) is the foundation for making more efficient usage of spectrum. Cognitive radio technologies have spurred the design of spectrum servers that coordinate the sharing of spectrum between different wireless systems. These servers receive information regarding the needs of each system, and then provide instructions back to each system regarding the spectrum bands it may use. This sharing of information is complicated by the fact that these systems are often in competition with each other: each system desires to use as much of the spectrum as possible to support its users, and each system could learn which bands the other system uses and harm its communications in those bands. Three problems arise in such spectrum sharing: (1) how to maintain reliable performance for each system on the shared resource (licensed spectrum); (2) whether to believe the resource requests announced by each agent; and (3) if the requests are not believed, how much effort should be devoted to inspecting the spectrum so as to prevent possible malicious activity. Since this problem can arise for a variety of wireless systems, we present an abstract formulation in which the agent or spectrum server introduces obfuscation into the resource assignment to maintain reliability. We derive a closed-form expression for the expected damage that can arise from possible malicious activity, and use this formula to characterize the tradeoff between the number of extra decoys needed to support higher communication fidelity against potential interference and the cost of maintaining this reliability. We then examine a scenario in which a smart adversary may itself use obfuscation, and formulate it as a signaling game, which can be solved by applying a classical iterative forward-induction algorithm. For an important special case, the game is solved in closed form, yielding conditions for deciding whether an agent can be trusted, or whether its request should be inspected and how intensely.
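
To make the decoy-versus-cost tradeoff concrete, the sketch below uses a deliberately simplified toy model, not the closed-form damage expression derived in the paper: an agent announces k genuinely needed bands padded with d decoy bands, an adversary jams m of the announced bands uniformly at random, so the expected number of genuine bands hit is m*k/(k+d), and adding decoys trades this residual damage against the cost of reserving spectrum that is never used. All function names and parameter values here (expected_damage, total_cost, damage_per_band, decoy_cost) are hypothetical placeholders.

    # Toy decoy tradeoff under uniform random jamming (illustrative assumptions only).

    def expected_damage(k, d, m, damage_per_band=1.0):
        """Expected number of genuine bands hit when m of the k + d announced
        bands are attacked uniformly at random, scaled by per-band damage."""
        return damage_per_band * m * k / (k + d)

    def total_cost(k, d, m, damage_per_band=1.0, decoy_cost=0.1):
        """Expected damage plus the cost of maintaining d decoy reservations."""
        return expected_damage(k, d, m, damage_per_band) + decoy_cost * d

    if __name__ == "__main__":
        k, m = 5, 3  # assumed numbers of genuine and attacked bands
        best_d = min(range(51), key=lambda d: total_cost(k, d, m))
        print(f"decoys={best_d}, total cost={total_cost(k, best_d, m):.3f}")

With these assumed values the total cost is minimized at an intermediate number of decoys, illustrating the qualitative tradeoff described in the abstract; the paper itself derives this balance analytically.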

Keywords