Sensors (Jul 2024)

A Parallel Compression Pipeline for Improving GPU Virtualization Data Transfers

  • Cristian Peñaranda,
  • Carlos Reaño,
  • Federico Silla

DOI
https://doi.org/10.3390/s24144649
Journal volume & issue
Vol. 24, no. 14
p. 4649

Abstract

GPUs are commonly used to accelerate the execution of applications in domains such as deep learning. Deep learning applications are deployed in an increasing variety of scenarios, edge computing being one of them. However, edge devices present severe limitations in computing power and energy. In this context, remote GPU virtualization solutions are an efficient way to address these concerns. Nevertheless, the limited network bandwidth might become an issue. This limitation can be alleviated by applying on-the-fly compression within the communication layer of remote GPU virtualization solutions. In this way, data exchanged with the remote GPU is transparently compressed before being transmitted, effectively increasing the usable network bandwidth. In this paper, we present the implementation of a parallel compression pipeline designed to be used within remote GPU virtualization solutions. A thorough performance analysis shows that the effective network bandwidth can be increased by a factor of up to 2×.
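
The idea of a parallel compression pipeline can be pictured with a minimal sketch: the data bound for the remote GPU is split into chunks, the chunks are compressed concurrently by worker threads, and the compressed chunks are then transmitted in order, so that compressing later chunks overlaps with sending earlier ones. The sketch below is illustrative only, not the paper's implementation: zlib is used as a stand-in codec, the 64 KiB chunk size is an assumed value, and sendCompressed is a hypothetical placeholder for the virtualization middleware's communication layer.

```cpp
// Illustrative sketch: chunk a buffer, compress chunks concurrently,
// and "send" them in order so compression overlaps with transmission.
// zlib is used here as a stand-in codec; the actual pipeline may use a
// different compression library.
#include <zlib.h>

#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <future>
#include <vector>

// Compress one chunk with zlib; returns the compressed bytes.
static std::vector<unsigned char> compressChunk(const unsigned char* src,
                                                std::size_t len) {
    uLongf bound = compressBound(static_cast<uLong>(len));
    std::vector<unsigned char> out(bound);
    if (compress(out.data(), &bound, src, static_cast<uLong>(len)) != Z_OK) {
        out.clear();  // signal failure; a real pipeline would fall back to sending raw data
        return out;
    }
    out.resize(bound);
    return out;
}

// Hypothetical placeholder for the network send done by the communication layer.
static void sendCompressed(const std::vector<unsigned char>& chunk) {
    std::printf("sent %zu compressed bytes\n", chunk.size());
}

int main() {
    constexpr std::size_t kChunkSize = 64 * 1024;              // pipeline granularity (assumed value)
    std::vector<unsigned char> payload(4 * 1024 * 1024, 'x');  // stand-in for data sent to the remote GPU

    // Parallel stage: launch compression of every chunk asynchronously.
    std::vector<std::future<std::vector<unsigned char>>> stages;
    for (std::size_t off = 0; off < payload.size(); off += kChunkSize) {
        std::size_t len = std::min(kChunkSize, payload.size() - off);
        stages.push_back(std::async(std::launch::async, compressChunk,
                                    payload.data() + off, len));
    }

    // Drain the pipeline in order: while chunk i is being sent,
    // later chunks are still being compressed by the worker threads.
    for (auto& stage : stages) {
        sendCompressed(stage.get());
    }
    return 0;
}
```

Keeping the chunks ordered on the sending side preserves the semantics of the original transfer, while the per-chunk compression work is what allows the pipeline to hide compression latency behind transmission.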

Keywords