IEEE Access (Jan 2020)
Hardware Acceleration for Container Migration on Resource-Constrained Platforms
Abstract
The computing capabilities of client devices are continuously increasing; at the same time, the demand for ultra-low latency (ULL) services is growing. These ULL services can be provided by migrating some micro-service container computations from the cloud and multi-access edge computing (MEC) to the client devices. Migrating a container image requires compression and decompression, which are computationally demanding. We quantitatively examine the hardware acceleration of container image compression and decompression on a client device. Specifically, we compare Intel® QuickAssist Technology (QAT) hardware acceleration with software compression/decompression. For scenarios with a local container image registry (i.e., without network bandwidth constraints), we find that Intel QAT speeds up compression by a factor of more than seven compared to single-core GZIP software compression and reduces CPU core utilization by over 15% for large container images. These Intel QAT benefits come at the expense of input/output (I/O) memory access bitrates of up to 900 Mbyte/s (whereas software compression/decompression does not require I/O memory access). For scenarios with a remote container image registry, we find that the container push (compression) time savings increase with the network bandwidth, while the container pull (decompression) time savings level out for moderately high network bandwidths and decrease slightly for very high network bandwidths. Furthermore, Intel QAT acceleration achieves substantial power consumption reductions for container push compression at low to moderately high network bandwidths. Our evaluation results provide reference performance benchmarks for the achievable latencies of container image instantiation and migration with and without hardware acceleration of the compression and decompression of container images.
Keywords