IEEE Access (Jan 2021)

Alleviating I/O Interference in Virtualized Systems With VM-Aware Persistency Control

  • Taehyung Lee
  • Minho Lee
  • Young Ik Eom

DOI: https://doi.org/10.1109/ACCESS.2021.3090865
Journal volume & issue: Vol. 9, pp. 89263–89275

Abstract

Consolidating multiple servers into a single physical machine is now commonplace in cloud infrastructures. Virtualized systems often place the virtual disks of multiple virtual machines (VMs) on the same underlying storage device while striving to guarantee each VM's performance service level objective (SLO). Unfortunately, sync operations issued by one VM can make it hard to satisfy the performance SLO by disturbing the I/O activities of other VMs. In this paper, we experimentally show that the disk cache flush operation incurs significant I/O interference among VMs, and revisit the internal architecture and flush mechanism of flash memory-based SSDs. We then present vFLUSH, a novel VM-aware flush mechanism that supports VM-based persistency control for the disk cache flush operation. We also discuss the long-tail latency issue in vFLUSH and an efficient scheme for mitigating it. Our evaluation with various micro- and macro-benchmarks shows that vFLUSH reduces the average latency of disk cache flush operations by up to 58.5%, thereby improving throughput by up to 1.93×. The scheme for alleviating the long-tail latency problem reduces tail latency by up to 75.9%, with a modest throughput degradation of 2.9–7.2%.
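To make the interference mechanism concrete, the sketch below shows the kind of guest-side durability barrier the abstract refers to: an application writes data and calls fsync(), which the guest's virtual disk stack typically translates into a device-level cache FLUSH on the shared SSD, draining the volatile write cache for all co-located VMs. This is only an illustration of the triggering operation, not the vFLUSH mechanism itself; the file name and helper are hypothetical.

```python
import os
import tempfile

def durable_write(path, data):
    """Write data and force it to stable storage.

    In a VM, os.fsync() is the point where the guest issues a disk
    cache flush request; on shared storage, that flush drains the
    device's write cache for every tenant, which is the cross-VM
    interference the paper measures. (Illustrative sketch only.)
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # durability barrier -> device-level cache flush
    finally:
        os.close(fd)

# Hypothetical usage: persisting a commit record, as a database
# journal inside a VM would.
with tempfile.TemporaryDirectory() as d:
    log = os.path.join(d, "journal.log")
    durable_write(log, b"commit-record")
    with open(log, "rb") as f:
        print(f.read())  # b'commit-record'
```

A VM-aware scheme like vFLUSH aims to scope the persistency guarantee of such a flush to the issuing VM, instead of letting one tenant's durability barrier stall the I/O of all others.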

Keywords