IEEE Access (Jan 2018)
Cache Access Fairness in 3D Mesh-Based NUCA
Abstract
Given the steady growth of cache capacity over the past few decades, cache access efficiency has come to play a critical role in determining system performance. To use cache resources efficiently, non-uniform cache architecture (NUCA) has been proposed to provide both large capacity and short access latency. With the support of networks-on-chip (NoC), NUCA is often employed to organize the last-level cache. However, this approach also hurts cache access fairness, i.e., the degree of uniformity among cache access latencies. A drop in fairness increases the number of cache accesses with excessively high latency, which creates a bottleneck in system performance. This paper investigates cache access fairness in the context of NoC-based 3-D chip architecture and provides new insights into 3-D architecture design. We propose fair NUCA (F-NUCA), a co-design scheme that optimizes cache access fairness. In F-NUCA, we strive to improve fairness by equalizing cache access latencies. To this end, both the memory mapping and the channel width are redistributed non-uniformly, thereby equalizing the non-contention and contention latencies, respectively. The experimental results show that F-NUCA effectively improves cache access fairness. Compared with traditional static NUCA in simulations with the PARSEC benchmarks, F-NUCA reduces the average latency and the latency standard deviation by 4.64%/9.38% on average for a 4 × 4 × 2 mesh network and by 6.31%/13.51% for a 4 × 4 × 4 mesh network. In addition, system throughput improves by 4.0%/6.4% for the two mesh network scales, respectively.
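The abstract quantifies fairness via the mean and standard deviation of cache access latencies: the lower the standard deviation, the more uniform (fairer) the accesses. A minimal sketch of how such metrics could be computed is shown below; the latency samples are purely hypothetical and only illustrate the direction of the reported effect, not the paper's actual data.

```python
import statistics

def fairness_metrics(latencies):
    """Summarize a set of cache access latencies (in cycles).

    Returns (mean, population standard deviation). A smaller standard
    deviation indicates more uniform, and hence fairer, cache access.
    """
    return statistics.mean(latencies), statistics.pstdev(latencies)

# Hypothetical latency samples (cycles): a static-NUCA-like distribution
# with a wide spread, and an F-NUCA-like distribution after equalization.
snuca = [10, 12, 30, 45, 14, 38, 11, 40]
fnuca = [18, 20, 26, 30, 19, 28, 18, 29]

m1, s1 = fairness_metrics(snuca)
m2, s2 = fairness_metrics(fnuca)
print(f"S-NUCA-like: mean={m1:.2f}, stdev={s1:.2f}")
print(f"F-NUCA-like: mean={m2:.2f}, stdev={s2:.2f}")
```

In this toy data the equalized distribution has both a lower mean and a much lower standard deviation, mirroring the two reductions the paper reports (average latency and latency standard deviation).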
Keywords