Data Science and Engineering (Mar 2019)
Novel Parallel Algorithms for Fast Multi-GPU-Based Generation of Massive Scale-Free Networks
Abstract
A novel parallel algorithm is presented for generating random scale-free networks using the preferential attachment model. The algorithm, named cuPPA, is custom-designed for the single instruction, multiple data (SIMD) style of parallel processing supported by modern processors such as graphical processing units (GPUs). To the best of our knowledge, our algorithm is the first to exploit GPUs, and also the fastest implementation available today, for generating scale-free networks using the preferential attachment model. A detailed performance study is presented to understand the scalability and runtime characteristics of the cuPPA algorithm. A second version of the algorithm, called cuPPA-Hash, tailored for multiple GPUs is also presented. On a single GPU, the original cuPPA delivers the best performance but is difficult to port to a multi-GPU implementation; cuPPA-Hash is therefore used as the parallel algorithm for the multi-GPU implementation and achieves perfect linear speedup on up to 4 GPUs. In one of the best cases, when executed on an NVidia GeForce 1080 GPU, the original cuPPA generates a scale-free network of two billion edges in less than 3 s. On multi-GPU platforms, cuPPA-Hash generates a scale-free network of 16 billion edges in less than 7 s on a machine consisting of 4 NVidia Tesla P100 GPUs.
Keywords