Ph.D. Alumni: Reza Mokhtari
Reference:
Reza Mokhtari
Methods for GPU acceleration of Big Data applications
Ph.D. Thesis, Department of Electrical and Computer Engineering, University of Toronto, Toronto, Canada, 2017.
Supervisor(s):
Michael Stumm
Download Thesis:
Abstract:
Big Data applications are trivially parallelizable because they typically consist of simple and straightforward operations performed on a large number of independent input records. GPUs appear to be particularly well suited for this class of applications given their high degree of parallelism and high memory bandwidth. However, a number of issues severely complicate matters when trying to exploit GPUs to accelerate these applications. First, Big Data is often too large to fit in the GPU's separate, limited-size memory. Second, data transfers to and from GPUs are expensive because the bus that connects CPUs and GPUs has limited bandwidth and high latency; in practice, this often results in data-starved GPU cores. Third, GPU memory bandwidth is high only if data is laid out in memory such that GPU threads accessing memory at the same time access adjacent memory; unfortunately, this is not how Big Data is laid out in practice.
This dissertation presents three solutions that help mitigate the above issues and enable GPU acceleration of Big Data applications: BigKernel, a system that automates and optimizes CPU-GPU communication and GPU memory accesses; S-L1, a caching subsystem implemented in software; and a hash table designed for GPUs. Our key contributions include: (i) the first automatic CPU-GPU data management system that improves on the performance of the state-of-the-art double-buffering scheme (a scheme that overlaps communication with computation to improve GPU performance), (ii) a GPU level 1 cache implemented entirely in software that outperforms the hardware L1 when used by Big Data applications, and (iii) a GPU-based hash table (for storing the key-value pairs popular in Big Data applications) that can grow beyond the available GPU memory yet retain reasonable performance. These solutions allow many existing Big Data applications to be ported to GPUs in a straightforward way and achieve performance gains of between 1.04X and 7.2X over the fastest CPU-based multi-threaded implementations.
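For readers unfamiliar with the double-buffering baseline referenced in contribution (i), the following is a minimal CUDA sketch of that scheme, in which host-to-device transfers for one chunk are overlapped with kernel execution on another chunk using two streams and two device buffers. The chunk size, the toy per-record kernel, and all identifiers are illustrative assumptions, not code from the thesis.

// Double-buffering sketch: overlap CPU-GPU transfers with GPU computation.
#include <cuda_runtime.h>
#include <algorithm>
#include <cstdio>

__global__ void process(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;   // stand-in for simple per-record Big Data work
}

int main() {
    const int total = 1 << 24;          // input records, assumed larger than one chunk
    const int chunk = 1 << 20;          // per-transfer chunk size (assumed)

    // Pinned host memory is required for cudaMemcpyAsync to truly overlap with kernels.
    float *host_in, *host_out;
    cudaMallocHost(&host_in,  total * sizeof(float));
    cudaMallocHost(&host_out, total * sizeof(float));
    for (int i = 0; i < total; ++i) host_in[i] = 1.0f;

    float *d_in[2], *d_out[2];
    cudaStream_t stream[2];
    for (int b = 0; b < 2; ++b) {
        cudaMalloc(&d_in[b],  chunk * sizeof(float));
        cudaMalloc(&d_out[b], chunk * sizeof(float));
        cudaStreamCreate(&stream[b]);
    }

    // Alternate between the two buffers/streams so that while one stream is
    // computing on its chunk, the other stream is copying the next chunk in.
    for (int off = 0, b = 0; off < total; off += chunk, b ^= 1) {
        int n = std::min(chunk, total - off);
        cudaMemcpyAsync(d_in[b], host_in + off, n * sizeof(float),
                        cudaMemcpyHostToDevice, stream[b]);
        process<<<(n + 255) / 256, 256, 0, stream[b]>>>(d_in[b], d_out[b], n);
        cudaMemcpyAsync(host_out + off, d_out[b], n * sizeof(float),
                        cudaMemcpyDeviceToHost, stream[b]);
    }
    cudaDeviceSynchronize();
    printf("first result: %f\n", host_out[0]);

    for (int b = 0; b < 2; ++b) {
        cudaFree(d_in[b]); cudaFree(d_out[b]); cudaStreamDestroy(stream[b]);
    }
    cudaFreeHost(host_in); cudaFreeHost(host_out);
    return 0;
}

This baseline hides some transfer latency but still requires the programmer to partition data and manage buffers by hand; BigKernel automates that data management and improves on its performance.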
Keywords:
GPU, GPGPU, Big Data, Memory Management
BibTeX:
@phdthesis{Mokhtari-PhD17,
  author      = {Reza Mokhtari},
  title       = {Methods for GPU acceleration of Big Data applications},
  school      = {Department of Electrical and Computer Engineering, University of Toronto},
  address     = {Toronto, Canada},
  supervisors = {Michael Stumm},
  month       = {June},
  year        = {2017},
  keywords    = {GPU, GPGPU, Big Data, Memory Management}
}