Efficient Sparse Matrix-Vector Multiplication on GPUs using the CSR Storage Format


Published in the Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC14), November 2014 (acceptance rate: 83/394 ≈ 21%)


Joseph L. Greathouse, Mayank Daga


The performance of sparse matrix-vector multiplication (SpMV) is important to computational scientists. Compressed sparse row (CSR) is the most frequently used format for storing sparse matrices. However, CSR-based SpMV on graphics processing units (GPUs) performs poorly due to irregular memory access patterns, load imbalance, and reduced parallelism. This has led researchers to propose new storage formats. Unfortunately, dynamically transforming CSR into these formats incurs significant runtime and storage overheads.

We propose a novel algorithm, CSR-Adaptive, which keeps the CSR format intact and maps well to GPUs. Our implementation addresses the aforementioned challenges by (i) accessing DRAM efficiently by streaming data into local scratchpad memory and (ii) dynamically assigning different numbers of rows to each parallel GPU compute unit. CSR-Adaptive achieves an average speedup of 14.7x over existing CSR-based algorithms and 2.3x over clSpMV Cocktail, which uses an assortment of matrix formats.




PPTX | PPT | PDF

Copyright © 2014 IEEE. Hosted on this personal website as per this IEEE policy.