A Sparse Tensor Benchmark Suite for CPUs and GPUs
2020 IEEE International Symposium on Workload Characterization
  • Jiajia Li, Pacific Northwest National Laboratory
  • Mahesh Lakshminarasimhan, University of Utah
  • Xiaolong Wu, Purdue University
  • Ang Li, Pacific Northwest National Laboratory
  • Catherine Olschanowsky, Boise State University
  • Kevin Barker, Pacific Northwest National Laboratory
Document Type
Conference Proceeding
Publication Date
1-1-2020
Abstract

Tensor computations present significant performance challenges that impact a wide spectrum of applications, ranging from machine learning, healthcare analytics, social network analysis, and data mining to quantum chemistry and signal processing. Efforts to improve the performance of tensor computations include exploring data layout, execution scheduling, and parallelism in common tensor kernels. This work presents a benchmark suite for arbitrary-order sparse tensor kernels using state-of-the-art tensor formats: coordinate (COO) and hierarchical coordinate (HiCOO), on both CPUs and GPUs. It provides a set of reference tensor kernel implementations that work with real-world tensors as well as power-law tensors generated by extending synthetic graph generation techniques. We also propose Roofline performance models for these kernels to provide insight into computing platforms from a sparse tensor perspective. The benchmark suite, along with the synthetic tensor generator, is publicly available.
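The COO format referenced in the abstract stores each nonzero explicitly as one index per mode plus a value. As a rough illustration only, the C sketch below uses hypothetical type and function names (coo3_t, ttv_mode3) that are not taken from the benchmark suite; it holds a third-order tensor as parallel index arrays and streams over the nonzeros in a tensor-times-vector style contraction.

    /* Illustrative sketch, not the benchmark suite's actual data structure. */
    #include <stdint.h>

    typedef struct {
        uint64_t  nnz;          /* number of stored nonzeros */
        uint32_t *i, *j, *k;    /* one index array per tensor mode */
        double   *val;          /* nonzero values */
    } coo3_t;

    /* Tensor-times-vector along the third mode: Y(i,j) += X(i,j,k) * v(k),
     * accumulating into a dense I-by-J output stored row-major. */
    void ttv_mode3(const coo3_t *X, const double *v, double *Y, uint64_t J)
    {
        for (uint64_t n = 0; n < X->nnz; n++)
            Y[(uint64_t)X->i[n] * J + X->j[n]] += X->val[n] * v[X->k[n]];
    }

HiCOO, by contrast, compresses the index arrays by grouping nonzeros into small multi-dimensional blocks; the sketch above shows only the flat COO baseline.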

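The Roofline models mentioned in the abstract bound a kernel's attainable throughput by the smaller of the machine's peak compute rate and its memory bandwidth scaled by the kernel's arithmetic intensity (FLOPs per byte moved). A minimal sketch, with made-up platform numbers purely for illustration:

    #include <stdio.h>

    /* Classic Roofline bound: attainable GFLOP/s is capped either by peak
     * compute or by bandwidth times arithmetic intensity. */
    double roofline_gflops(double peak_gflops, double peak_gbps, double ai_flop_per_byte)
    {
        double bw_bound = peak_gbps * ai_flop_per_byte;
        return bw_bound < peak_gflops ? bw_bound : peak_gflops;
    }

    int main(void)
    {
        /* Hypothetical platform: 1000 GFLOP/s peak, 100 GB/s bandwidth.
         * Sparse tensor kernels usually sit at low arithmetic intensity,
         * so the bandwidth roof, not the compute roof, is the binding limit. */
        printf("bound at AI = 0.25 FLOP/byte: %.1f GFLOP/s\n",
               roofline_gflops(1000.0, 100.0, 0.25));
        return 0;
    }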
Citation Information
Li, Jiajia; Lakshminarasimhan, Mahesh; Wu, Xiaolong; Li, Ang; Olschanowsky, Catherine; and Barker, Kevin. (2020). "A Sparse Tensor Benchmark Suite for CPUs and GPUs". In 2020 IEEE International Symposium on Workload Characterization (pp. 193-204). IEEE. https://doi.org/10.1109/IISWC50251.2020.00027