Contribution to Book
A Parallel Sparse Tensor Benchmark Suite on CPUs and GPUs
PPoPP '20: Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (2020)
  • Jiajia Li, Pacific Northwest National Laboratory
  • Mahesh Lakshminarasimhan, University of Utah
  • Xiaolong Wu, Purdue University
  • Ang Li, Pacific Northwest National Laboratory
  • Catherine Olschanowsky, Boise State University
  • Kevin Barker, Pacific Northwest National Laboratory
Abstract
Tensor computations present significant performance challenges that impact a wide spectrum of applications. Efforts to improve the performance of tensor computations include exploring data layout, execution scheduling, and parallelism in common tensor kernels. This work presents a benchmark suite for arbitrary-order sparse tensor kernels using state-of-the-art tensor formats: coordinate (COO) and hierarchical coordinate (HiCOO). It demonstrates a set of reference tensor kernel implementations and some observations on Intel CPUs and NVIDIA GPUs. The full paper is available at http://arxiv.org/abs/2001.00660.
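To make the COO format concrete, here is a minimal illustrative sketch (not code from the benchmark suite): COO stores each nonzero of an arbitrary-order sparse tensor as an index tuple plus a value, and kernels iterate over that nonzero list. The `ttv_mode1` function below is a hypothetical example kernel, a mode-1 tensor-times-vector product for a 3rd-order tensor.

```python
# Illustrative COO sketch for a sparse 3rd-order tensor X of shape (3, 3, 3):
# each entry is ((i, j, k), value), one per nonzero.
coo = [((0, 0, 1), 1.0),
       ((0, 2, 0), 2.0),
       ((1, 1, 2), 3.0),
       ((2, 0, 1), 4.0)]

def ttv_mode1(coo, u, v, nrows):
    """Mode-1 tensor-times-vector: y[i] = sum_{j,k} X[i,j,k] * u[j] * v[k].

    Runs in O(nnz) time by streaming over the COO nonzero list.
    """
    y = [0.0] * nrows
    for (i, j, k), x in coo:
        y[i] += x * u[j] * v[k]
    return y

u = [1.0, 1.0, 1.0]  # length J
v = [1.0, 1.0, 1.0]  # length K
print(ttv_mode1(coo, u, v, nrows=3))  # [3.0, 3.0, 4.0]
```

HiCOO, by contrast, compresses these index tuples by grouping nonzeros into small blocks; the sketch above only shows the flat COO idea.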
Keywords
  • sparse tensors
  • benchmark
  • GPU
  • roofline model
Publication Date
2020
Publisher
Association for Computing Machinery
ISBN
978-1-4503-6818-6
DOI
10.1145/3332466.3374513
Publisher Statement
This is a poster proceeding.
Citation Information
Jiajia Li, Mahesh Lakshminarasimhan, Xiaolong Wu, Ang Li, Catherine Olschanowsky, and Kevin Barker. "A Parallel Sparse Tensor Benchmark Suite on CPUs and GPUs." PPoPP '20: Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. New York: Association for Computing Machinery (2020), pp. 403-404.
Available at: http://works.bepress.com/catherine-olschanowsky/13/