Using the Loop Chain Abstraction to Schedule Across Loops in Existing Code
International Journal of High Performance Computing and Networking
  • Ian J. Bertolacci, University of Arizona
  • Michelle Mills Strout, University of Arizona
  • Jordan Riley, Colorado State University
  • Stephen M. J. Guzik, Colorado State University
  • Eddie C. Davis, Boise State University
  • Catherine Olschanowsky, Boise State University
Document Type
Article
Publication Date
1-1-2019
Abstract

Exposing opportunities for parallelisation while explicitly managing data locality is the primary challenge in porting and optimising computational science simulation codes for better performance. OpenMP provides mechanisms for expressing parallelism, but it remains the programmer's responsibility to group computations to improve data locality. The loop chain abstraction, in which a summary of data access patterns is attached as pragmas to parallel loops, gives compilers sufficient information to automate the parallelism versus data locality trade-off. We present the syntax and semantics of loop chain pragmas, which indicate which loops belong to a loop chain and specify a high-level schedule for the chain. We show example usage of the pragmas, detail efforts to automate the transformation of a legacy scientific code, written under specific language constraints, into loop chain code, describe the compiler implementation of the loop chain pragmas, and report performance results for a computational fluid dynamics benchmark.
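To make the idea concrete, the following is a minimal sketch in C of how access-pattern summaries and a high-level schedule might be attached to a chain of loops. The pragma spellings (omplc, domain, with, read/write, schedule) and the two-loop stencil are illustrative assumptions only, not the exact syntax defined in the article; unrecognised pragmas are simply ignored by a standard C compiler, so the code builds and runs as plain C.

    /* Hypothetical loop chain sketch: two stencil loops sharing one chain.
       The pragma keywords below are stand-ins for the loop chain pragmas
       described in the article, not a confirmed specification. */
    #define N 1000
    static double A[N + 2], B[N + 2];

    void stencil_chain(void)
    {
        /* High-level schedule for the whole chain, e.g. fuse the two
           loops and tile the fused iteration space (hypothetical syntax). */
        #pragma omplc loopchain schedule(fuse, tile(64))
        {
            /* First loop: summarise reads and writes so the compiler can
               reason about the parallelism vs. data locality trade-off. */
            #pragma omplc for domain(1:N) with (i) \
                read  A { (i-1), (i), (i+1) } \
                write B { (i) }
            for (int i = 1; i <= N; ++i)
                B[i] = (A[i-1] + A[i] + A[i+1]) / 3.0;

            /* Second loop in the chain: consumes B, producing A. */
            #pragma omplc for domain(1:N) with (i) \
                read  B { (i-1), (i), (i+1) } \
                write A { (i) }
            for (int i = 1; i <= N; ++i)
                A[i] = (B[i-1] + B[i] + B[i+1]) / 3.0;
        }
    }

Under this sketch, the access summaries tell the scheduler that iteration i of the second loop depends only on iterations i-1, i, and i+1 of the first, which is what makes fusion and tiling across the two loops legal without whole-program dependence analysis.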

Citation Information
Ian J. Bertolacci, Michelle Mills Strout, Jordan Riley, Stephen M. J. Guzik, et al. "Using the Loop Chain Abstraction to Schedule Across Loops in Existing Code," International Journal of High Performance Computing and Networking (2019).
Available at: http://works.bepress.com/catherine-olschanowsky/22/