GC11-solidearth-65, updated on 16 Mar 2023
https://doi.org/10.5194/egusphere-gc11-solidearth-65
Galileo Conference: Solid Earth and Geohazards in the Exascale Era
© Author(s) 2023. This work is distributed under
the Creative Commons Attribution 4.0 License.

Enabling End-to-End Accelerated Multiphysics Simulations in the Exascale Era Using PETSc

Richard Tran Mills1, Mark Adams2, Satish Balay1, Jed Brown3, Jacob Faibussowitsch1, Matthew G. Knepley4, Scott Kruger5, Hannah Morgan6, Todd Munson1, Karl Rupp7, Barry Smith8, Stefano Zampini9, Hong Zhang1, and Junchao Zhang1
  • 1Argonne National Laboratory
  • 2Lawrence Berkeley National Laboratory
  • 3University of Colorado Boulder
  • 4University at Buffalo
  • 5Tech-X Corporation
  • 6University of Chicago
  • 7Technische Universität Wien
  • 8Flatiron Institute
  • 9King Abdullah University of Science & Technology (KAUST)

The Portable Extensible Toolkit for Scientific Computation (PETSc) library provides scalable solvers for nonlinear, time-dependent differential and algebraic equations and for numerical optimization; it is used in dozens of scientific fields and has been an important building block for many computational geoscience applications. Starting from the terascale era in the 1990s and continuing into the present dawn of the exascale era, a major goal of PETSc development has been achieving the scalability required to fully utilize leadership-class supercomputers. We will describe some of the algorithmic developments made during the era in which achieving inter-node scalability was the primary challenge to enabling extreme-scale computation, and then survey the challenges posed by the current era, in which harnessing the abundant fine-grained parallelism within compute nodes, primarily in the form of GPU-based accelerators, has assumed at least equal importance. We will discuss how the PETSc design for performance portability addresses these challenges while stressing flexibility and extensibility by separating the programming model used by application code from that used by the library. Additionally, we will discuss recent developments in PETSc's communication module, PetscSF, that enable flexibility and scalable performance across large GPU-based systems while overcoming some of the difficulties posed by working directly with the Message Passing Interface (MPI) on such systems. A particular goal of this talk is not simply to describe the work performed to prepare PETSc, and the simulation codes that rely on it, to run on exascale-class systems, but also to enumerate the challenges we encountered and to share the essential lessons learned that can help other developers prepare and optimize their high-performance scientific computing codes for the exascale era.
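
As a brief illustrative sketch (not drawn from the talk itself), the small PETSc program below shows the separation of programming models that the abstract alludes to: the application is written entirely against the backend-agnostic Vec/Mat/KSP interfaces, and the GPU implementation and solver configuration are chosen at run time from the options database (for example, -vec_type cuda -mat_type aijcusparse -ksp_type cg -pc_type gamg), with no device-specific calls in the application code. The problem size and the 1-D Laplacian stencil here are illustrative assumptions, not part of the original abstract.

#include <petsc.h>

int main(int argc, char **argv)
{
  Vec      x, b;
  Mat      A;
  KSP      ksp;
  PetscInt n = 100, i, Istart, Iend;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* The matrix type (CPU, CUDA, HIP, Kokkos, ...) is selected at run time
     from the options database, e.g. -mat_type aijcusparse, so the code
     below contains no GPU-specific calls. */
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
  PetscCall(MatSetFromOptions(A));
  PetscCall(MatSetUp(A));

  /* Assemble a simple 1-D Laplacian on this rank's rows (illustrative only). */
  PetscCall(MatGetOwnershipRange(A, &Istart, &Iend));
  for (i = Istart; i < Iend; i++) {
    if (i > 0)     PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
    if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
    PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  /* Vectors inherit a compatible (possibly GPU-resident) type from the matrix. */
  PetscCall(MatCreateVecs(A, &x, &b));
  PetscCall(VecSet(b, 1.0));

  /* The Krylov method and preconditioner are likewise configured at run time,
     e.g. -ksp_type cg -pc_type gamg. */
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, b, x));

  PetscCall(KSPDestroy(&ksp));
  PetscCall(MatDestroy(&A));
  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(PetscFinalize());
  return 0;
}

Run under MPI, the same executable can target CPUs or GPU-accelerated nodes purely by changing command-line options, which is the flexibility the abstract highlights.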

How to cite: Tran Mills, R., Adams, M., Balay, S., Brown, J., Faibussowitsch, J., Knepley, M. G., Kruger, S., Morgan, H., Munson, T., Rupp, K., Smith, B., Zampini, S., Zhang, H., and Zhang, J.: Enabling End-to-End Accelerated Multiphysics Simulations in the Exascale Era Using PETSc, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-65, https://doi.org/10.5194/egusphere-gc11-solidearth-65, 2023.