EGU25-9151, updated on 14 Mar 2025
https://doi.org/10.5194/egusphere-egu25-9151
EGU General Assembly 2025
© Author(s) 2025. This work is distributed under
the Creative Commons Attribution 4.0 License.
Poster | Thursday, 01 May, 14:00–15:45 (CEST), Display time Thursday, 01 May, 14:00–18:00
 
Hall X1, X1.154
Scaling staggered grid code on pre-exascale machines
Iskander Ibragimov, Boris Kaus, and Anton Popov
  • Johannes-Gutenberg University Mainz, Geosciences, Mainz, Germany (iskander.ibragimov.mainz@gmail.com)

The transition to exascale (>1000 petaflops) computing necessitates the adaptation of numerical modeling tools to efficiently utilize emerging high-performance computing architectures. Within the ChEESE-2P project, further development of LaMEM (Lithosphere and Mantle Evolution Model) focuses on achieving scalable performance on advanced systems, including the EuroHPC supercomputer LUMI (currently #3 in Europe). Leveraging the PETSc library, LaMEM demonstrates both strong and weak scalability, achieving linear scaling up to 512 compute nodes and supporting high-resolution simulations on grids of up to 1024³ cells.
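
For illustration, the sketch below shows how a 3-D staggered grid with one velocity component per cell face and one pressure per cell centre can be set up on top of PETSc's DMStag object. This is a minimal, hypothetical example and not LaMEM's actual code (LaMEM builds its staggered discretization on PETSc DM objects; the grid size and degree-of-freedom layout shown here are illustrative only).

/* Minimal sketch (assumption, not LaMEM source): a 3-D staggered grid via DMStag,
 * with velocities on cell faces (dof2 = 1) and pressure in cell centres (dof3 = 1).
 * The grid size is illustrative and can be overridden at run time, e.g.
 *   -stag_grid_x 1024 -stag_grid_y 1024 -stag_grid_z 1024                       */
#include <petscdmstag.h>

int main(int argc, char **argv)
{
  DM  dm;
  Vec x;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(DMStagCreate3d(PETSC_COMM_WORLD,
                           DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                           64, 64, 64,                               /* global cells (illustrative) */
                           PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, /* MPI ranks per direction */
                           0, 0, 1, 1,                               /* dof per vertex/edge/face/cell */
                           DMSTAG_STENCIL_BOX, 1,                    /* stencil type and width */
                           NULL, NULL, NULL, &dm));
  PetscCall(DMSetFromOptions(dm));          /* pick up -stag_grid_* and related options */
  PetscCall(DMSetUp(dm));
  PetscCall(DMCreateGlobalVector(dm, &x));  /* distributed vector over all staggered unknowns */

  /* ... assemble the Stokes operator and solve with a PETSc KSP/PC here ... */

  PetscCall(VecDestroy(&x));
  PetscCall(DMDestroy(&dm));
  PetscCall(PetscFinalize());
  return 0;
}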

In response to the increasing emphasis on GPU-based computing, ongoing efforts are directed towards optimizing LaMEM for GPU architectures, including both NVIDIA and AMD systems. Preliminary results show significant progress in enabling GPU-accelerated runs and improving resource utilization. Overall, this work demonstrates LaMEM's ability to perform large-scale geodynamic simulations, contributing to the broader goal of integrating physics-based models with available data.
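
As a hedged illustration of how a PETSc-based code such as LaMEM can target both GPU vendors without changing its numerics, the sketch below selects CUDA or HIP vector/matrix back ends through PETSc's runtime options. The back ends and options actually used in the LaMEM port are not stated in the abstract, so these choices are assumptions based on standard PETSc usage.

/* Hedged sketch (assumption): steering PETSc's vector/matrix back ends to the GPU.
 * Equivalent command-line flags:
 *   NVIDIA: -dm_vec_type cuda -dm_mat_type aijcusparse
 *   AMD:    -dm_vec_type hip  -dm_mat_type aijhipsparse
 * (Kokkos back ends, -dm_vec_type kokkos / -dm_mat_type aijkokkos, are another route.) */
#include <petscsys.h>

int main(int argc, char **argv)
{
  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

#if defined(PETSC_HAVE_CUDA)
  PetscCall(PetscOptionsSetValue(NULL, "-dm_vec_type", "cuda"));
  PetscCall(PetscOptionsSetValue(NULL, "-dm_mat_type", "aijcusparse"));
#elif defined(PETSC_HAVE_HIP)
  PetscCall(PetscOptionsSetValue(NULL, "-dm_vec_type", "hip"));
  PetscCall(PetscOptionsSetValue(NULL, "-dm_mat_type", "aijhipsparse"));
#endif

  /* ... create DMs, assemble operators, and solve as usual: PETSc then dispatches
   *     vector and matrix kernels to the selected GPU back end ... */

  PetscCall(PetscFinalize());
  return 0;
}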

How to cite: Ibragimov, I., Kaus, B., and Popov, A.: Scaling staggered grid code on pre-exascale machines, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-9151, https://doi.org/10.5194/egusphere-egu25-9151, 2025.