EGU General Assembly 2021
© Author(s) 2021. This work is distributed under
the Creative Commons Attribution 4.0 License.

Anticipating the computational performance of Earth System Models for pre-exascale systems

Xavier Yepes-Arbós, Miguel Castrillo, Mario C. Acosta, and Kim Serradell
  • Barcelona Supercomputing Center, Earth Sciences, Barcelona, Spain

The increase in the capability of Earth System Models (ESMs) is strongly linked to the amount of computing power available, given that the spatial resolution used for global climate experiments is a limiting factor in correctly reproducing the climate mean state and variability. However, higher spatial resolutions require new High Performance Computing (HPC) platforms, on which improving the computational efficiency of ESMs will be mandatory. In this context, porting a new ultra-high-resolution configuration to a new and more powerful HPC cluster is a challenging task that requires technical expertise to deploy such a novel configuration and improve its computational performance.

To take advantage of this foreseeable landscape, the new EC-Earth 4 climate model is being developed by coupling OpenIFS 43R3 and NEMO 4 as the atmosphere and ocean components, respectively. An important effort has been made to improve the computational efficiency of this new EC-Earth version, for example by extending the asynchronous I/O capabilities of the XIOS server to OpenIFS.
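The key idea behind XIOS's asynchronous I/O is that dedicated server processes receive field data from the model and write it to disk while the model keeps computing, so output does not block the time loop. The sketch below is a toy producer/consumer illustration of that pattern using a background thread and a queue; it is not XIOS's actual MPI-based protocol, and all names in it are hypothetical.

```python
import queue
import threading

def io_server(q, written):
    # Consumer: drains fields handed off by the model and "writes" them,
    # while the producer (the model time loop) continues computing.
    while True:
        field = q.get()
        if field is None:  # shutdown sentinel
            break
        written.append(field)

q = queue.Queue()
written = []
server = threading.Thread(target=io_server, args=(q, written))
server.start()

# Model time loop: hand each output field off without blocking on disk.
for step in range(3):
    q.put(f"field_step_{step}")

q.put(None)    # tell the server there is nothing more to write
server.join()  # wait for all pending output to complete
print(written)  # -> ['field_step_0', 'field_step_1', 'field_step_2']
```

In XIOS the hand-off happens over MPI to separate server ranks rather than to a thread, but the overlap of computation and output is the same design choice.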

In order to anticipate the computational behaviour of EC-Earth 4 on new pre-exascale machines such as the upcoming MareNostrum 5 at the Barcelona Supercomputing Center (BSC), the OpenIFS and NEMO models are benchmarked on a petascale machine (MareNostrum 4) to find potential computational bottlenecks introduced by new developments and to investigate whether previously known performance limitations have been solved. The outcome of this work can also be used to set up new ultra-high resolutions efficiently from a computational point of view, not only for EC-Earth but also for other ESMs.

Our benchmarking consists of large strong scaling tests (tens of thousands of cores) run with different output configurations, such as varying multiple XIOS parameters and the number of 2D and 3D fields. These very large tests need a huge amount of computational resources (up to 2,595 nodes, 75% of the supercomputer), so they require a special allocation that can be applied for only once a year.
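Strong scaling tests of this kind are usually summarised as speedup and parallel efficiency relative to a baseline run of the same fixed-size problem. A minimal sketch of that calculation follows; the core counts and timings are hypothetical placeholders, not measurements from this study.

```python
def scaling_metrics(baseline_cores, baseline_time, runs):
    """Strong-scaling speedup and parallel efficiency.

    runs: list of (cores, elapsed_seconds) pairs for the SAME problem
    size; baseline_time is the elapsed time on baseline_cores.
    """
    metrics = []
    for cores, elapsed in runs:
        speedup = baseline_time / elapsed
        # Ideal speedup grows linearly with the core-count ratio,
        # so efficiency is the achieved fraction of that ideal.
        efficiency = speedup / (cores / baseline_cores)
        metrics.append((cores, speedup, efficiency))
    return metrics

# Hypothetical timings (seconds) for illustration only.
runs = [(2304, 610.0), (4608, 330.0), (9216, 190.0)]
for cores, s, e in scaling_metrics(1152, 1100.0, runs):
    print(f"{cores} cores: speedup {s:.2f}, efficiency {e:.1%}")
```

A flattening speedup curve (efficiency dropping well below 100%) is exactly the kind of degradation the tests above are designed to detect.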

OpenIFS is evaluated at a 9 km global horizontal resolution (Tco1279) using three different output data sets: no output, CMIP6-based fields, and a huge output volume (8.8 TB) to stress the I/O part. In addition, different XIOS parameters, XIOS resources, process affinity, MPI-OpenMP hybridisation settings and MPI libraries are tested. The results suggest that the new features introduced in cycle 43R3 do not represent a performance bottleneck as the model scales. According to the scalability curve, the I/O scheme is also improved when data are output through XIOS.

NEMO is scaled at a 3 km global horizontal resolution (ORCA36) with and without the sea-ice module. As with OpenIFS, different I/O configurations are benchmarked, such as disabling model output, enabling only 2D fields, or producing 3D variables on an hourly basis. XIOS is also scaled and tested with different parameters. While NEMO scales well throughout most of the exercise, a severe degradation is observed before the model reaches 70% of the machine resources (2,546 nodes). The I/O overhead is moderate for the best XIOS configuration, but it demands many resources.
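The I/O overhead quoted here is typically measured as the relative slowdown of a run with output enabled versus an otherwise identical run with output disabled. A minimal sketch of that calculation, with hypothetical wall-clock times:

```python
def io_overhead(time_no_output, time_with_output):
    """Relative extra wall time attributable to output.

    Both times should cover the same simulated period with the same
    model configuration, differing only in whether output is enabled.
    """
    return (time_with_output - time_no_output) / time_no_output

# Hypothetical wall-clock times (seconds) for illustration only.
print(f"I/O overhead: {io_overhead(900.0, 945.0):.1%}")  # -> 5.0%
```

Comparing this ratio across XIOS configurations (number of servers, buffer settings) is what identifies the "best" configuration mentioned above.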

How to cite: Yepes-Arbós, X., Castrillo, M., Acosta, M. C., and Serradell, K.: Anticipating the computational performance of Earth System Models for pre-exascale systems, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-7888, 2021.
