- 1 Oak Ridge National Laboratory, Oak Ridge, United States of America (forrest@climatemodeling.org, collierno@ornl.gov, xum1@ornl.gov)
- 2 DLR, Oberpfaffenhofen, Germany (Birgit.Hassler@dlr.de)
- 3 Climate Resource, Melbourne, Australia (jared.lewis@climate-resource.com)
- 4 Netherlands eScience Center, Amsterdam, Netherlands (b.andela@esciencecenter.nl)
- 5 Lawrence Livermore National Laboratory, Livermore, United States of America (lee1043@llnl.gov, ordonez4@llnl.gov, ullrich4@llnl.gov)
- 6 CMIP International Program Office, Harwell, United Kingdom (Briony.Turner@ext.esa.int)
- *A full list of authors appears at the end of the abstract
The goal of the Coupled Model Intercomparison Project (CMIP) is to better understand past, present, and future changes in the Earth system in a multi-model context. In an effort to increase the project's scientific and societal relevance, improve accessibility, and widen participation, the CMIP Panel advocated establishing a number of Task Teams to support the design, scope, and definition of the next phase of CMIP, as well as the evolution of CMIP infrastructure and the operationalization of CMIP activities.
An important prerequisite for providing credible information about the Earth system from models is to understand their capabilities and limitations. Systematic and comprehensive assessment of models against the best available observational and reanalysis data is therefore essential. For CMIP7, new model evaluation challenges stemming from higher resolution, enhanced complexity, and machine learning components must be rigorously addressed. The Climate Model Benchmarking Task Team aims to provide a vision and concrete guidance for establishing systematic, open, and rapid performance evaluation of the expected large number of models participating in CMIP7, including a variety of performance metrics and informative diagnostics. The Task Team designed a plan for a community Rapid Evaluation Framework (REF) that would leverage and integrate existing open-source community model evaluation tools for benchmarking the performance of CMIP simulations contributed by participating modeling centers.
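To make the notion of a performance metric concrete, the sketch below computes an area-weighted root-mean-square error (RMSE) between a model field and an observational reference, a statistic typical of the benchmarking tools the REF integrates. The synthetic data, variable names, and grid are illustrative placeholders and are not drawn from the REF codebase.

```python
import numpy as np
import xarray as xr

# Synthetic stand-ins for a CMIP model field and an observational
# reference (e.g., near-surface air temperature on a 1-degree grid).
# A real evaluation tool would read these from published datasets.
lat = np.linspace(-89.5, 89.5, 180)
lon = np.linspace(0.5, 359.5, 360)
rng = np.random.default_rng(0)
shape = (lat.size, lon.size)
model = xr.DataArray(288.0 + rng.normal(0.0, 5.0, shape),
                     coords={"lat": lat, "lon": lon}, dims=("lat", "lon"))
ref = xr.DataArray(288.0 + rng.normal(0.0, 5.0, shape),
                   coords={"lat": lat, "lon": lon}, dims=("lat", "lon"))

# On a regular latitude-longitude grid, cell areas scale with
# cos(latitude), so the spatial mean must be area-weighted to avoid
# over-counting the poles.
weights = np.cos(np.deg2rad(model.lat))

bias = model - ref
rmse = float(np.sqrt((bias**2).weighted(weights).mean(("lat", "lon"))))
print(f"Area-weighted RMSE: {rmse:.2f} K")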
Based on community input, an initial set of metrics and diagnostics was identified to be applied to Intergovernmental Panel on Climate Change (IPCC) Seventh Assessment Report (AR7) Fast Track simulations. With co-sponsorship from the US Department of Energy (DOE) and the European Space Agency (ESA), development of the REF was launched in November 2024. The REF delivery team will integrate evaluation tools and a workflow system to be deployed at two or more primary Earth System Grid Federation (ESGF) node sites, providing automated production of diagnostic information for CMIP model developers, data users, and stakeholders. The REF is expected to evolve to provide additional metrics and diagnostics and to use more data products, guided by a community panel or consortium to be formed in the coming year.
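The automated production of diagnostics can be pictured as a registry of diagnostic functions run against each newly published dataset, with results serialized for display. The skeleton below is a hypothetical sketch under that assumption; the names, functions, and JSON layout are invented for illustration and do not reflect the actual REF architecture or its APIs.

```python
import json
from typing import Callable, Dict

# Hypothetical registry mapping diagnostic names to functions; the real
# REF integrates existing community evaluation tools rather than ad hoc
# callables like these.
DIAGNOSTICS: Dict[str, Callable[[dict], float]] = {}

def diagnostic(name: str):
    """Register a diagnostic function under a human-readable name."""
    def wrap(func: Callable[[dict], float]) -> Callable[[dict], float]:
        DIAGNOSTICS[name] = func
        return func
    return wrap

@diagnostic("global_mean_tas")
def global_mean_tas(dataset: dict) -> float:
    # Placeholder: a real diagnostic would open the dataset and compute
    # an area-weighted statistic as in the earlier sketch.
    return dataset["tas_mean"]

def evaluate(dataset: dict) -> str:
    """Run every registered diagnostic and serialize the results."""
    results = {name: func(dataset) for name, func in DIAGNOSTICS.items()}
    return json.dumps({"dataset": dataset["id"], "metrics": results}, indent=2)

# A newly published simulation would trigger an evaluation pass.
print(evaluate({"id": "CMIP7.example.r1i1p1f1", "tas_mean": 287.9}))
```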
CMIP Climate Model Benchmarking Task Team: Birgit Hassler, Forrest Hoffman, Ranjini Swaminathan, Rebecca Beadling, Ed Blockley, Jiwoo Lee, Valerio Lembo, Jared Lewis, Jianhua Lu, Luke Madaus, Elizaveta Malinina, Brian Medeiros, Wilfried Pokam Mba, and Enrico Scoccimarro
How to cite: Hoffman, F., Hassler, B., Lewis, J., Andela, B., Collier, N., Lee, J., Ordonez, A., Turner, B., Ullrich, P., and Xu, M. and the CMIP Climate Model Benchmarking Task Team: The CMIP Rapid Evaluation Framework (REF) for automated and systematic benchmarking of coupled models, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-14773, https://doi.org/10.5194/egusphere-egu25-14773, 2025.