OSA1.7 | Challenges in Weather and Climate Modelling: from model development via verification to operational perspectives
Conveners: Estíbaliz Gascón, Daniel Reinert, Balázs Szintai | Co-conveners: Chiara Marsigli, Manfred Dorninger
Orals | Wed, 06 Sep, 16:00–17:15 (CEST) | Lecture room B1.03
Posters | Attendance Thu, 07 Sep, 16:00–17:15 (CEST) | Display Wed, 06 Sep, 10:00–Fri, 08 Sep, 13:00 | Poster area 'Day room'
This session will handle various aspects of scientific and operational collaboration related to weather and climate modelling. The session will be split into three sub-sessions which will focus on the following topics:

- Challenges in developing high-resolution mesoscale models, with a focus on end-users and the EUMETNET forecasting programme, as well as observation impact studies that assess the importance of different parts of the observing system for global and limited-area NWP models.

- Numerics and physics-dynamics coupling in weather and climate models: This encompasses the development, testing and application of novel numerical techniques, the coupling between the dynamical core and physical parameterizations, variable-resolution modelling, as well as performance aspects on current and future supercomputer architectures.

- Model verification: developments and new approaches in the use of observations and verification techniques. This covers all verification aspects, from research to applications to general verification practice, and across all time and space scales. Verification subjects of particular interest include high-impact, user-oriented applications and warnings for adverse weather events or events with high risk or user relevance.

Orals: Wed, 6 Sep | Lecture room B1.03

Chairperson: Estíbaliz Gascón
16:00–16:30 | EMS2023-222 | solicited | Onsite presentation
Alok Samantaray, Priscilla Mooney, and Carla Vivacqua

Many studies in climate change research rely on error metrics to evaluate the performance of climate models. However, the majority of these studies use only one or two metrics, which limits the insights obtained from the analysis: each metric evaluates only a specific aspect of the model–data relationship, and important information may be missed if other aspects are not considered. To gain a more comprehensive understanding of model performance, it is necessary to use multiple error metrics. Doing so can reveal model strengths and weaknesses and provide insights for improving the model. Nevertheless, the choice of metrics should be based on the study's specific objectives and research questions, as different metrics may be more relevant or meaningful in different contexts. This study presents the Bergen Metric, a composite error metric that evaluates the overall performance of climate models based on the p-norm framework. The approach uses a non-parametric clustering technique to reduce the number of error metrics without losing relevant information. The research emphasizes the importance of using multiple error metrics to gain a thorough understanding of model behavior. We evaluated 89 regional climate simulations of precipitation and temperature over Europe using 38 different error metrics for eight European sub-regions, providing useful information about the metrics' behavior in different regions. Furthermore, the study highlights that error metrics can show conflicting behavior even when examining a single model, underscoring the need for multiple error metrics tailored to specific use cases. Overall, the Bergen Metric framework provides a useful tool to assess climate model performance and simplify the interpretation of results from multiple error metrics.
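
As an illustration of the p-norm idea underlying such composite scores, the sketch below combines normalized error metrics with a weighted p-norm. This is a minimal sketch only: the function name, weights, and inputs are hypothetical, and the non-parametric clustering step of the actual Bergen Metric is not shown.

```python
# Illustrative sketch, not the published Bergen Metric code: combine several
# normalized error metrics into a single composite score via a weighted p-norm.
import numpy as np

def pnorm_composite(errors, weights=None, p=2):
    """Combine normalized error-metric values (0 = perfect, 1 = worst) into one score."""
    e = np.asarray(errors, dtype=float)
    if weights is None:
        w = np.full_like(e, 1.0 / e.size)   # equal weights by default
    else:
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                     # normalize weights to sum to 1
    return (np.sum(w * np.abs(e) ** p)) ** (1.0 / p)

# Hypothetical example: three normalized metrics from one simulation
print(pnorm_composite([0.2, 0.5, 0.1], p=2))
```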

How to cite: Samantaray, A., Mooney, P., and Vivacqua, C.: A Framework to Evaluate the Climate Models, EMS Annual Meeting 2023, Bratislava, Slovakia, 4–8 Sep 2023, EMS2023-222, https://doi.org/10.5194/ems2023-222, 2023.

16:30–16:45 | EMS2023-35 | Onsite presentation
Tobias Necker, Ludwig Wolfgruber, Stefano Serafin, Manfred Dorninger, and Martin Weissmann

This study evaluates and demonstrates how to apply the Fractions Skill Score (FSS) for probabilistic forecast verification. The FSS is a spatial verification score originally designed for deterministic forecast verification. It is a neighborhood method frequently used to verify intermittent forecast fields, such as precipitation, that suffer from double penalty errors. Although the FSS was not designed for probabilistic verification, it is frequently used for verifying ensemble forecasts. However, as we show, systematic differences can occur depending on how an ensemble-based FSS is computed. 

Our study compares and evaluates four potential approaches for computing an FSS for ensemble forecasts. We study the dependence of these four approaches on different parameters, such as ensemble size, neighborhood size, and the frequency of occurrence of the forecast event. The dependence on ensemble size is examined using various subsamples of a large ensemble. Our comparison shows that the behavior of the FSS with ensemble size can vary greatly depending on the approach used to compute the score. Our experiments use unique convective-scale 1000-member ensemble forecasts of precipitation over Germany for a high-impact summer weather period. For verification, we use random independent members as synthetic observations to minimize the effect of systematic errors, such as biases between model and observations. In addition, we introduce a probabilistic believable scale and study its dependence on ensemble size. This evaluation highlights that a suitable ensemble size depends on the forecast event frequency. Our study can guide researchers who want to apply an FSS for ensemble forecast verification, as our findings provide insights on how to compute and interpret ensemble-based FSS results correctly.
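
For readers unfamiliar with the score, a minimal deterministic FSS sketch is given below, assuming 2-D fields on a regular grid and a square neighborhood. The second function shows just one possible ensemble extension (turning the members into a probability field first); it is not necessarily one of the four approaches compared in the study, and all names and inputs are illustrative.

```python
# Minimal Fractions Skill Score sketch for gridded fields (illustrative only).
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observation, threshold, neighborhood):
    """FSS for one deterministic field pair at a given threshold and window size."""
    fb = (forecast >= threshold).astype(float)        # binary exceedance fields
    ob = (observation >= threshold).astype(float)
    pf = uniform_filter(fb, size=neighborhood, mode="constant")  # neighborhood fractions
    po = uniform_filter(ob, size=neighborhood, mode="constant")
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)     # reference (no-skill) MSE
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

def fss_ensemble_prob(members, observation, threshold, neighborhood):
    """One possible ensemble variant: probability field from member exceedances."""
    prob = np.mean([(m >= threshold) for m in members], axis=0)
    ob = (observation >= threshold).astype(float)
    pf = uniform_filter(prob, size=neighborhood, mode="constant")
    po = uniform_filter(ob, size=neighborhood, mode="constant")
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

# Tiny usage example with random fields (purely illustrative)
rng = np.random.default_rng(0)
fcst = rng.gamma(shape=0.8, scale=2.0, size=(100, 100))
obsv = rng.gamma(shape=0.8, scale=2.0, size=(100, 100))
print(fss(fcst, obsv, threshold=1.0, neighborhood=11))
```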

How to cite: Necker, T., Wolfgruber, L., Serafin, S., Dorninger, M., and Weissmann, M.: How to use the fractions skill score for ensemble forecast verification, EMS Annual Meeting 2023, Bratislava, Slovakia, 4–8 Sep 2023, EMS2023-35, https://doi.org/10.5194/ems2023-35, 2023.

16:45–17:00 | EMS2023-602 | Onsite presentation
Llorenç Lledó, Gregor Skok, and Thomas Haiden

Due to the strongly fluctuating nature of precipitation in both space and time, high-resolution forecasts are needed to provide accurate information for user applications. However, as we transition to high-resolution forecasts, the predictability limits imposed by the physics of convective motions render traditional verification techniques less effective for measuring forecast quality. While high-resolution models may be able to realistically simulate convective motions and their associated precipitation, the exact location of the updrafts and the surface precipitation cannot be determined precisely. This poses a problem for classical point-to-point verification techniques such as the root mean squared error (RMSE), because any displacement of the precipitation in the forecasts results in a double penalty. There are three specific problems with RMSE in the presence of location errors: a) forecasts with less variability than the observations score better than misplaced but realistic forecasts, favouring unrealistic solutions; b) low-resolution forecasts can score better than more realistic high-resolution ones; and c) forecasts in which an observed feature is misplaced but nearby receive the same score as forecasts in which it is misplaced farther away.

Measuring the location error, i.e. the distance between precipitation features in the forecast and observation fields, is an intuitive way to address the third issue. However, to measure the displacements, one needs an assignment between features in the forecasts and the observations. The Wasserstein distance, defined as the minimum displacement over all possible assignments, is a theoretical way forward; however, computing it exactly is prohibitively expensive. Fortunately, there has been growing interest in the machine learning community in using Wasserstein distances to circumvent overly literal comparisons. As a result, new algorithms have been developed that approximate Wasserstein distances and scale linearly with the number of points to be assigned. In this presentation, we demonstrate the practical application of two fast approximate algorithms, the Flowtree and the Attribution distance methods, for measuring location errors. Both methods are very flexible, allowing the computation of location errors on gridded or unstructured datasets, even on the spherical geometry of the Earth. We showcase these novel verification metrics in specific use cases with ECMWF forecasts to highlight their strengths and weaknesses.
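
For two equal-size sets of precipitating grid points with equal weights, the assignment-based location error that these methods approximate can be computed exactly, if expensively, with the Hungarian algorithm. The sketch below (hypothetical function and inputs, not the Flowtree or Attribution distance implementations) illustrates the exact version and, via its O(n^3) cost, why fast approximations are attractive.

```python
# Exact assignment-based location error for small, equal-mass point sets
# (illustrative only; real fields require the approximate algorithms above).
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def assignment_location_error(points_fc, points_obs):
    """Minimum mean pairwise distance over all one-to-one assignments."""
    cost = cdist(points_fc, points_obs)       # all pairwise distances
    row, col = linear_sum_assignment(cost)    # Hungarian algorithm, O(n^3)
    return cost[row, col].mean()

# Example: a forecast feature displaced by one grid unit
fc = np.array([[10.0, 5.0], [11.0, 5.0], [12.0, 5.0]])
ob = fc + np.array([1.0, 0.0])
print(assignment_location_error(fc, ob))      # mean displacement of 1.0
```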

How to cite: Lledó, L., Skok, G., and Haiden, T.: Estimating location errors in precipitation forecasts with the Wasserstein and Attribution distances, EMS Annual Meeting 2023, Bratislava, Slovakia, 4–8 Sep 2023, EMS2023-602, https://doi.org/10.5194/ems2023-602, 2023.

17:00–17:15 | EMS2023-82 | Online presentation
Josef Schröttle, Llorenç Lledó, Cristina Lupu, Chris Burrows, and Elias Holm

Within the framework of Destination Earth (https://digital-strategy.ec.europa.eu/en/policies/destination-earth), we are running global weather forecasts at the km-scale. A major goal is to improve forecasts of wind for renewable energy purposes, as well as of precipitation in extreme weather events. Initialising the high-resolution forecasts requires analyses that can be constrained by high-resolution observations.

Our experiments use a denser set of clear-sky radiances from geostationary satellites. In assimilating the clear-sky observations, we simultaneously vary the spatial and temporal sampling. Short-range forecasts show significant improvements when compared to independent humidity-sensitive satellite instruments such as IASI and CrIS, as well as to all-sky instruments such as SSMIS. Comparisons with wind measurements such as those from Aeolus reveal better agreement under certain conditions, together with improvements in comparisons to conventional wind measurements.

Furthermore, we diagnose the effect on simulated satellite images for GOES and SEVIRI based on the study by Lopez and Matricardi (2022). Such a diagnostic tool allows for verification at the km-scale, as well as at very high temporal resolution, down to 10 min. Comparing infrared, visible, and water vapour observations with IFS-simulated cloud fields, we find better performance in the high-resolution simulations, as briefly discussed by Lledó et al. (2022) using the fractions skill score applied to clouds. In future experiments, we aim to compare forecasts at the km-scale based on assimilation experiments with higher temporal and spatial data density than in the current operational setting of the IFS.

  • Lledó, L., Haiden, T., Schröttle, J., and Forbes, R.: Scale-dependent verification of precipitation and cloudiness at ECMWF, ECMWF Newsletter No. 174, 2022.
  • Lopez, P. and Matricardi, M.: Validation of IFS+RTTOV/MFASIS 0.64-μm reflectances against observations from GOES-16, GOES-17, MSG-4 and Himawari-8, ECMWF Technical Memoranda, 2022.

How to cite: Schröttle, J., Lledó, L., Lupu, C., Burrows, C., and Holm, E.: On the benefits of assimilating a denser set of geostationary clear-sky radiance, EMS Annual Meeting 2023, Bratislava, Slovakia, 4–8 Sep 2023, EMS2023-82, https://doi.org/10.5194/ems2023-82, 2023.

Posters: Thu, 7 Sep, 16:00–17:15 | Poster area 'Day room'

Display time: Wed, 6 Sep 10:00–Fri, 8 Sep 13:00
Chairperson: Estíbaliz Gascón
P14 | EMS2023-243
Gregor Skok

Precipitation is one of the most important meteorological parameters and is notoriously difficult to measure, predict, and verify. Distance measures are one of the five classes of spatial verification metrics that try to address the problems of traditionally used non-spatial methods (which only compare values at collocated grid points). Distance measures express the results in terms of distance or displacement between the precipitation in the forecast and observation fields. Recently, we developed a new distance measure, called the Precipitation Attribution Distance (PAD), that is based on a random nearest-neighbor attribution concept: it works by sequentially attributing randomly selected precipitation in one field to the closest precipitation in the other. Here, we adapt the PAD to provide localized verification information and to take into account the spherical geometry of the Earth. Most distance measures provide only a single estimate of the distance/displacement of precipitation in the forecasts, representing the whole domain. In the real world, in a large domain encompassing multiple geographical regions with different climatological characteristics, the typical displacement errors will likely differ in each individual region, so localized estimates of errors would be more meaningful. Many spatial verification methods also have a hard time dealing with global domains: it is either difficult to use them in a way that properly accounts for the spherical geometry of a global domain, or the computation time in spherical geometry increases so much that they become impractical. The PAD, however, can be adapted to provide localized verification information and can also be modified for use in a global domain without a significant increase in computation time. We analyzed the behavior of the adapted metric on various idealized and real-world examples, which show that it provides meaningful localized verification results and properly accounts for the spherical geometry of the Earth in a global domain.
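
A heavily simplified sketch of the nearest-neighbor attribution idea is given below. The function and inputs are hypothetical; precipitation amounts, mass removal after attribution, and the spherical-geometry and localization extensions of the actual PAD are all ignored here.

```python
# Conceptual simplification of nearest-neighbor attribution (not the full PAD):
# randomly sample precipitating forecast grid points, attribute each to the
# nearest precipitating observed grid point, and summarize the distances.
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_attribution(points_fc, points_obs, n_samples=1000, seed=None):
    """Mean distance from randomly sampled forecast points to the nearest observed point."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(points_obs)                 # fast nearest-neighbor lookup
    idx = rng.integers(0, len(points_fc), size=n_samples)
    distances, _ = tree.query(points_fc[idx])  # distance to nearest observed point
    return distances.mean()

# Example: forecast precipitation displaced ~2 grid units from the observed feature
obs = np.array([[50.0, 50.0], [50.0, 51.0], [51.0, 50.0]])
fc = obs + np.array([2.0, 0.0])
print(nearest_neighbor_attribution(fc, obs, n_samples=200, seed=0))
```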

How to cite: Skok, G.: Using the Precipitation Attribution Distance for localized verification and in a global domain, EMS Annual Meeting 2023, Bratislava, Slovakia, 4–8 Sep 2023, EMS2023-243, https://doi.org/10.5194/ems2023-243, 2023.

P15 | EMS2023-382
Sebastian Schlögl, Caspar Wenzel, and Karl Gutbrod

Numerical weather forecasts have been continuously improved over the last decades due to a) improved subgrid parameterisations, b) more precise initial conditions (e.g., from satellite imagery), and c) more computational power allowing finer grid resolutions.

Weather forecast providers typically combine raw weather forecast models available on the market with their own post-processing routines, which incorporate, for example, weather station data and AI techniques, to increase forecast quality.

In this study, the forecasts of seven different weather forecast providers were verified at 500 measurement locations worldwide for the year 2022. The analysis was conducted for air temperature, precipitation, wind speed and direction, and air pressure for forecast lead times of 1–6 days. Forecast data were verified against METAR measurements at hourly temporal resolution.

Error metrics such as the mean absolute error (MAE), mean bias error (MBE), Pearson correlation, Heidke skill score (HSS), probability of detection (POD), and false alarm rate (FAR) were calculated for all weather forecast providers.
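
A minimal sketch of such point-verification scores is shown below, assuming paired hourly forecast/observation arrays and a fixed event threshold for the contingency-table scores. FAR is computed here as false alarms / (hits + false alarms), one common convention; this choice and all names below are illustrative assumptions, not the study's code.

```python
# Illustrative point-verification metrics for paired forecast/observation series.
import numpy as np

def continuous_scores(forecast, observation):
    err = np.asarray(forecast) - np.asarray(observation)
    mae = np.mean(np.abs(err))                        # mean absolute error
    mbe = np.mean(err)                                # mean bias error
    corr = np.corrcoef(forecast, observation)[0, 1]   # Pearson correlation
    return mae, mbe, corr

def categorical_scores(forecast, observation, threshold):
    f = np.asarray(forecast) >= threshold             # forecast event yes/no
    o = np.asarray(observation) >= threshold          # observed event yes/no
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    pod = hits / (hits + misses) if (hits + misses) else np.nan
    far = false_alarms / (hits + false_alarms) if (hits + false_alarms) else np.nan
    return pod, far

# Hypothetical example: hourly temperature (continuous) and rain >= 0.1 mm (categorical)
t_fc, t_obs = np.array([2.1, 3.4, 1.8]), np.array([2.0, 3.0, 2.5])
r_fc, r_obs = np.array([0.0, 0.5, 1.2]), np.array([0.0, 0.0, 0.8])
print(continuous_scores(t_fc, t_obs))
print(categorical_scores(r_fc, r_obs, threshold=0.1))
```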

The MAE for the 12–35 h air temperature forecast of the seven weather forecast providers ranges between 1.26 K and 2.06 K, with three providers showing values below 1.5 K. The percentage of MAE values lower than 1.5 K for the 1-day forecast ranges between 35 % and 67 % across providers, revealing large differences in the post-processing routines of the different weather forecast providers.

Forecast quality decreases with increasing forecast horizon due to the physical limits of weather predictability. For example, the best provider for the 12–35 h air temperature forecast showed an MAE of 1.26 K for the first day, 1.34 K for the second day, and 1.43 K for the third day. This result additionally shows that the 3-day air temperature forecast of the best provider is better than the 1-day air temperature forecast of 5 of the 7 analysed weather forecast providers.

The results of this study show a broad range of quality among weather forecast providers, indicating that some providers rely on simple raw weather forecast models, whereas others reduce the errors through post-processing routines based on AI techniques and additional weather station data.

How to cite: Schlögl, S., Wenzel, C., and Gutbrod, K.: Verification of the weather forecast of seven weather forecast providers for 500 locations worldwide, EMS Annual Meeting 2023, Bratislava, Slovakia, 4–8 Sep 2023, EMS2023-382, https://doi.org/10.5194/ems2023-382, 2023.