OSA1.7 | Machine Learning in Weather and Climate
Conveners: Richard Müller, Gordon Pipa, Bernhard Reichert, Dennis Schulze, Gert-Jan Steeneveld, Roope Tervo
Orals
| Mon, 02 Sep, 09:00–16:00 (CEST) | Aula Magna
Posters
| Attendance Tue, 03 Sep, 18:00–19:30 (CEST) | Display Mon, 02 Sep, 08:30–Tue, 03 Sep, 19:30
Artificial Intelligence (AI) is revolutionizing the weather-prediction value chain and becoming a key technology for all climate-related sciences. This session focuses on machine learning techniques and aims to bring together research with a weather- and climate-related background and relevant contributions from the computer sciences using these techniques.

Contributions from all kinds of machine learning studies in weather and climate are encouraged, including but not limited to:

* Global and local weather prediction, including both NWP emulators and training the model directly from observations
* Postprocessing of Numerical Weather Prediction (NWP) data
* Nowcasting studies, studies using satellite data, radar data, and observational weather data
* Seasonal forecasts
* Climate-related studies, including dimensionality reduction of weather and climate data, extraction of relevant features
* Operational frameworks (MLOps), cloud ecosystems, and data flows related to AI
* Benchmark datasets and validation of the model outputs
* Quantifying the impacts of weather and climate, connecting meteorological data with non-meteorological datasets
* The human aspect -- how does AI change our work, organisations, and culture?

Orals: Mon, 2 Sep | Aula Magna

Chairpersons: Bernhard Reichert, Dennis Schulze
09:00–09:15
|
EMS2024-164
|
Onsite presentation
Pantelis Georgiades, Theo Economou, Yiannis Proestos, Jose Araya, Jos Lelieveld, and Marco Neira

Climate change presents challenges across various facets of life, significantly impacting both human and animal welfare. In agriculture, livestock farming stands out as a sector highly vulnerable to environmental stressors. This vulnerability necessitates the effective assessment and management of climate impacts to ensure the sustainability of agricultural productivity and livelihoods. Dairy farming, a crucial segment of the livestock industry, is notably sensitive to climatic variations. In the United States alone, economic repercussions from heat stress on dairy cattle are estimated to range between $1.5 and $1.7 billion annually.

The susceptibility of dairy cattle to climate conditions is influenced by the nexus of interactions among environmental elements, especially temperature and humidity, and biological parameters. This is compounded by the fact that contemporary breeds have been extensively genetically selected with a focus on maximizing milk production. The Temperature Humidity Index (THI), a straightforward and non-invasive measure, has been developed to estimate the level of thermal stress exerted on cattle by the aggregate impact of temperature and humidity. Calculating THI requires readily accessible climatic data, such as air temperature and relative humidity. The correlation of THI with physiological parameters of cattle has been extensively validated in the scientific literature.
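
For illustration, a widely used THI formulation based on air temperature and relative humidity is sketched below; the abstract does not state which THI variant the authors apply, so the exact coefficients here (an NRC-1971-type formula) are an assumption rather than a description of their method.

```python
def thi(temp_c, rh_percent):
    """Temperature Humidity Index (NRC-1971-type formulation; assumed variant).

    temp_c     -- air temperature in degrees Celsius
    rh_percent -- relative humidity in percent (0-100)
    """
    t_f = 1.8 * temp_c + 32.0  # air temperature in degrees Fahrenheit
    return t_f - (0.55 - 0.0055 * rh_percent) * (t_f - 58.0)
```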

Traditionally, THI values have been estimated on a daily basis due to the logistical and computational challenges associated with handling the large datasets required for finer temporal resolutions, and because of the coarse temporal resolution of data from conventional climate models. However, daily-level estimations fall short of accurately capturing the dynamic nature of thermal loads within a day or the cumulative effects over successive days, especially when night-time conditions do not facilitate effective heat dissipation.

To overcome these limitations, our study adopts an innovative approach by employing the Extreme Gradient Boosting (XGBoost) machine learning algorithm for temporally interpolating (downscaling) daily climate projections to hourly THI values. Utilizing the ERA5 reanalysis dataset for model training, which includes historical hourly data, we applied the model to generate hourly THI projections up to the century’s end. These projections, based on NASA NEX-GDDP-CMIP6 datasets, include twelve climate models and two Shared Socioeconomic Pathways (SSPs): SSP2-4.5 and SSP5-8.5, representing moderate and high-emissions scenarios, respectively.
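
As a minimal sketch of the temporal-downscaling idea described above (illustrative only; the feature layout, array shapes and hyperparameters are assumptions, not the authors' configuration), an XGBoost regressor can be trained on ERA5-derived daily aggregates plus the hour of day to predict hourly THI, and then applied to the daily NEX-GDDP-CMIP6 projections:

```python
import numpy as np
import xgboost as xgb

# Placeholder arrays standing in for ERA5-derived training data: columns might be
# daily min/max/mean temperature and humidity plus the hour of day to be predicted.
X_train = np.random.rand(10_000, 7)
y_train = np.random.rand(10_000) * 80.0   # hourly THI target computed from ERA5

model = xgb.XGBRegressor(n_estimators=500, max_depth=6, learning_rate=0.05)
model.fit(X_train, y_train)

# At projection time, the same daily predictors from the CMIP6-based fields are
# repeated for each of the 24 hours of a day to obtain an hourly THI curve.
X_one_day = np.random.rand(24, 7)
hourly_thi = model.predict(X_one_day)
```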

Through our analysis, we identified regions poised to be significantly impacted by climate change, where the implementation of mitigation strategies is critical to safeguarding animal welfare and minimizing economic losses stemming from reduced production and quality deterioration. Our study emphasizes the importance of developing and applying effective measures to reduce the impact of climate change on dairy farming, which is essential for improving resilience and sustainability in agriculture worldwide.

How to cite: Georgiades, P., Economou, T., Proestos, Y., Araya, J., Lelieveld, J., and Neira, M.: Future-Proofing Dairy Farms: Hourly Heat Stress Predictions with Machine Learning, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-164, https://doi.org/10.5194/ems2024-164, 2024.

09:15–09:30
|
EMS2024-167
|
Onsite presentation
Kaisa Ylinen and Andreas Tack

A machine learning based bias correction for MetCoOp temperature, wind speed and wind gust forecasts has been implemented at the Finnish Meteorological Institute (FMI). The MetCoOp model, with a horizontal resolution of 2.5 km, serves as the primary source of short-range forecasts at FMI. However, like many numerical weather prediction models, it suffers from systematic errors that can negatively affect forecast accuracy. To address this issue, we utilize eXtreme Gradient Boosting (XGB) to correct these systematic errors in wind and temperature forecasts. Although XGB effectively reduces the mean absolute error of the forecasts, it tends to underestimate high wind speeds. To mitigate this limitation, we additionally apply a quantile mapping (QM) bias-correction technique to the wind speed and wind gust forecasts.
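
A minimal sketch of empirical quantile mapping as mentioned above (illustrative only; the operational FMI implementation is not described at this level of detail in the abstract):

```python
import numpy as np

def quantile_map(forecast, train_forecast, train_obs, n_quantiles=101):
    """Map forecast values to the observed values at the same empirical quantile."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    fc_q = np.quantile(train_forecast, q)    # forecast climatology quantiles
    obs_q = np.quantile(train_obs, q)        # observed climatology quantiles
    return np.interp(forecast, fc_q, obs_q)  # corrected forecast, e.g. wind speed
```

Such a mapping stretches the forecast distribution towards the observed one, which is why it helps recover underestimated high wind speeds.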

Our model uses 2.5 years of training data containing various weather parameters as predictors, including time-lagged variables. Station- and time-dependent features, such as forecast hour and month, are also used. The model is trained to predict forecast errors at station points; these are subsequently gridded to the original resolution using the gridpp method. The bias correction is then applied to the original MetCoOp forecast fields of 10-m wind speed, wind gust, and 2-m temperature, resulting in more accurate forecast fields.

The machine learning-based model is currently in pre-operational use at FMI, with the primary aim of reducing the manual editing traditionally performed by meteorologists to correct forecast errors. Preliminary verification results indicate that bias-corrected forecasts have smaller errors on average compared to uncorrected MetCoOp forecasts. Importantly, the model improves the accuracy of wind alerts and warnings, thereby providing significant value in critical decision-making situations.

How to cite: Ylinen, K. and Tack, A.: Improving wind and temperature forecast accuracy using eXtreme Gradient Boosting and Quantile Mapping, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-167, https://doi.org/10.5194/ems2024-167, 2024.

09:30–09:45
|
EMS2024-192
|
Onsite presentation
Çağlar Küçük, Aitor Atencia, and Markus Dabernig

Precipitation nowcasting remains a challenging topic in weather prediction, particularly in the initial stages of convective activity. Ground-based weather radar observations have generally been used to estimate the motion vectors of precipitation fields, and have a pivotal role in precipitation nowcasting. However, during convection initiation, such data hold limited information, which hampers prediction performance. Nevertheless, different data streams, such as lightning activity and geostationary satellite infrared channels, have demonstrated skill in the early detection of convective activity. Therefore, there is a need to integrate data from various domains to improve nowcasting of convective precipitation, and data-driven approaches offer robust solutions for integrating large volumes of data and extracting the information therein. 

Here, we present a Transformer-based precipitation nowcasting model that integrates data from various sources. To train the model, we created a dataset by harmonising space- and ground-based observations with precipitation reanalysis and convective information data from the Integrated Nowcasting through Comprehensive Analysis (INCA) model over the spatial domain of INCA. Space-based observations include four infrared channels of the Meteosat Second Generation, while ground-based observations comprise lightning and weather radar data with 5-minute temporal resolution and a spatial resolution varying from 1 to 8 kilometres depending on the data source. The data is sampled over 5 years of observations from convective seasons to target convective precipitation events, which are particularly challenging for prediction. Trained on this dataset, our model can nowcast precipitation over the INCA domain for a lead time of 90 minutes. While the model reproduces the shape and location of the fields, the performance in reproducing the structure of the precipitation fields is limited at longer lead times, resulting in blurred predictions.

We will present the model, analyse its performance, present case studies, compare it against the operational INCA predictions, and provide insights through model interpretation experiments. We will also elaborate on the details of developing the dataset, a critical but often underrated step for enhancing model performance. The model offers a novel approach to integrated nowcasting of convective precipitation and motivates further studies with a data-driven perspective. 

How to cite: Küçük, Ç., Atencia, A., and Dabernig, M.: Integrated nowcasting of convective precipitation with Transformer-based models , EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-192, https://doi.org/10.5194/ems2024-192, 2024.

09:45–10:00
|
EMS2024-318
|
Onsite presentation
Adrienn Varga-Balogh, Ádám Leelőssy, László Varga, and Róbert Mészáros

This study investigates the applicability of the recently released (2023) GraphCast artificial intelligence (AI) model for weather forecasting in Hungary. We assess the model's accuracy in forecasting grid-point data at various locations across Hungary for a case study of the passage of a cold frontal zone in March 2023. We compared GraphCast predictions with observations and established weather models to evaluate its potential for weather forecasting.

GraphCast is a machine learning system for weather forecasting developed by DeepMind. Unlike traditional weather models that rely on complex physical equations, GraphCast leverages historical weather data to identify patterns and relationships between different locations. By feeding the model with data from two time points (6 hours apart), up to 7-day GraphCast predictions were made with 6-hour temporal and 1° spatial resolution. Model predictions were compared to the data of the nearest weather station.
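
A minimal sketch of the nearest-grid-point comparison step (file, variable and coordinate names are assumptions for illustration; the actual workflow is not specified in the abstract):

```python
import xarray as xr

ds = xr.open_dataset("graphcast_forecast.nc")   # hypothetical GraphCast output file

# Extract the forecast time series at the grid point nearest to a station,
# e.g. roughly Budapest (47.5 N, 19.0 E), for comparison with observations.
t2m_at_station = ds["t2m"].sel(latitude=47.5, longitude=19.0, method="nearest")
```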

Our analysis involves a detailed comparison between the modeled outputs from GraphCast and measurement data series collected from Hungarian meteorological stations, ERA5 reanalysis and the operational weather forecasts of the GFS numerical weather prediction model. The purpose of this comparison is to quantify the effectiveness of the model in reproducing grid-point data across various time scales. To strengthen the generalizability of our findings, we explore the sensitivity of the model's predictions to different initialization times and forecast lengths overlapping the arrival of the cold frontal zone.

The findings from this research will contribute to the ongoing evaluation of AI-based predictive models. By analyzing GraphCast's performance in a case study, we can determine its potential for application in weather forecasting.

The research was funded by the National Multidisciplinary Laboratory for Climate Change, RRF-2.3.1-21-2022-00014 project.

How to cite: Varga-Balogh, A., Leelőssy, Á., Varga, L., and Mészáros, R.: A Case Study for the Application of the GraphCast AI Model in Hungary, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-318, https://doi.org/10.5194/ems2024-318, 2024.

10:00–10:15
|
EMS2024-326
|
Onsite presentation
Alberto Sanchez-Marroquin, Jordi Barcons Roca, Omjyoti Dutta, Iciar Guerrero Calzas, Lorenzo Rossetto, Mirta Rodriguez Pinilla, and Fernando Cucchietti

Atmospheric deep convection can occur when the warming of the Earth’s surface by solar radiation leads to buoyant plumes that break through the mixed layer and produce vertical clouds reaching the tropopause. This phenomenon is associated with thunderstorms, heavy precipitation, hail, strong winds and other events that cause severe damage to life and property. However, representing deep convection and its associated events in models is challenging, as they depend on many high-resolution sub-grid processes which are difficult and expensive to simulate. As a consequence, approaches based on artificial intelligence, and especially Machine Learning (ML), have recently emerged to bypass some of these limitations of physical models. Here we discuss some of the ML methodologies implemented in the Convective Day Detector (CDD), a statistical model designed to identify hazardous convective events at ground level based on ERA5 reanalysis data.

First, we will describe the CDD, which is an ML classifier based on meteorological variables from the ERA5 reanalysis associated with deep convection, such as convective available potential energy, vertical wind velocity or specific humidity. The CDD is trained to find the relationship between these variables and the occurrence of severe weather events, such as hailstorms and severe wind, from observation-based report databases. The trained CDD is subsequently employed to infer the probability of the occurrence of these convective events beyond the training region, where observations are more limited or inconsistent, if available at all.

However, this modelling approach presents many challenges that need to be overcome. To start with, hazardous convective events are rare and difficult to measure in a consistent manner. This leads to a very unbalanced training dataset, with many positive cases remaining unlabelled. Therefore, we will discuss some ways to address these problems, such as undersampling, artificially filtering the storm database or positive-unlabelled learning methodologies. Additionally, the meteorological conditions that lead to the development of convective events differ depending on the location. As a consequence, we will also discuss transfer learning methodologies to apply a classifier trained in North America to different regions of the world, such as Europe, and how to validate the results with very scarce and inconsistent observations.
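
As a hedged sketch of the class-imbalance handling mentioned above (undersampling combined with a reweighted gradient-boosted classifier; all shapes, ratios and hyperparameters are illustrative assumptions, not the CDD configuration):

```python
import numpy as np
from xgboost import XGBClassifier

# X: ERA5-derived convective predictors (e.g. CAPE, vertical velocity, specific humidity);
# y: 1 for grid days with a reported hazardous convective event, 0 for unlabelled days.
X = np.random.rand(50_000, 10)
y = (np.random.rand(50_000) < 0.02).astype(int)

# Random undersampling of the (large) unlabelled class to a 5:1 ratio.
pos = np.where(y == 1)[0]
neg = np.random.choice(np.where(y == 0)[0], size=5 * len(pos), replace=False)
idx = np.concatenate([pos, neg])

clf = XGBClassifier(n_estimators=300, max_depth=5, scale_pos_weight=5.0)
clf.fit(X[idx], y[idx])
convective_day_probability = clf.predict_proba(X)[:, 1]
```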

How to cite: Sanchez-Marroquin, A., Barcons Roca, J., Dutta, O., Guerrero Calzas, I., Rossetto, L., Rodriguez Pinilla, M., and Cucchietti, F.: Machine Learning methodologies for identifying atmospheric deep convective events in ERA5, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-326, https://doi.org/10.5194/ems2024-326, 2024.

10:15–10:30
|
EMS2024-333
|
Onsite presentation
Athul Rasheeda Satheesh, Peter Knippertz, and Andreas Fink

Numerical Weather Prediction (NWP) models generally underperform compared to simpler climatology-based precipitation forecasts in northern Tropical Africa—in some regions, even after statistical postprocessing. Recently developed Artificial Intelligence (AI) weather models show promise in forecasting various meteorological variables, but so far, they largely avoid precipitation forecasts due to its complex nature. However, recent studies have demonstrated the efficacy of a logistic regression model trained on past days' rainfall data to predict daily rainfall occurrences, outperforming NWP ensemble forecasts by leveraging coherent rainfall patterns driven by synoptic-scale forcings like African Easterly Waves (AEWs). AEWs and other tropical waves play a crucial role in modulating synoptic-scale rainfall in tropical Africa, yet their explicit utilization for predicting daily rainfall amounts remains unexplored.

The present study addresses this gap by employing two machine-learning (ML) models—gamma regression and convolutional neural network (CNN)—trained solely on tropical wave predictors derived from satellite-based gridded precipitation data from Global Precipitation Measurement Integrated Multi-satellite Retrievals (GPM IMERG) to predict daily rainfall amounts. The predictor variables are computed from the local amplitude and phase information of seven types of tropical waves at the target and neighbouring grid points at 1° spatial resolution. The ML models are combined with the recently introduced Easy Uncertainty Quantification (EasyUQ) method to generate calibrated probabilistic forecasts, which are then compared with three benchmarks: a climatology-based forecast (Extended Probabilistic Climatology- EPC15), the European Centre for Medium-Range Weather Forecasts (ECMWF) operational ensemble forecast (ENS), and a probabilistic forecast derived from the ENS control member using EasyUQ (ENS EasyUQ). Our findings reveal that the ENS forecast exhibits poor skill relative to the EPC15 forecast across most parts of tropical Africa, primarily due to high miscalibration. While the ENS EasyUQ forecast shows considerable improvement over the ENS forecast, only marginal enhancement is achieved compared to the EPC15 forecast over land regions. In contrast, both gamma regression and CNN forecasts significantly outperform the benchmarks in most areas of tropical Africa and fail to achieve statistical significance only in the arid regions in the far North and over the equatorial Atlantic Ocean. Overall, the present study highlights the potential of ML models trained solely on tropical wave predictors to enhance daily precipitation forecasting in tropical Africa, offering valuable insights for improving operational forecasting systems in the region.
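
A minimal sketch of the gamma-regression component (here a generalized linear model with a gamma family and log link stands in; the predictor layout and data are illustrative assumptions, not the authors' implementation):

```python
import numpy as np
import statsmodels.api as sm

# X: tropical-wave predictors (e.g. local amplitude and phase of the seven wave types
# at the target and neighbouring grid points); y: positive daily rainfall amounts.
X = np.random.rand(5_000, 14)
y = np.random.gamma(shape=2.0, scale=3.0, size=5_000)

model = sm.GLM(y, sm.add_constant(X),
               family=sm.families.Gamma(link=sm.families.links.Log()))
result = model.fit()
expected_rainfall = result.predict(sm.add_constant(X))
```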

How to cite: Rasheeda Satheesh, A., Knippertz, P., and Fink, A.: A skilful 24-hour rainfall forecast for Africa based only on tropical wave observations, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-333, https://doi.org/10.5194/ems2024-333, 2024.

Coffee break
Chairpersons: Bernhard Reichert, Roope Tervo
11:00–11:15
|
EMS2024-368
|
Onsite presentation
Killian Pujol--Nicolas, Roberta Baggio, Jean-Baptiste Filippi, Dominique Lambert, Jean-François Muzy, and Florian Pantillon

Heavy Precipitation Events (HPE) can cause significant human fatalities and material damages. Therefore, their prediction is crucial but challenging due to the complex processes involved. In this context, artificial intelligence methods have recently been shown to be competitive with state-of-the-art Numerical Weather Prediction (NWP). Our work focuses on improving the prediction of the occurrence of HPE based on Neural Network (NN) models and using both observation and NWP data.

We use the MeteoNet open-source database from Meteo-France covering northwestern and southeastern France from 2016–2018, including station observations (OBS) and forecasts from the NWP models Arome and Arpege. We train an NN model to predict the occurrence of daily rainfall above a threshold of 10 mm / 24 h at the location of the stations. Our verification metric is the Peirce Skill Score (PSS), with Arome forecasts as a benchmark.
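
For reference, the Peirce Skill Score used as the verification metric can be computed from the contingency table of the binary event; this is the standard definition, sketched here for clarity:

```python
def peirce_skill_score(hits, misses, false_alarms, correct_negatives):
    """PSS = hit rate - false alarm rate; 1 is a perfect forecast, 0 is no skill."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate - false_alarm_rate
```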

Our results for both northwestern and southeastern regions are 1) the NN model using both OBS and NWP data as inputs has the highest PSS, 2) the NN model using only Arome data as input has higher PSS than the benchmark, 3) the NN model trained only with OBS data has lower PSS than the benchmark, showing the crucial contribution of NWP forecast data at a lead time of 24 h, and 4) due to the rarity of rainfall events meeting the threshold, training the NN model with a weighted loss function significantly increases the PSS.

When extending the results to shorter time scales, we find that the contribution of OBS data to the NN model is dominant at 1–3 h lead times, while including NWP forecast data helps to mitigate the degradation of prediction skill with longer lead time. Finally, the results for northwestern France show slightly lower PSS than for southeastern France due to the different rainfall climatology.

How to cite: Pujol--Nicolas, K., Baggio, R., Filippi, J.-B., Lambert, D., Muzy, J.-F., and Pantillon, F.: Improving prediction of heavy rainfall with Neural Networks using both observation and Numerical Weather Prediction data, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-368, https://doi.org/10.5194/ems2024-368, 2024.

11:15–11:30
|
EMS2024-451
|
Onsite presentation
Roman Attinger, Gabriela Aznar-Siguán, Hélène Barras, Johannes Landmann, Jivan Waber, Kathrin Wehrli, and Szilvia Exterde

Adverse weather conditions substantially affect aviation operations. In 2023, weather accounted for the largest fraction of en-route air traffic delays in the European network [1]. Alongside low visibility and strong winds, thunderstorms are one of the main causes of these delays. This is exacerbated during the summer months when both convective activity and air traffic demand are high. To anticipate the adverse effect of thunderstorms on air traffic, accurate information on the location and timing of convective initiation, as well as on the duration of convective activity, is required. However, convective developments at time scales greater than the nowcasting range still pose a great challenge even for convection-resolving numerical weather prediction (NWP) models.

To support air traffic management, MeteoSwiss is developing a range of novel products in close collaboration with the Swiss air navigation service provider and airports. Specifically, machine learning (ML) based approaches to forecast wind, visibility [2], and thunderstorms are developed that exploit the full potential of NWP data and observations. Moreover, solutions to improve the interpretability and usefulness of these probabilistic forecasts are implemented [3].

The presented work provides probabilistic thunderstorm predictions together with an estimation of cloud top height up to 33 hours in advance. The icosahedral non-hydrostatic (ICON) model, which is the newly operational convection-resolving ensemble prediction system at MeteoSwiss, forms the data basis of the approach. Thunderstorm probabilities are derived from relevant NWP parameters using convolutional neural networks (CNNs), as they are highly efficient in identifying spatial relationships in data. We train both a U-Net and a fully connected ResNet50 model on ICON re-forecast data from the convective seasons of the previous three years. The objective is defined as a binary classification problem using ground-based lightning observations as the target variable. Predictions are provided in the greater Alpine region and are updated in accordance with the operational NWP system every 3 hours.

We discuss the feature selection procedure for which we use a tree-based ML model to identify the most meaningful model ensemble statistics. We compare the performance in terms of skill and reliability of the CNN approaches with the direct model output of ICON for different lead times. Finally, insights and challenges regarding the use of the new product in operational air traffic flow and capacity management are presented.
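
A hedged sketch of the tree-based feature-selection step described above (ranking candidate ensemble statistics by impurity-based importance; the data shapes and the choice of a random forest are assumptions for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Candidate predictors: ensemble statistics (e.g. mean, spread, quantiles) of
# convection-related ICON parameters; target: lightning observed (1) or not (0).
X = np.random.rand(20_000, 40)
y = (np.random.rand(20_000) < 0.1).astype(int)

forest = RandomForestClassifier(n_estimators=200, n_jobs=-1)
forest.fit(X, y)

# Keep the ten most informative ensemble statistics as CNN input channels.
ranking = np.argsort(forest.feature_importances_)[::-1]
selected = ranking[:10]
```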

[1] https://www.eurocontrol.int/publication/performance-review-report-prr-2023-consultation
[2] abstract Wehrli et al., 2024, submitted to EMS 2024.
[3] abstract Landmann et al., 2024, submitted to EMS 2024.

How to cite: Attinger, R., Aznar-Siguán, G., Barras, H., Landmann, J., Waber, J., Wehrli, K., and Exterde, S.: Thunderstorm prediction using convolutional neural networks to support air traffic management in Switzerland, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-451, https://doi.org/10.5194/ems2024-451, 2024.

11:30–11:45
|
EMS2024-711
|
Online presentation
Irene Schicker, Petrina Papazek, Pascal Gfäller, Iris Odak Plenkovic, Ivan Vujec, Alexander Kann, and Kristian Horvath

With the increasing amount of wind and solar energy fed into the European power grid, despite slowdowns due to social acceptance and regulatory issues, and with the transition to fossil fuel-free energy production, accurate predictions including uncertainties are required for grid operators. For a grid relying heavily on renewable energy sources, frequently updated and as accurate as possible predictions at high temporal and spatial resolution support grid management and allow prevention measures to be taken in case of extreme meteorological events affecting the power production. Moreover, both extreme weather events across the nowcasting to weeks-ahead time scale and combinations of not-necessarily-extreme weather factors can notably affect the power grid. The latter is sometimes a combination of just-above-normal events, such as high solar penetration and high wind penetration plus decent hydropower, combined with reduced electricity demand. Thus, on-demand predictions of the expected power production are especially needed. Post-processing methods enable targeted forecasts of meteorological parameters at site and regional level, serving as a baseline for the conversion to power production, or even a direct conversion of NWP predictions and observations to power production.

However, most methods available so far use NWP forecasts with spatial resolutions between 2 and 9 km and hourly output frequency, thus requiring temporal interpolation as well as spatial downscaling or point interpolation. Recently, NWP upgrades towards sub-km and sub-hourly scales have been made to improve the prediction of extreme events. These NWP models are only available for selected extremes and short periods of time and often do not share the same parametrization throughout. This poses a challenge for post-processing for renewables, in addition to the uncertainties that lie in, e.g., wind farm specifications and solar farm/PV panels. To prepare for the next phase of NWP models and the on-demand extreme-event digital twin forecasting systems currently under development, fast and transferable post-processing methods are needed. Here, we look into different machine learning and classical statistical methods, such as the analog method, LSTMs, random forests, and EMOS, to generate post-processed forecasts for extreme events with a sparse training database. Furthermore, we investigate the transferability and generalisability of these methods when pre-trained with a coarser NWP model.

How to cite: Schicker, I., Papazek, P., Gfäller, P., Odak Plenkovic, I., Vujec, I., Kann, A., and Horvath, K.: Post-processing for wind and PV power production of hectometric NWP forecasts - which Machine Learning methods are beneficial for sparse data and extreme events?, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-711, https://doi.org/10.5194/ems2024-711, 2024.

11:45–12:00
|
EMS2024-889
|
Onsite presentation
Nikolaos Antonoglou, Manuel Werner, Ulrich Blahak, and Kathleen Helmert

Weather radar serves as a critical tool for effectively monitoring precipitation patterns and predicting severe weather events. In recent years, X-band radar systems have gained prominence due to their exceptional spatial resolution, providing detailed insights into convective systems and localized precipitation features. However, their limited coverage area and higher attenuation rates pose challenges for comprehensive meteorological analysis. In contrast, C-band radar, commonly utilized by meteorological agencies, offers broader coverage but with reduced resolution, limiting its ability to capture finer-scale weather phenomena effectively.

The German Weather Service (Deutscher Wetterdienst – DWD) operates a network of 17 dual-polarization C-band radars and aims at installing four additional X-band systems. This study focuses on the homogenization of multiple-frequency observations. Our approach involves employing machine learning algorithms to transform X-band radar data into a format analogous to C-band observations. By integrating machine learning into the mapping process, we aim to enhance the utility of X-band radar data for broader meteorological applications within the framework of established C-band meteorological services. Challenges addressed in this research include accurately scaling reflectivity measurements, mitigating attenuation effects at different frequencies, and validating the mapped data against ground-based disdrometer observations to ensure reliability and accuracy.

This mapping is mandatory for the integration of the future radar observations into the processing chain of the DWD. Until the installation of the systems is finished, we utilize measurements from the Low-Level Wind Shear Alert Systems (LLWAS) at the international airports of Frankfurt and Munich, which are also X-band. Moreover, we generate transformation equations using all principal moments (e.g. reflectivity, differential reflectivity, specific differential phase, etc.).
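
As a minimal sketch of such a data-driven mapping (a gradient-boosted regression from X-band dual-polarization moments to C-band-equivalent reflectivity; the abstract speaks of transformation equations, so the actual DWD approach may be simpler or different, and all data here are placeholders):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical matched samples: X-band moments (Z_H, Z_DR, K_DP, rho_HV) as predictors,
# collocated C-band reflectivity as the target.
x_band_moments = np.random.rand(10_000, 4)
c_band_reflectivity = np.random.rand(10_000) * 50.0

mapper = GradientBoostingRegressor(n_estimators=300, max_depth=4)
mapper.fit(x_band_moments, c_band_reflectivity)
c_band_equivalent = mapper.predict(x_band_moments)
```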

How to cite: Antonoglou, N., Werner, M., Blahak, U., and Helmert, K.: Mapping X-band Weather Radar Observations to C-band for Homegeneous Meteorological Analysis, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-889, https://doi.org/10.5194/ems2024-889, 2024.

12:00–12:15
|
EMS2024-820
|
Onsite presentation
Olav Ersland, Thomas Nils Nipen, and Ivar Ambjørn Seierstad

We introduce a neural network for predicting precipitation phases across Norway. This method is trained on observation data from 52 airports throughout the country, with data from 3 winter seasons.

One of our main concerns is to have a method that works well in operation on Yr.no. It needs to be fast and give intuitive results, so that it is easy to judge whether the method gives reasonable predictions.

The neural network uses predictors from the MetCoOp Ensemble Prediction System (MEPS) that forecasts weather across Norway. We use three predictor variables: air temperature at 1500 meters above the ground, air temperature at 200 meters above the ground, and the wet-bulb temperature at 2 m above the ground. The model output is the probability of snow for any given point in time and space, given that there is precipitation present.

The problem itself is a classification problem, so we have used a TensorFlow model with a sigmoid activation function in the output layer, together with a binary cross-entropy loss function. Beyond that, we have tested different configurations of hidden layers. It turns out that a simple configuration performs well and is robust with respect to overfitting.

Until now, MET Norway has used a simple threshold-based method in operation, based on the wet-bulb temperature at 2 m above the ground. This neural network approach offers a simple and data-driven alternative, and improves the forecast in difficult weather scenarios.

To evaluate the method, we apply metrics such as accuracy and the Brier score to compare our results with previous methods.
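
A minimal sketch of the classification setup described above (sigmoid output with a binary cross-entropy loss in TensorFlow); the hidden-layer size and optimizer are assumptions, as the abstract only states that a simple configuration works well:

```python
import tensorflow as tf

# Three predictors: T at 1500 m, T at 200 m, and wet-bulb T at 2 m above ground.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of snow
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=...)
```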

How to cite: Ersland, O., Nipen, T. N., and Seierstad, I. A.: Precipitation Phase Prediction in Norway Using Neural Networks, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-820, https://doi.org/10.5194/ems2024-820, 2024.

12:15–12:30
|
EMS2024-849
|
Onsite presentation
Ivan Vujec, Iris Odak Plenković, Irene Schicker, and Jakov Lozuk

NWP models have been crucial for modern weather forecasting for a long time. Although their skill is improving, the errors they exhibit can still be substantial. Additional forecast improvement is often obtained by applying statistical and machine learning (ML) post-processing techniques, especially for locations where measurements are available. Post-processing of wind speed using the analog-based method has been successfully implemented and analyzed at DHMZ for a long time. Considering the relatively recent surge of various other machine learning techniques, the next logical step is to combine the analog method with them to bring further improvements. In this work, we are trying to determine whether the output of the analog method can successfully be used as an input to deep-learning and gradient-boosting methods in order to benefit from both approaches.

The raw NWP model used in this work is the ALADIN model with 2 and 4 km horizontal resolutions. The analog-based method is first applied to the raw NWP, where the method includes weight optimization and the correction for more extreme values, as well as a novel variation with varying ensemble sizes. Then, the deep-learning and gradient-boosting methods are fed with both the analog ensemble and the raw NWP output. Besides using all the analog ensemble members, the analog ensemble can also be characterized by descriptive values, which reduces the number of machine learning predictors. The forecasts are verified against wind speed measurements across the Republic of Croatia.
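
A minimal sketch of the analog-ensemble idea that feeds the ML models (weighted predictor distances to past NWP forecasts; the variable names and the distance formulation are illustrative assumptions):

```python
import numpy as np

def analog_ensemble(current_nwp, historical_nwp, historical_obs, weights, n_analogs=20):
    """Return observed wind speeds from the past cases whose NWP predictors are
    closest (in weighted Euclidean distance) to the current NWP forecast."""
    diff = (historical_nwp - current_nwp) * weights
    distance = np.sqrt((diff ** 2).sum(axis=1))
    members = historical_obs[np.argsort(distance)[:n_analogs]]
    return members   # the members, or summary statistics of them, become ML predictors
```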

The continuous and categorical approaches are used for the verification of the hourly wind speed forecasts. In the categorical approach, verification is also performed for both common and more extreme events. Additionally, verification is also performed for different types of stations. Results show a clear benefit of using the analog method as a generator of additional ML predictors to raw NWP.  The exact improvement is dependent on the type of predictors used in the process. Finally, the results can be fine-tuned, depending on the main goal. Since the extreme events are particularly hard to predict, the variation which is able to further improve the performance for such events is emphasized.

In conclusion, this work demonstrates the potential of integrating analog methods with machine learning techniques to improve wind speed forecasting. By combining the strengths of both approaches, the enhancements in forecast accuracy are achieved, even for extreme events. These findings underscore the importance of exploring novel methodologies to advance weather prediction capabilities and mitigate the impact of severe weather events.

How to cite: Vujec, I., Odak Plenković, I., Schicker, I., and Lozuk, J.: Integrating Analog Methods with Other Machine Learning Techniques to Further Enhance Wind Speed Forecasting, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-849, https://doi.org/10.5194/ems2024-849, 2024.

12:30–12:45
|
EMS2024-870
|
Onsite presentation
Kathrin Wehrli, Roman Attinger, Hélène Barras, Johannes Marian Landmann, Gabriela Aznar-Siguan, Szilvia Exterde, Melanie Irrgang, Thomas Jordi, Thomas Reiniger, and Claudia Stocker

Airport operations, including runway capacity, delays, and aircraft handling, are tightly coupled to weather. Taking preparatory actions for different weather conditions, such as changing wind regimes, visibility reductions, and thunderstorms, is a central part of air traffic management. This requires meteorological predictions that enhance situational awareness and increase plannability.

Within the MeteoSwiss AVIA26 project, new aviation weather products are developed to forecast thunderstorms [1], wind, and visibility conditions using a machine learning (ML) approach. They provide predictions with high timeliness, short update cycles, spatial representativeness, and information on occurrence probability. Insights into how the predictions are visualized, disseminated and communicated to stakeholders in order to support decision making are given by Landmann et al. [2].

In this contribution, we will focus on the development of an ML-based time series model for predicting visibility at Zurich Airport. We employ the Temporal Fusion Transformer (TFT) model [3], which is an interpretable, multi-horizon, attention-based transformer model for time series. Probabilistic visibility predictions are generated at a 10-minute resolution for the first three hours, and an hourly resolution thereafter up to a lead time of 33 hours. The predictors include visibility measurements, other meteorological station measurements, remote sensing data from satellite, and numerical weather model output. Thanks to the near real-time measurements at and near the airport, predictions can be updated every 10 minutes to reflect the ongoing meteorological tendencies. Both vertical and horizontal visibility time series are predicted simultaneously, resulting in consistent information on visibility regimes at the airport.

We investigate the importance of different predictors and optimize the ML model architecture, considering also tree-based models to test the validity of the TFT model for the use case. The performance of the ML-based prediction is compared against the current deterministic forecast, which is generated from post-processed numerical weather prediction and has an hourly granularity. We find a better performance of the ML-based prediction, particularly for relevant visibility thresholds for aviation. Furthermore, the faster update frequency and probabilistic character make it more helpful for planning and decision making in air traffic management.

 

[1] abstract Attinger et al., 2024, submitted to EMS 2024

[2] abstract Landmann et al., 2024, submitted to EMS 2024.

[3] Lim et al., 2021, https://doi.org/10.1016/j.ijforecast.2021.03.012

How to cite: Wehrli, K., Attinger, R., Barras, H., Landmann, J. M., Aznar-Siguan, G., Exterde, S., Irrgang, M., Jordi, T., Reiniger, T., and Stocker, C.: Using machine learning to enhance visibility predictions at Zurich Airport, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-870, https://doi.org/10.5194/ems2024-870, 2024.

12:45–13:00
|
EMS2024-955
|
Onsite presentation
Gerrit Hein

Increasing amounts of energy need to be transported through the electric grids, leading to congestion and load-shedding.

Dynamic line rating (DLR) is a cost-effective method that allows transmission system operators to increase the current carrying capacities of electric grids beyond fixed limitations. This helps to reduce congestion and load-shedding. DLR considers environmental conditions, such as the weather, to ensure the safe and optimal operation of the grid. By implementing DLR, the grid can effectively transport increasing amounts of energy while minimizing the risk of disruptions.

Line ratings are commonly determined using meteorological data collected from weather stations near transformer stations. While these measurements offer localized insights into prevailing conditions, they may suffer from sensor inaccuracies and can be insufficient given the extensive distribution network area. Consequently, supplementary data from advanced weather models like ICON-D2 is utilized to offer a comprehensive overview of day-ahead weather conditions spanning the grid’s topology. However, despite the D2 model’s resolution of approximately 2.2 km, its granularity may prove inadequate for capturing nuanced, small-scale variations critical for predicting extreme values.

This study sought to investigate whether machine learning techniques could leverage the advantages of both traditional forecasting methods and modern data-driven approaches to deliver accurate predictions at the station level.

Our approach employs a dual two-step LSTM prediction methodology. Initially, GIS data such as fractional land cover or a topographic position index (TPI) are integrated with climatological information to generate forecasts for the target time series. Subsequently, the output of the first LSTM network undergoes a second training loop, where actual weather forecasts from the weather model are incorporated and aligned with the measurement data.
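
A hedged sketch of a two-step LSTM arrangement in the spirit of the description above (layer sizes, input dimensions and the exact coupling between the two stages are assumptions; the operational setup is not specified at this level of detail):

```python
import tensorflow as tf

timesteps, n_clim, n_gis, n_nwp = 24, 4, 6, 3   # assumed dimensions

# Stage 1: climatology-driven forecast, conditioned on static GIS descriptors
# (e.g. fractional land cover, topographic position index).
clim_in = tf.keras.Input(shape=(timesteps, n_clim))
gis_in = tf.keras.Input(shape=(n_gis,))
h = tf.keras.layers.LSTM(32)(clim_in)
h = tf.keras.layers.Concatenate()([h, gis_in])
stage1 = tf.keras.Model([clim_in, gis_in], tf.keras.layers.Dense(timesteps)(h))

# Stage 2: refine the stage-1 first guess with day-ahead NWP forecasts (e.g. ICON-D2).
nwp_in = tf.keras.Input(shape=(timesteps, n_nwp))
guess_in = tf.keras.Input(shape=(timesteps, 1))   # stage-1 output, reshaped per hour
h2 = tf.keras.layers.LSTM(32)(tf.keras.layers.Concatenate()([nwp_in, guess_in]))
stage2 = tf.keras.Model([nwp_in, guess_in], tf.keras.layers.Dense(timesteps)(h2))

stage1.compile(optimizer="adam", loss="mse")
stage2.compile(optimizer="adam", loss="mse")
# Train stage 1 against station measurements first, then stage 2 on NWP forecasts
# plus the stage-1 output, again against the measurements.
```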

Our focus primarily centered on properties like wind speed and temperature, given their greater influence on the heating and cooling of power lines. We explored various network configurations and experimented with different initialization schemes, facilitating adjustments for extreme values to enhance balance within the system.

We conducted a comparative analysis by juxtaposing our predictions with baseline outcomes derived from the error between day-ahead forecasts generated by the weather model at the weather station. The results revealed an improvement, showcasing an 11% reduction in Root Mean Square Error (RMSE) across the board.

Our findings demonstrate the robust efficacy of our method, presenting substantial enhancements with minimal preprocessing and training requirements. This resilience ensures uninterrupted network operation, even in scenarios where stations may fail or be unavailable at specific grid points. Ultimately, our approach contributes to boosting the current carrying capacity within the power grid. By enabling more accurate assessment of future meteorological conditions, our method facilitates improved planning and optimization of energy transportation, thereby enhancing grid reliability and efficiency.

How to cite: Hein, G.: Enhancing day ahead point forecasts with additional GIS information and a sequential dual LSTM approach, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-955, https://doi.org/10.5194/ems2024-955, 2024.

Lunch break
Chairperson: Roope Tervo
14:00–14:15
|
EMS2024-959
|
Onsite presentation
Matej Choma, Matej Murín, Jakub Bartel, Milly Troller, Petr Šimánek, and Michal Najman

Effective short-term weather forecasting is vital for informed decision-making during severe weather events to mitigate their impact. Traditional numerical weather prediction (NWP) models often face challenges in accurately predicting rapidly evolving weather phenomena. This study introduces an innovative approach that uses artificial intelligence (AI) to post-process NWP forecasts for the near future with respect to the latest available weather measurements. In the scope of this work, our solution leverages real-time synoptic scale meteorological station measurements, radar reflectivity data, and satellite imagery to post-process Global Forecast System (GFS) predictions for the Central Europe area. By fusing these diverse data sources, both the accuracy and resolution of the input GFS predictions are enhanced, offering an increase in prediction step resolution from 3 hours to 1 hour and an update of the forecasts with the most recent measurements every 30 minutes. Our solution internally uses a deep neural network trained to post-process GFS predictions to mimic ERA5 reanalysis as closely as possible. The predicted variables are total accumulated precipitation, temperature 2 meters above the ground, and wind gusts. However, in theory, the presented approach is not limited to the abovementioned set of input data or target variables. The model achieves up to 2.5 times lower mean absolute error compared to baseline forecasts, showcasing its effectiveness in capturing real-time weather dynamics. Moreover, the model exhibits the capability for rapid updates as new weather measurements become available, continuously refining predictions. This dynamic adaptability ensures that forecasts remain relevant and accurate, even in rapidly changing weather conditions. Alongside the quantitative evaluation against the ERA5 data, we will present a case study showcasing the usefulness of the post-processed forecasts in specific weather situations.

How to cite: Choma, M., Murín, M., Bartel, J., Troller, M., Šimánek, P., and Najman, M.: Rapid Update NWP Postprocessing with AI and Real-Time Measurements, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-959, https://doi.org/10.5194/ems2024-959, 2024.

14:15–14:30
|
EMS2024-1097
|
Onsite presentation
Icíar Lloréns Jover, Francesco Zanetta, Francesco Isotta, Daniele Nerini, Christian M. Grams, and Cornelia Schwierz

This talk addresses the challenge of generating high-resolution wind climatology maps for Switzerland, a region characterized by sparse measurement stations and complex mountainous terrain. Accurately mapping wind patterns in such areas is inherently difficult due to the nonlinear and rapidly changing nature of wind flow, compounded by diverse and abrupt relief. Existing numerical models also lack the spatial granularity and temporal resolution necessary for comprehensive wind mapping.

Our objective is twofold. Firstly, we aim to create more detailed wind maps that closely align with observations. Leveraging comprehensive orography maps, topographic descriptors, and numerical model outputs, we seek to downscale wind maps to achieve finer spatial resolution. Secondly, we intend to utilize these downscaled wind maps to compute wind climatology maps, focusing on maximum wind gusts and mean wind velocity, both hourly and daily. These wind climatology maps are invaluable for a plethora of applications, such as renewable energy planning, infrastructure development, and environmental monitoring, aiding stakeholders in informed resource allocation and decision-making.

We propose employing machine learning techniques, specifically Gaussian Processes (GPs) and Neural Processes (NPs), to address these challenges. Both present a compelling approach for wind map downscaling, as both learn from small datasets and sparse observations to interpolate wind data and generalize to unseen locations. GPs offer a probabilistic framework to model complex relationships between inputs and outputs. By pairing sparse observations, high-resolution topographic descriptors and knowledge of the dynamics via a numerical model output, GPs can infer wind patterns across the territory with improved accuracy and spatial detail. Additionally, GPs inherently provide uncertainty estimates crucial for determining confidence in predictions. Finally, GPs rely on explicit prior knowledge and modeling assumptions, making their predictions interpretable. NPs, on the other hand, leverage neural networks to learn complex, non-linear relationships between inputs and outputs. Furthermore, NPs are scalable and highly efficient in handling large-scale datasets, and their model-less nature allows for more flexibility in modelling complex distributions.
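
A minimal sketch of the Gaussian-process component (the kernel choice, feature layout and data are assumptions for illustration, not the authors' configuration):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

# Inputs at station locations: e.g. topographic descriptors plus coarse NWP wind;
# target: observed hourly mean wind speed.
X_stations = np.random.rand(300, 5)
y_wind = np.random.rand(300) * 10.0

kernel = 1.0 * Matern(length_scale=np.ones(5), nu=1.5) + WhiteKernel()
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_stations, y_wind)

# Predict on the high-resolution target grid, with a per-point uncertainty estimate.
X_grid = np.random.rand(5_000, 5)
mean_wind, wind_std = gp.predict(X_grid, return_std=True)
```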

Integration of these machine learning techniques with domain-specific knowledge and data sources will enable the development of robust and accurate models for generating high-resolution wind climatology maps. These maps will closely align with observational data, advancing our understanding of local wind patterns and their impact on various applications. This approach aims to foster interdisciplinary collaboration and innovation in this field.

Note: Drafting the initial version of this abstract has been aided by AI tools.

How to cite: Lloréns Jover, I., Zanetta, F., Isotta, F., Nerini, D., Grams, C. M., and Schwierz, C.: Machine Learning Approaches for High-Resolution Wind Climatology Mapping in Switzerland, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-1097, https://doi.org/10.5194/ems2024-1097, 2024.

14:30–14:45
|
EMS2024-2
|
Onsite presentation
Qifeng Qian, Xiaojing Jia, and Yanluan Lin

Due to a lack of observations and limited understanding of the complex mechanisms of tropical cyclone (TC) genesis, the possible TC activity response to future climate change remains controversial. Previous studies have divergent opinions on how TC activity responds to climate change, as TC activities can be impacted by many environmental variables. One advantage of machine learning (ML) methods is that, compared to traditional analysis methods, they can capture the complex nonlinear relationship between the predictor and predictand; therefore, before the theory of TC genesis is well established, in addition to traditional climate models, ML methods may provide an additional useful tool to predict the possible changes in TC frequency under global warming. Moreover, ML models may also provide more information about the nonlinear characteristics of TC genesis, which will help to improve our understanding of TC genesis mechanisms. In this work, a machine learning model, called the maximum entropy (MaxEnt) model, is established using various environmental variables. The model performs slightly better than the genesis potential index for historical TC activities based on the spatial correlation coefficient. Using Coupled Model Intercomparison Project phase 6 model projections, the MaxEnt model predicts a statistically significant decreasing trend of TC genesis probability under all shared socioeconomic pathway scenarios. Further analysis reveals that potential intensity (PI) is the most important environmental variable in the MaxEnt model, providing the most unique and useful information to the model. In addition, our analysis reveals that TC genesis might have a complex nonlinear relationship with PI, which is different from the positive relationship reported in previous studies and might be the key factor leading the model to predict reduced TC genesis in the future. We further apply principal component analysis to investigate the TC genesis environment in different ocean basins and show that the TC genesis environments are mainly determined by upper- and lower-level absolute vorticity. The TC genesis environments in the basins are classified into three groups, and three ML models are built accordingly. These basin-wide models predict a consistent TC genesis trend in each basin under different future scenarios. Further analysis highlights the importance of absolute vorticity for basin-wide TC genesis. A multivariate environmental similarity surface analysis reveals that climate models predict the weakest change in the TC genesis environment in the North Atlantic compared to other basins.

How to cite: Qian, Q., Jia, X., and Lin, Y.: The possible change of global and regional tropical cyclone genesis probability in the future as predicted by machine learning models, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-2, https://doi.org/10.5194/ems2024-2, 2024.

14:45–15:00
|
EMS2024-238
|
Onsite presentation
Xiao Yan Huang, Hua Sheng Zhao, Yu Shuang Wu, Li He, and Ying Huang

This study focused on predicting the near-surface maximum wind speed using the eXtreme Gradient Boosting (XGBoost) model based on k-nearest neighbor mutual information feature selection. Data from 93 meteorological stations in Guangxi Province, with a temporal resolution of 3 h, were used for the prediction. By examining the effects of various dynamic and thermal factors, such as high-altitude and surface variables, on the prediction of maximum wind speed, a novel XGBoost-based prediction model for maximum wind speed was proposed. The model incorporates the k-nearest neighbor mutual information feature selection algorithm to choose the most relevant factors for accurate wind speed prediction. The design of the prediction model contains two main improvements. First, a stepwise variable selection algorithm based on k-nearest neighbor mutual information estimation was employed, which selects relevant variables and removes weakly relevant variables in two steps, effectively eliminating redundant prediction characteristics that affect accuracy by screening the primary predictors and retaining important forecasting factors. Second, a Bayesian optimization algorithm was used to optimize the parameters of the XGBoost model, significantly enhancing the model's generalizability. The optimized and improved prediction model was used to model the near-surface maximum wind speed for 6 forecast lead times (12-72 h) at the 93 meteorological stations. Comparative results of various forecast experiments using independent prediction samples from 2020 to 2021 demonstrated that the new model reduced the average mean absolute error (MAE) by 18.9% to 30.06% for the prediction results of the 93 stations. The root mean square error (RMSE) decreased by 40.18% to 65.83%. For the prediction of maximum wind speeds exceeding level 6, the MAE was reduced by 40.41%, 25.93%, 19.96%, 21.39%, 12.39%, and 8.55% for the 6 forecast lead times, respectively. The RMSE also decreased by 30.92%, 18.67%, 12.29%, 12.21%, 7.92%, and 2.39% for the respective lead times.
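
A hedged sketch of the two ingredients named above, combining a k-nearest-neighbour mutual-information ranking of candidate predictors with an XGBoost regressor (scikit-learn's mutual_info_regression uses a kNN-based estimator; shapes, the number of retained predictors and hyperparameters are illustrative assumptions):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from xgboost import XGBRegressor

# Candidate predictors: upper-air and surface dynamic/thermodynamic factors;
# target: observed 3-hourly maximum wind speed at a station.
X = np.random.rand(20_000, 60)
y = np.random.rand(20_000) * 30.0

mi = mutual_info_regression(X, y, n_neighbors=3)   # kNN-based MI estimate per predictor
keep = np.argsort(mi)[::-1][:20]                   # retain the most informative factors

model = XGBRegressor(n_estimators=400, max_depth=6, learning_rate=0.05)
model.fit(X[:, keep], y)   # hyperparameters could in turn be tuned by Bayesian optimization
```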

How to cite: Huang, X. Y., Zhao, H. S., Wu, Y. S., He, L., and Huang, Y.: An Intelligent Forecasting Method for Near-Surface Extreme Wind Speed Based on K-Nearest Neighbor Mutual Information Feature Selection, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-238, https://doi.org/10.5194/ems2024-238, 2024.

15:00–15:15
|
EMS2024-246
|
Onsite presentation
Hong Lu, Yi Ou, Chuan Qin, and Long Jin

On the basis of the daily temperature and precipitation data of Guangxi and NCEP/NCAR reanalysis and forecast field data, this paper addresses the significant nonlinearity and temporal variability of the forecast quantity series, as well as the overfitting that can easily appear when forecast modeling relies on a single fuzzy neural network model with many adjustable parameters that are difficult to determine objectively. To this end, an ensemble forecasting model based on fuzzy neural network bagging is developed for the 72-hour forecast of low-temperature chilling injury. The forecast results for independent samples show that, with the same forecast modeling samples (N = 299) and forecasting factors (M = 9), the fuzzy neural network bagging ensemble forecasting model obtains a mean absolute error of 13.91. By contrast, the mean absolute errors of the single fuzzy neural network forecasting model and the linear regression forecast are 15.82 and 18.13, respectively. The error of the fuzzy neural network bagging ensemble forecast is thus lower by 12.07% and 23.27%, respectively, compared with the latter two methods, showing better forecasting skill. This improved performance is mainly because the ensemble members of the bagging model are trained on bootstrap resamples drawn with replacement, so that different ensemble members are obtained. Analyses of the new scheme suggest that the forecast accuracy of the ensemble prediction model has been improved by enhancing the prediction ability and diversity of the individual ensemble members. Therefore, the generalization capacity of the intelligent computing ensemble prediction model has been significantly enhanced, as has the forecast stability of the fuzzy neural network bagging ensemble forecasting model. Thus, this model has better applicability in forecasting nonlinear low-temperature chilling injury.
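
As a minimal sketch of the bagging principle referred to above (bootstrap resamples drawn with replacement, one member per resample, averaged predictions); a generic multilayer perceptron stands in for the fuzzy neural network, which is an assumption for illustration:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.neural_network import MLPRegressor

X = np.random.rand(299, 9)        # 299 modelling samples and 9 predictors, as in the abstract
y = np.random.rand(299) * 30.0    # placeholder chilling-injury target

ensemble = BaggingRegressor(
    estimator=MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000),  # proxy member model
    n_estimators=20,
    bootstrap=True,               # each member sees a different resample with replacement
)
ensemble.fit(X, y)
prediction = ensemble.predict(X)  # average of the member forecasts
```

(The `estimator` keyword requires scikit-learn 1.2 or newer; older versions use `base_estimator`.)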

How to cite: Lu, H., Ou, Y., Qin, C., and Jin, L.: A Fuzzy Neural Network Bagging Ensemble Forecasting Model for 72-hour Forecast of Low-temperature Chilling Injury, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-246, https://doi.org/10.5194/ems2024-246, 2024.

15:15–15:30
|
EMS2024-248
|
Onsite presentation
Ying Huang, Xiao Yan Huang, Hua Sheng Zhao, and Yu Shuang Wu

Current short-term climate prediction of monthly precipitation lacks nonlinear data-mining techniques and objective machine learning ensemble forecasting methods. A new nonlinear deep learning ensemble objective forecasting model has been established by generating multiple long short-term memory (LSTM) neural networks with the same expected output as the individual forecasters, and by using the cooperative-game Shapley value method to determine the weight coefficient of each forecaster in the ensemble forecast. Forecast modeling for the monthly precipitation model has been studied based on July precipitation samples from 81 meteorological stations in Guangxi from 1960 to 2023, using height fields, temperature fields, and the sea surface temperature field as the basic forecasting factors for monthly precipitation. The experimental results show that, under the same forecast modeling samples and forecast factor conditions, the newly established prediction model has higher predictive ability than linear stepwise regression prediction methods and a single LSTM model, demonstrating its applicability to nonlinear monthly precipitation prediction problems. Further analysis reveals that the introduction of storage unit states and gate structures in the hidden layer of the LSTM model enables the network to retain long-term states, making it more suitable for handling and predicting important problems with relatively long intervals and delays in time series. The Shapley value method can improve the predictive ability of the ensemble members and enhance their diversity, thereby improving the accuracy of the ensemble forecasting model. Therefore, the generalization ability of this deep learning ensemble forecasting model is significantly improved, and the improvement of its forecasting ability has a reasonable analytical basis. Unlike general neural network methods, the model shows no overfitting in practical short-term climate prediction applications and has good practical application value.
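
A hedged sketch of how exact Shapley values can be computed for a small number of ensemble members, using a coalition value defined as the negative mean squared error of the equally weighted blend; the value function, the conversion into combination weights and all data are assumptions for illustration, not the authors' scheme:

```python
import math
from itertools import combinations
import numpy as np

def coalition_value(members, y_true):
    """Skill of a coalition: negative MSE of its equally weighted mean forecast."""
    if not members:
        return 0.0
    blend = np.mean(members, axis=0)
    return -np.mean((blend - y_true) ** 2)

def shapley_values(forecasts, y_true):
    """Exact Shapley value of each member for a small ensemble (enumerates all subsets)."""
    n = len(forecasts)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                gain = (coalition_value([forecasts[j] for j in subset + (i,)], y_true)
                        - coalition_value([forecasts[j] for j in subset], y_true))
                phi[i] += math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n) * gain
    return phi

# The phi values can then be shifted and normalised into non-negative ensemble weights.
```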

How to cite: Huang, Y., Huang, X. Y., Zhao, H. S., and Wu, Y. S.: A Monthly Precipitation Ensemble Prediction Model Based on LSTM and Shapley Values, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-248, https://doi.org/10.5194/ems2024-248, 2024.

15:30–15:45
|
EMS2024-829
|
Onsite presentation
Tim Radke, Susanne Fuchs, Christian Wilms, Iuliia Polkova, and Marc Rautenhaus

Detection of atmospheric features in gridded datasets from numerical simulation models is typically done by means of rule-based algorithms. Recently, the feasibility of learning feature detection tasks with supervised convolutional neural networks (CNNs) has also been demonstrated. This approach corresponds to semantic segmentation tasks widely investigated in computer vision. However, while in recent studies the performance of CNNs was shown to be comparable to that of human experts, CNNs are largely treated as a “black box”, and it remains unclear whether they learn the features for the correct reasons. Here we build on the recently published “ClimateNet” dataset that contains features of tropical cyclones and atmospheric rivers as detected by human experts. We adapt the explainable artificial intelligence technique “Layer-wise Relevance Propagation” (LRP) to the feature detection task and investigate which input information CNNs with the Context-Guided Network (CG-Net) and U-Net architectures use for feature detection. We find that both CNNs indeed consider plausible patterns in the input fields of atmospheric variables, which helps to build trust in the approach. We also demonstrate the application of the approach to finding the most relevant input variables and evaluating detection robustness when changing the input domain. However, LRP in its current form cannot explain shape information used by the CNNs, and care needs to be taken regarding the normalization of input values, as LRP cannot explain the contribution of bias neurons, which accounts for inputs close to zero. These shortcomings need to be addressed by future work to obtain a more complete explanation of CNNs for geoscientific feature detection.
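To make the relevance-propagation idea concrete, the toy sketch below applies the epsilon rule of LRP to a small two-layer ReLU network in NumPy. This is not the authors' adaptation to CG-Net or U-Net segmentation; weights, the input, and the network size are arbitrary placeholders chosen only to show how relevance is redistributed layer by layer back to the inputs.

```python
# Toy illustration of the epsilon rule of Layer-wise Relevance Propagation (LRP)
# for a small two-layer ReLU network. Not the authors' CG-Net/U-Net adaptation;
# weights and input are random placeholders.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=8)                         # input "pixel" values
W1, b1 = rng.normal(size=(8, 6)), np.zeros(6)
W2, b2 = rng.normal(size=(6, 1)), np.zeros(1)

# Forward pass (activations are stored for the backward relevance pass).
a1 = np.maximum(0.0, x @ W1 + b1)
out = a1 @ W2 + b2                             # network output to be explained

def lrp_epsilon(a_prev, W, relevance, eps=1e-6):
    """Redistribute relevance from one layer to the previous one (epsilon rule)."""
    z = a_prev @ W                                          # pre-activations (zero bias here)
    s = relevance / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilised ratio
    return a_prev * (W @ s)                                 # relevance of previous layer

r_out = out.copy()                             # start from the output relevance
r_hidden = lrp_epsilon(a1, W2, r_out)
r_input = lrp_epsilon(x, W1, r_hidden)

print("output:", out, "sum of input relevances:", r_input.sum())
```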

How to cite: Radke, T., Fuchs, S., Wilms, C., Polkova, I., and Rautenhaus, M.: Explaining neural networks for detection of tropical cyclones and atmospheric rivers in gridded atmospheric simulation data, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-829, https://doi.org/10.5194/ems2024-829, 2024.

15:45–16:00
|
EMS2024-780
|
Onsite presentation
Víctor Galván Fraile, Marta Martín del Rey, Irene Polo Sánchez, María Belén Rodríguez de Fonseca, and Magdalena Balmaseda Alonso

The seasonal predictability of wintertime atmospheric patterns is determined, to a large extent, by anomalous ocean surface thermal conditions. Specifically, sea surface temperature (SST) appears as a significant contributor to the predictability of wintertime atmospheric patterns in the Euro-Atlantic region (EAR). Current seasonal prediction systems rely significantly on the interannual phenomenon known as the El Niño-Southern Oscillation (ENSO).

On the one hand, current seasonal prediction systems predominantly rely on dynamical models, which propagate the signals associated with these forcings to both local and remote areas. However, the complexity of atmospheric processes, the important biases in reproducing SST in the extratropics, and the interaction of signals make this propagation much more challenging. On the other hand, traditional statistical techniques, such as Maximum Covariance Analysis (MCA), allow seasonal predictions with less bias by focusing on the relationship between the predictor and the predictand. Nevertheless, there is growing interest in exploring non-linear relationships between seasonal anomalies of various physical variables, and deep learning approaches offer promising avenues for modeling such complex relationships. Therefore, this study aims to assess the predictive capability of autumn (September-October) SST anomalies in forecasting wintertime (November-December and January-February) sea level pressure (SLP) anomalies across the EAR using two different statistical prediction techniques.

Specifically, the MCA is used to identify and analyse the dominant patterns of co-variability between SST anomalies in different ocean basins and EAR atmospheric conditions. Additionally, several deep neural network models are developed to capture the complex non-linear atmospheric teleconnections associated with SST anomalies, and their predictive performance is rigorously evaluated over the EAR. The assessment highlights regions with higher prediction accuracy for the different methods and identifies key sources of skill, particularly over the Pacific basin. In particular, certain regions show higher skill over the EAR than the ECMWF seasonal forecasting system (SEAS5).
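For readers unfamiliar with MCA, the following minimal sketch shows the core computation: the leading modes of co-variability are the singular vectors of the cross-covariance matrix between the two anomaly fields, and the predictand pattern can be regressed onto the predictor expansion coefficient to build a simple statistical forecast. The fields below are random placeholders, not the SST and SLP data used in the study.

```python
# Minimal sketch of Maximum Covariance Analysis (MCA) via SVD of the
# cross-covariance matrix between two anomaly fields (synthetic placeholders).
import numpy as np

rng = np.random.default_rng(3)
n_time, n_sst, n_slp = 40, 100, 80
sst_anom = rng.normal(size=(n_time, n_sst))   # autumn SST anomalies (time x space)
slp_anom = rng.normal(size=(n_time, n_slp))   # winter SLP anomalies (time x space)

# Remove the time mean so that columns are anomalies.
sst_anom -= sst_anom.mean(axis=0)
slp_anom -= slp_anom.mean(axis=0)

# Cross-covariance matrix and its SVD.
cov = sst_anom.T @ slp_anom / (n_time - 1)          # (n_sst x n_slp)
u, s, vt = np.linalg.svd(cov, full_matrices=False)

# Leading pair of spatial patterns and their expansion coefficients.
sst_pattern, slp_pattern = u[:, 0], vt[0, :]
sst_ec = sst_anom @ sst_pattern                     # predictor time series
slp_ec = slp_anom @ slp_pattern                     # predictand time series

# A simple MCA-based forecast regresses the SLP pattern onto the SST coefficient.
beta = np.polyfit(sst_ec, slp_ec, 1)[0]
slp_forecast = np.outer(beta * sst_ec, slp_pattern)  # reconstructed SLP anomalies
print("squared covariance fraction of mode 1:", s[0] ** 2 / np.sum(s ** 2))
```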

By comparing deep learning methodologies with traditional statistical techniques such as MCA, this study provides a comprehensive analysis of wintertime atmospheric predictability over the EAR. The findings contribute to advancing our understanding of ocean-forced atmospheric teleconnections, not only by establishing windows of opportunity for seasonal forecasts but also by analysing possible drivers of these teleconnections. All of this aids the development of more accurate and reliable prediction models for managing climatological risks in the Euro-Atlantic region.

How to cite: Galván Fraile, V., Martín del Rey, M., Polo Sánchez, I., Rodríguez de Fonseca, M. B., and Balmaseda Alonso, M.: Exploring Euro-Atlantic Winter Seasonal Predictability: A Comparative Analysis of Deep Learning and Maximum Covariance Analysis, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-780, https://doi.org/10.5194/ems2024-780, 2024.

Posters: Tue, 3 Sep, 18:00–19:30

Display time: Mon, 2 Sep 08:30–Tue, 3 Sep 19:30
EMS2024-56
Linna Zhao, Shu Lu, and Dan Qi

Objective forecasting of maximum temperature is an important part of numerical weather prediction (NWP). Owing to the influence of complex factors such as atmospheric dynamic processes, physical processes, and local topography and geomorphology, the prediction of near-surface meteorological elements by numerical weather models often deviates from observations. In recent years, meteorological observations have expanded rapidly, making it difficult for traditional error-correction methods to handle the massive data volumes, while artificial intelligence has an increasingly clear advantage in processing big data.

In view of this, we correct both the general statistics and local events of the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecast System (IFS) by post-processing its maximum temperature output with a deep neural network. Based on a fully connected neural network (FCNN), four sensitivity experiments are designed to investigate the importance of auxiliary variables, time-lagged variables, and the effectiveness of an embedding layer. Observations of basic meteorological elements from 2238 basic weather stations and NWP output from 15 January 2015 to 31 December 2020 are employed; the training period runs from 15 January 2015 to 31 December 2019 and the remainder serves as the test period. The results show that the forecast error of daily maximum air temperature from the IFS in the test period is greatly reduced by the sensitivity experiments, which add auxiliary variables, daily maximum air temperature with 1-2 lag days, embedding-layer structures, and their combinations. The root mean square error is reduced by 29.72%-47.82% and the accuracy of the temperature forecast is increased by 16.67%-38.89%; the effect is especially remarkable over the Qinghai-Tibet Plateau, where the forecast error of the IFS model is very high. It is preliminarily shown that the fully connected neural network with an embedding layer has better overall performance than the plain fully connected network, and that the chosen features also affect the forecast errors and forecast skill of the model. Moreover, the prediction error of the neural network model with an embedding layer is more stable when auxiliary variables and lag-time variables are added. Positive forecasting skill is obtained for almost all stations in the study, and the mean absolute error can be reduced to less than 1℃ at many stations. Tests of one-year daily maximum temperature forecasts at four in-situ stations show that FCNN forecasts with embedding layers are closest to the observations, both for the whole year and at extreme points.
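The station-embedding idea can be sketched as below: a learned embedding vector per station is concatenated with the IFS forecast and auxiliary features before the dense layers. Layer sizes, feature names, and the random data are placeholder assumptions, not the authors' configuration.

```python
# Minimal sketch of a fully connected post-processing network with a station
# embedding. Sizes, features, and data are placeholders for illustration only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

n_stations, n_features = 2238, 6
station_id = layers.Input(shape=(1,), dtype="int32", name="station_id")
features = layers.Input(shape=(n_features,), name="features")  # IFS Tmax, auxiliaries, lags

emb = layers.Embedding(input_dim=n_stations, output_dim=4)(station_id)
emb = layers.Flatten()(emb)

x = layers.Concatenate()([features, emb])
x = layers.Dense(64, activation="relu")(x)
x = layers.Dense(32, activation="relu")(x)
out = layers.Dense(1, name="tmax_corrected")(x)     # corrected daily maximum temperature

model = Model([station_id, features], out)
model.compile(optimizer="adam", loss="mse")

# Placeholder training data: random features and targets for demonstration only.
rng = np.random.default_rng(4)
ids = rng.integers(0, n_stations, size=(1024, 1))
X = rng.normal(size=(1024, n_features)).astype("float32")
y = rng.normal(size=(1024, 1)).astype("float32")
model.fit([ids, X], y, epochs=2, batch_size=64, verbose=0)
```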

How to cite: Zhao, L., Lu, S., and Qi, D.: Deep Learning for Improving Numerical Weather Prediction of Daily Maximum Air Temperature, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-56, https://doi.org/10.5194/ems2024-56, 2024.

EMS2024-180
Agnieszka Krzyżewska

Recent years have witnessed significant advancements in the development of Artificial Intelligence (AI) tools, notably Large Language Models (LLMs), with prominent systems including ChatGPT by OpenAI, Gemini by Google, and Copilot by Microsoft. Despite inherent limitations, the diversity of these tools' applications across various fields of life, including scientific research, has expanded significantly.

This study evaluates the utility of various AI tools within the fields of meteorology and climatology, ensuring their applications follow ethical standards in scientific publication. The tools assessed include ChatGPT versions 3.5 and 4.0, Gemini (Google), Copilot (Microsoft), Perplexity, and GPT-based systems such as DataAnalyst, Consensus, ScholarGPT, and Academic Assistant Pro, among others. Each tool was subjected to identical inputs (prompts, data, photographs) and their responses were evaluated on a 0-10 scale for accuracy and relevance. The scoring was based on the percentage of verifiable content in the responses to ensure objectivity. The research spanned from May 2023 to April 2024.

The AI systems were tasked with responding to queries on climate change in Poland, identifying key research papers on humid heat waves, classifying cloud types, creating a climate map from provided data, and comparing two climate maps.

The outcomes varied significantly across tasks. ChatGPT 3.5 demonstrated an answer accuracy of 30-40% (topic: climate change in Poland). The Consensus system excelled in identifying and summarizing key papers on humid heat waves research. ChatGPT 4.0 emerged as the most effective tool for cloud classification, with Copilot also delivering commendable results; however, Gemini (Advanced) struggled with cloud recognition tasks. DataAnalyst proved capable of generating basic climate maps, but with some inaccuracies such as station misplacements. When comparing two climate maps, all systems performed adequately, with the most precise descriptions provided by Bard (Google).

How to cite: Krzyżewska, A.: The application of AI tools in weather and climate science, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-180, https://doi.org/10.5194/ems2024-180, 2024.

EMS2024-287
Yu-shuang Wu, Xiao-yan Huang, and Hua-sheng Zhao

To address the lack of nonlinear intelligent computational modelling methods for fixed-point, quantitative forecasting of typhoon gales in current numerical forecast products, this paper takes the daily extreme wind at five typical representative meteorological stations (Guilin, Wuzhou, Longzhou, Nanning, Yulin) as the forecast object and constructs daily extreme wind forecast models based on multiple linear regression (MR), support vector machine (SVM), and fuzzy neural network (FNN), using ground-based observations and reanalysis data from typhoons affecting Guangxi over the past 40 years. Tests on independent samples show that the FNN model has the smallest mean absolute error of the full-sample wind speed forecast and the best overall forecast accuracy at the four stations of Guilin, Wuzhou, Longzhou, and Yulin, while the MR model has better forecast capability for Nanning station and the SVM model is biased overall. The mean absolute errors of the FNN forecast model are 1%-29% lower than those of MR (except for Nanning station) and 6%-29% lower than those of SVM; the mean absolute errors of the MR forecast model are 5%-13% lower than those of SVM (except for Guilin station). For winds of force 6 or above, statistics of four evaluation indexes (TS score, hit rate, false alarm rate, and forecast bias) show that the FNN model has the highest and most stable prediction accuracy, followed by MR, with SVM performing worst of the three schemes. The fuzzy neural network thus has clear applicability to the prediction of extreme wind speed, can serve as a useful reference for predicting daily extreme surface wind speed during typhoons in Guangxi, and can provide a theoretical and empirical basis for later research on high-wind disaster prediction in Guangxi.
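A model comparison of this kind can be sketched with standard tooling, as below; MLPRegressor stands in for the fuzzy neural network (an assumption), and the predictors, predictand, and split are synthetic placeholders rather than the Guangxi typhoon data.

```python
# Minimal sketch comparing the three model classes named above on MAE.
# Data are random placeholders; MLPRegressor is a stand-in for the FNN.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 12))                 # predictors from observations/reanalysis
y = rng.normal(size=400)                       # daily extreme wind speed (placeholder)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "MR": LinearRegression(),
    "SVM": SVR(),
    "FNN (MLP stand-in)": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                       random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```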

How to cite: Wu, Y., Huang, X., and Zhao, H.: Research on machine learning-based modeling method for forecasting Maximum wind speed of typhoon in Guangxi , EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-287, https://doi.org/10.5194/ems2024-287, 2024.

EMS2024-322
Enric Casellas Masana, Josep Ramon Miró Cubells, and Jordi Moré Pratdesaba

Uncertainty in numerical weather prediction (NWP) models can arise from various sources, such as initial conditions or model parameterizations. Ensemble forecasts, typically generated through perturbed initial conditions or diverse model physics, help address and quantify the uncertainty inherent in raw NWP models. However, these forecasts may still contain biases and dispersion errors, traditionally mitigated using non-homogeneous Gaussian regression (Ensemble Model Output Statistics, EMOS) (Gneiting et al., 2005). Nevertheless, emerging machine learning techniques, like Distributional Regression Networks (DRN) (Rasp and Lerch, 2018), are capable of handling nonlinear relationships between predictors and forecast distributions, often yielding similar or superior results.

At the Meteorological Service of Catalonia (SMC), a Poor Man’s Ensemble (PME) composed of 12 members is constructed from 8 different models: Arome, Arpege, Bolam, ECMWF-HRES, Icon, Moloch, Unified Model, and WRF. These models vary in spatial resolution and are interpolated to a 1 km grid using a lapse-rate correction methodology, accounting for altitude differences between the model orography and a 1 km digital elevation model (Sheridan et al., 2010).
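The height-based correction just mentioned can be illustrated with a minimal sketch; a constant lapse rate is assumed here purely for illustration, whereas Sheridan et al. (2010) describe a more refined correction.

```python
# Minimal sketch of a height-based lapse-rate correction when interpolating
# model temperature to a finer grid. The constant lapse rate is an assumption.
LAPSE_RATE = 0.0065  # K per metre (standard-atmosphere value)

def correct_temperature(t_model, z_model, z_target):
    """Adjust model temperature for the height difference between the model
    orography (z_model) and the target 1 km DEM elevation (z_target)."""
    return t_model + LAPSE_RATE * (z_model - z_target)

# Example: model grid point at 850 m interpolated to a 1 km DEM point at 1200 m.
print(correct_temperature(t_model=288.0, z_model=850.0, z_target=1200.0))
```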

The postprocessing of this multi-model ensemble is conducted at point station locations utilizing data from the SMC automatic weather station network as ground truth. A benchmark methodology, EMOS, is applied using an IMPROVER (Roberts et al., 2023) module to calculate a calibration for each station and lead time of the ensemble. The forecast of each model is set as a predictor variable, rather than the commonly used mean and standard deviation of the ensemble. This approach is then compared with a single DRN for each lead time, incorporating all stations via an embedding technique, and using the same predictors. Results indicate a comparable but generally improved performance for DRN compared to EMOS. 
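For illustration, a minimal DRN sketch in the spirit of Rasp and Lerch (2018) is given below: the network outputs the mean and standard deviation of a Gaussian predictive distribution and is trained with the closed-form Gaussian CRPS, with a station embedding and the individual member forecasts as predictors. Member count, layer sizes, and the data are placeholder assumptions, not the SMC setup.

```python
# Minimal sketch of a Distributional Regression Network (DRN) with a Gaussian
# CRPS loss and a station embedding. All sizes and data are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

def gaussian_crps(y_true, y_pred):
    """Closed-form CRPS of a normal distribution N(mu, sigma^2)."""
    mu, sigma = y_pred[:, 0:1], tf.math.softplus(y_pred[:, 1:2]) + 1e-3
    z = (y_true - mu) / sigma
    pdf = tf.exp(-0.5 * tf.square(z)) / tf.sqrt(2.0 * np.pi)
    cdf = 0.5 * (1.0 + tf.math.erf(z / tf.sqrt(2.0)))
    return tf.reduce_mean(sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / tf.sqrt(np.pi)))

n_stations, n_members = 200, 12
station_id = layers.Input(shape=(1,), dtype="int32")
member_forecasts = layers.Input(shape=(n_members,))       # one predictor per PME member

emb = layers.Flatten()(layers.Embedding(n_stations, 4)(station_id))
x = layers.Concatenate()([member_forecasts, emb])
x = layers.Dense(32, activation="relu")(x)
out = layers.Dense(2)(x)                                   # raw (mu, sigma) parameters

drn = Model([station_id, member_forecasts], out)
drn.compile(optimizer="adam", loss=gaussian_crps)

rng = np.random.default_rng(6)
ids = rng.integers(0, n_stations, size=(512, 1))
fc = rng.normal(size=(512, n_members)).astype("float32")
obs = fc.mean(axis=1, keepdims=True).astype("float32")    # placeholder observations
drn.fit([ids, fc], obs, epochs=2, batch_size=64, verbose=0)
```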

  

References 

Gneiting, T., Raftery, A. E., Westveld, A. H., & Goldman, T. (2005). Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Monthly Weather Review, 133(5), 1098-1118. 

Rasp, S., & Lerch, S. (2018). Neural networks for postprocessing ensemble weather forecasts. Monthly Weather Review, 146(11), 3885-3900. 

Roberts, N., Ayliffe, B., Evans, G., Moseley, S., Rust, F., Sandford, C., ... & Worsfold, M. (2023). IMPROVER: the new probabilistic postprocessing system at the Met Office. Bulletin of the American Meteorological Society, 104(3), E680-E697. 

Sheridan, P., Smith, S., Brown, A., & Vosper, S. (2010). A simple height‐based correction for temperature downscaling in complex terrain. Meteorological Applications, 17(3), 329-339. 

 

How to cite: Casellas Masana, E., Miró Cubells, J. R., and Moré Pratdesaba, J.: Postprocessing multi-model ensemble temperature forecasts using Distributional Regression Networks, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-322, https://doi.org/10.5194/ems2024-322, 2024.

EMS2024-344
Steven Ramsdale

The professional journey of meteorologists working in the domain for more than ten years is likely to have included a huge increase in the quality and quantity of available data for decision making. In numerical modelling this has included the move from coarse resolution (>40km) global models to high resolution and even sub km scale convection permitting models as well as the increasing acceptance and use of ensemble systems to aid representation of uncertainty. There is also an increasing demand for forecasts of the impact, not solely occurrence, of weather and so further data such as demography, land use and event timelines must be interrogated to understand the complex combination of factors.

Whilst this increase in data and customer expectation has occurred, the operational meteorologist's forecast process has remained much the same in framework as that of decades ago: a transition through scales from an initial assessment of the broadscale atmospheric state to an appropriate level of reliable detail at the meso- or microscale.

The result is that operational meteorologists become highly adept at weather regime recognition, understanding potential hazards and uncertainties without needing to access the full range of data available. This knowledge is then used to explore what the meteorologist judges to be the most important parts of the forecast in greater detail before making final forecast and warning decisions. This process has served the community well, but the increasing pressure on the profession to make ever more efficient and accurate decisions with growing data volumes can lead to information overload and to reliance on familiar, but not necessarily the highest-value, data sources and parameters.

Machine learning, often colloquially referred to under the umbrella term of Artificial Intelligence (AI), presents an opportunity to update the forecast process by exploiting the ability to efficiently process large multidimensional datasets alongside previous human decisions, expertise, and downstream impacts. Combining these data and this knowledge brings the potential to assess the weather from a hazard-specific point of view, allowing the recommendation of forecast times, locations, or tools for final human decision-making. Research is underway into this application of machine learning, using an archive of issued severe weather warnings and coarse-resolution weather data. Applied to the forecast process, and with the caveat that forecast data are not perfect so that human intervention and intuition remain vital to effective decision making, this helps envision a future state in which operational meteorologists focus on specific highlighted hazards. Such a future state would allow meteorologists to concentrate their time where it matters most, make more effective decisions, and apply their expertise beyond product generation.

How to cite: Ramsdale, S.: Towards a Future Forecast Process – How to better utilise human knowledge and intuition in a world of too much information, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-344, https://doi.org/10.5194/ems2024-344, 2024.

EMS2024-491
Marc Benitez, Tomàs Margalef, Mirta Rodríguez, and Omjyoti Dutta

The ability to obtain high spatial resolution meteorological data from coarse sources is crucial for studying local phenomena occurring at finer scales, such as severe storms or convective systems. This spatial downscaling can be achieved by reproducing the atmospheric state of a small region with numerical weather prediction (NWP) models that use low-resolution (LR) data as boundary conditions. However, running NWP models at high resolution is computationally expensive and time consuming. A different approach is to establish statistical relationships between LR and high-resolution (HR) data to increase the spatial resolution by interpolating intermediate points. In recent years, machine learning (ML) based statistical methods have proven to be a cheap yet accurate alternative to dynamical downscaling.

This work aims to develop a deep-learning-based downscaling methodology from ERA5 to Weather Research and Forecasting (WRF) data. We study how the training dataset affects the downscaling performance and generalization capabilities of deep learning models and how they compare against traditional downscaling methods such as bilinear interpolation. Our models estimate the downscaling function for daily average 2-metre air temperature between an LR dataset and HR WRF model output. The LR inputs come from a different source for each model: the first dataset is created by upscaling the HR WRF ground truth to the target LR, and the second is the ERA5 reanalysis used as boundary conditions to drive the NWP simulation. For validation, we select data from regions that share a similar climatology with the training set but were excluded from training. To evaluate model performance, we use the Root Mean Square Error (RMSE) and metrics commonly used in image super-resolution problems, such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM).
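The evaluation metrics named above can be computed as in the following sketch; the fields are synthetic placeholders rather than actual WRF output.

```python
# Minimal sketch of the evaluation metrics (RMSE, PSNR, SSIM) applied to a
# downscaled field versus the HR reference. Fields are synthetic placeholders.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(7)
hr_truth = rng.normal(loc=290.0, scale=5.0, size=(128, 128))    # HR 2-m temperature
downscaled = hr_truth + rng.normal(scale=1.0, size=(128, 128))  # model output (placeholder)

rmse = np.sqrt(np.mean((downscaled - hr_truth) ** 2))
data_range = hr_truth.max() - hr_truth.min()
psnr = peak_signal_noise_ratio(hr_truth, downscaled, data_range=data_range)
ssim = structural_similarity(hr_truth, downscaled, data_range=data_range)
print(f"RMSE: {rmse:.2f} K, PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```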

With this study we have taken a first step in the ML modelling of weather downscaling and its generalization capabilities. However, further work is needed to understand the capabilities and behavior of these models when faced with challenges such as reproducing local-scale patterns, downscaling discrete variables (e.g. precipitation, hail) or the transferability of their results to similar climatic zones outside the simulation domain. Lastly, in future works we plan to study the performance of different deep learning model architectures, such as Vision Transformers or Latent Diffusion, on downscaling. 

How to cite: Benitez, M., Margalef, T., Rodríguez, M., and Dutta, O.: Comparing Deep Learning methodologies for Downscaling between meteorological models, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-491, https://doi.org/10.5194/ems2024-491, 2024.

EMS2024-506
Jasmin Vural, Pierre Vanderbecken, Bertrand Bonan, and Jean-Christophe Calvet

In the framework of the EU-project CORSO, a multitude of in-situ and remote-sensing observations are exploited to better quantify the anthropogenic part of the CO2 emissions. Here, we use microwave satellite observations from the SMAP and AMSR2 instruments to improve the estimation of the state of carbon cycle variables.

We employ the LDAS-Monde system, using a simplified extended Kalman filter and the ISBA land surface model within the SURFEX modelling platform, to assimilate both H and V polarisations of the brightness temperatures in the different microwave bands provided by the respective satellite instruments. As the classical approach of using a radiative transfer model as the forward operator is often computationally very expensive, artificial neural networks are a promising method to transform model variables into observation space while consuming only a relatively small amount of computing resources during the assimilation.

In our study, we train a feedforward neural network on predictors extracted from the open-loop run of LDAS-Monde on the European and global domains, respectively, employing models with different grid sampling. We perform tests with different hyperparameter setups for the neural network and different combinations of predictors, using not only model variables but also AVHRR LAI (leaf area index) observations provided by THEIA. To assess the relative importance of the predictors, we performed sensitivity analyses on the training results. We found that the temperature and moisture of the upper soil layer as well as the LAI play a major role, but useful information can also be extracted from static features such as latitude and different topographic measures. Special care has to be taken when using coordinates as predictors to avoid overfitting.
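The forward-operator idea can be sketched as below: a small feedforward network maps land-surface predictors to brightness temperatures in observation space. The predictor list, layer sizes, and data are placeholder assumptions drawn loosely from the description above; the operational LDAS-Monde setup is not reproduced.

```python
# Minimal sketch of a feedforward neural-network observation operator mapping
# land-surface model variables to microwave brightness temperatures (H and V).
# Predictors, sizes, and data are placeholders for illustration only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Sequential

# Assumed predictors: surface soil moisture, surface soil temperature, LAI,
# latitude, and one topographic measure (five inputs in total).
model = Sequential([
    layers.Input(shape=(5,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(2),                       # brightness temperature, H and V polarisation
])
model.compile(optimizer="adam", loss="mse")

rng = np.random.default_rng(8)
predictors = rng.normal(size=(1024, 5)).astype("float32")
tb_obs = rng.normal(loc=260.0, scale=10.0, size=(1024, 2)).astype("float32")
model.fit(predictors, tb_obs, epochs=2, batch_size=64, verbose=0)
tb_simulated = model.predict(predictors[:4], verbose=0)   # observation-space equivalents
```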

We implement the weights found with our best setup for each instrument into the LDAS-Monde data assimilation system. Eventually, we verify the effect of the assimilation on LAI analyses on both European and global domains against LAI observations and evaluate the performance of the system with regard to different land covers.

How to cite: Vural, J., Vanderbecken, P., Bonan, B., and Calvet, J.-C.: A neural network as observation operator for the assimilation of microwave satellite observations, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-506, https://doi.org/10.5194/ems2024-506, 2024.

EMS2024-536
Marcos Martinez-Roig, Nuria P. Plaza, Cesar Azorin-Molina, Miguel Andres-Martin, Deliang Chen, Zhengzhong Zeng, Sergio M. Vicente Serrano, Tim R. McVicar, Jose A. Guijarro, and Amir Ali Safaei Pirooz

The generation of accurate and reliable forecasts of near-surface (~10 m above ground level) gridded wind speed data, hereinafter called NSWS, is crucial since it influences numerous socioeconomic and environmental fields. For instance, in the face of climate change, wind energy can contribute to the decarbonization of the electricity grid. NSWS, however, is a complex meteorological variable due to its inherent space-time variability, particularly in regions with complex topography like Valencia (Spain).

The traditional approach to forecasting NSWS relies on Numerical Weather Prediction (NWP) models, which demand substantial computational resources, especially when high spatial and temporal resolutions are required, often necessitating hundreds to thousands of CPU hours. As an innovative solution to this pressing issue, the ThinkInAzul project, under Climatoc-Lab, is exploring the use of deep learning for accurate NSWS predictions. We propose an architecture based on encoder-decoder neural networks combining convolutional and recurrent (ConvLSTM) layers. This AI-based product, designed as an early warning system, generates high-resolution (3- or 9-km) short-term (i.e., <24 hours) NSWS forecasts in near real-time (a few seconds) using a GPU.
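The network class named above can be sketched as follows: a short sequence of gridded wind fields is passed through stacked ConvLSTM layers and mapped to a nowcast field. Grid size, layer depths, and the data are placeholders, not the ThinkInAzul configuration.

```python
# Minimal sketch of a convolutional-recurrent (ConvLSTM) network mapping a
# sequence of gridded NSWS fields to a nowcast field. Sizes and data are
# placeholders for illustration only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Sequential

n_steps, ny, nx = 6, 32, 32                     # 6 past frames on a 32x32 grid
model = Sequential([
    layers.Input(shape=(n_steps, ny, nx, 1)),
    layers.ConvLSTM2D(16, kernel_size=3, padding="same", return_sequences=True),
    layers.ConvLSTM2D(16, kernel_size=3, padding="same", return_sequences=False),
    layers.Conv2D(1, kernel_size=1, activation="linear"),  # nowcast NSWS field
])
model.compile(optimizer="adam", loss="mse")

rng = np.random.default_rng(9)
past_frames = rng.random(size=(64, n_steps, ny, nx, 1)).astype("float32")
next_frame = rng.random(size=(64, ny, nx, 1)).astype("float32")
model.fit(past_frames, next_frame, epochs=1, batch_size=8, verbose=0)
```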

Meteorological station networks provide realistic observations and are able to detect local wind effects, but with limited spatial coverage. Conversely, reanalysis and simulation products offer complete spatial coverage at low resolution but fail to accurately reproduce local NSWS. To address this, our AI-based tool is trained with the ERA5-Land (9-km) and NEWA (New European Wind Atlas, 3-km) NSWS datasets, while its inference is performed using observations from the Spain/Valencian Association of Meteorology (AEMET/AVAMET), a citizen weather station network of around 600 stations. Consequently, the AI-based tool merges the advantages of both, offering a gridded product with high spatio-temporal resolution that can reproduce local NSWS effects.

The AI-based tool achieves a reasonably high correlation of 0.7 with the AEMET meteorological observations, and further improvement is expected. The tool is applied to the western Mediterranean coast and has the potential to be used in other regions after retraining the neural network. Our ultimate goal is to develop an AI-based tool that enhances short-term forecasting of NSWS.

How to cite: Martinez-Roig, M., P. Plaza, N., Azorin-Molina, C., Andres-Martin, M., Chen, D., Zeng, Z., Vicente Serrano, S. M., R. McVicar, T., Guijarro, J. A., and Ali Safaei Pirooz, A.: AI-based approach for short-term forecasting of wind speed from a weather station network: A Case study in Valencia, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-536, https://doi.org/10.5194/ems2024-536, 2024.

EMS2024-669
Cristina Campos, Yolanda Sola, Mireia Udina, Joan Bech, and Laura Trapero

Air pollution is currently a major environmental threat to human health and natural ecosystems, so improving air quality monitoring techniques, traditionally based on ground-based observation networks, is essential. Satellite remote sensing of air pollutants has made significant strides in recent years and now serves as a complementary data source alongside ground sensors. For example, several studies have explored the relationship between satellite-derived NO2 total column data and ground-level concentrations, but none of them focused on complex terrain areas. The aim of this work is to evaluate the feasibility of using NO2 column data from the Sentinel 5P satellite over complex terrain, such as the Pyrenees mountain area covering France, Spain, and Andorra, to estimate ground-level values. For this purpose, several models that separate the temporal average from the fluctuations are applied to both satellite and ground-sensor data. The primary objective of these models is to enhance the signal-to-noise ratio. Initially, the periodicities are identified and subtracted from the original data, resulting in a residual series. These residual series are then filtered to eliminate noise while retaining the significant events. Finally, the filtered series are recombined with the previously identified periodicity.
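The decomposition just described can be illustrated with a minimal sketch: fit and remove a periodic component, smooth the residual to suppress noise, and recombine. The single annual harmonic, the running-mean filter, and the synthetic NO2 series are simplifying assumptions for illustration only.

```python
# Minimal sketch: remove a periodic component, filter the residual, recombine.
# The daily NO2 series, the single harmonic, and the window length are placeholders.
import numpy as np

rng = np.random.default_rng(10)
t = np.arange(730)                                     # two years of daily data
series = 10 + 3 * np.sin(2 * np.pi * t / 365.25) + rng.normal(scale=1.5, size=t.size)

# 1) Fit the annual cycle with a single harmonic (least squares).
design = np.column_stack([np.ones_like(t, dtype=float),
                          np.sin(2 * np.pi * t / 365.25),
                          np.cos(2 * np.pi * t / 365.25)])
coef, *_ = np.linalg.lstsq(design, series, rcond=None)
annual_cycle = design @ coef

# 2) Filter the residual with a running mean to damp noise while keeping
#    multi-day events (window length chosen arbitrarily here).
residual = series - annual_cycle
window = 7
kernel = np.ones(window) / window
residual_smooth = np.convolve(residual, kernel, mode="same")

# 3) Recombine the smoothed residual with the periodic component.
reconstructed = annual_cycle + residual_smooth
print("correlation with original:", np.corrcoef(series, reconstructed)[0, 1])
```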

Preliminary results over Andorra show that our models can enhance Pearson's correlation between the temporal series of the satellite and ground sensor, improving it from 0.415 to 0.650. In addition, it has been found that the NO2 annual cycle in Andorra can be detected with a correlation of 0.950 between the model and the ground sensor NO2 series. Furthermore, a weekly cycle during winter has been detected in the Sentinel NO2 series too. These findings suggest that satellite estimates can identify days with high risk of exceeding NO2 ground level thresholds, enabling the creation of risk maps for areas lacking ground sensors. Such results could profoundly impact air quality monitoring in major towns located in valleys of mountain areas. Peak concentrations that deviate from average cycles have also been quantified. These deviations will be compared with other locations characterized by simpler topography to gain a deeper understanding of the limitations of satellite estimates. Subsequently, the next phase involves integrating these models into Machine Learning Algorithms to expand the application of Sentinel 5P data to complex terrain areas. This study is supported by the project “Towards a climate resilient cross-border mountain community in the Pyrenees (LIFE-SIP PYRENEES4CLIMA)”.

How to cite: Campos, C., Sola, Y., Udina, M., Bech, J., and Trapero, L.: Monitoring ground level nitrogen dioxide concentration in complex terrain areas using satellite Sentinel 5P total column observations, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-669, https://doi.org/10.5194/ems2024-669, 2024.

EMS2024-791
Angelos Chasiotis, Elissavet Feloni, Panagiotis Nastos, Sofia Gialama, and Dimitris Piromalis

Climate change has intensified the severity and frequency of natural disasters, with rising global temperatures exacerbating droughts and intensifying rainfall, resulting in devastating flood events. The Municipality of Zagori, located in the Epirus region, Greece, experiences recurrent flash floods, particularly during the autumn and early winter months. To address this challenge, the SMILE project, funded by the Greek Government, aims to develop a localized forecasting system tailored to the specific needs of the Zagori Municipality, integrating machine learning techniques with traditional hydrological models.

This project proposes a comprehensive tool equipped with a monitoring system designed to provide real-time data on hydrometric and meteorological parameters. Leveraging machine learning algorithms, such as neural networks and ensemble methods, alongside traditional statistical and physical models, the SMILE system enhances the accuracy and reliability of weather and flood predictions for the Municipality of Zagori.

The SMILE system offers a user-friendly online platform, allowing stakeholders to access and process data from connected sensors, including hydrometric stations along torrents and meteorological stations across the watershed. Advanced feature engineering techniques are employed to extract meaningful information from large and diverse datasets, facilitating the development of robust prediction models.

Moreover, the system incorporates sensors connected to dataloggers with internal 4G modems, enabling real-time monitoring and interoperability with a 1D/2D hydraulic model. This hydraulic model, enhanced by machine learning insights, focuses on critical areas prone to flash floods, aiming to issue timely warnings and mitigate potential risks more effectively.

By integrating machine learning techniques with traditional hydrological models, the SMILE project seeks to enhance early warning capabilities and improve disaster preparedness in the Municipality of Zagori. The development of this localized forecasting system represents a proactive approach to address the impacts of climate change and mitigate the adverse effects of extreme weather events in vulnerable regions.

How to cite: Chasiotis, A., Feloni, E., Nastos, P., Gialama, S., and Piromalis, D.: Integrating Machine Learning for Localized Forecasting System to Mitigate Flash Flood Events in the Municipality of Zagori, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-791, https://doi.org/10.5194/ems2024-791, 2024.

EMS2024-878
Vadim Becquet, Hadrien Verbois, Philippe Blanc, and Yves-Marie Saint-Drenan

The accurate estimation of Surface Solar Irradiance (SSI) is crucial in domains as diverse as climatology, solar energy, agriculture, and architecture. Traditional SSI estimation methods are primarily based on physical models and cloud-index models. These approaches rely on the Independent Pixel Approximation (IPA) and neglect the intricate inter-pixel interactions, 3D effects of clouds, or parallax effects. This reliance on IPA and oversight of spatial dynamics could introduce limitations to traditional methods. These limitations are expected to increase with the advent of third-generation geostationary satellites like the GOES series, which offer enhanced spatial resolution. This work introduces a deep learning framework leveraging the increased spectral, spatial, and temporal resolution offered by third-generation geostationary satellites, without IPA, to improve SSI estimation.

We developed a method using convolutional neural networks (CNNs) to analyze large satellite imagery, high-dimensional in the spatial, spectral, and temporal domains, using contextual and multispectral images for SSI estimation. A comprehensive dataset, combining GOES-16 satellite imagery with 5-min global horizontal irradiance (GHI) in-situ measurements from 31 pyranometric stations in the U.S.A. over three years, was constructed and used for model training and validation, allowing a direct comparison with PSM3, a state-of-the-art physical SSI satellite-retrieval model from NREL. Our approach combines CNNs for image analysis and fully connected neural networks (FCNs) for processing tabular auxiliary data such as solar angles and positions, exploring various data fusion techniques. We thoroughly assess model performance using a broad set of metrics, across various conditions and test stations, as well as the influence of varying image sizes on performance.
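The two-branch design outlined above can be sketched as follows: a CNN branch processes the multispectral satellite patch, a fully connected branch processes the tabular auxiliary data, and the two are fused to estimate GHI. Patch size, channel count, fusion scheme, and the data are placeholder assumptions rather than the authors' architecture.

```python
# Minimal sketch of a two-branch CNN + fully connected fusion network for GHI
# estimation. All sizes and data are placeholders for illustration only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

patch = layers.Input(shape=(64, 64, 8), name="satellite_patch")    # 8 spectral bands
aux = layers.Input(shape=(3,), name="auxiliary")                    # e.g. solar angles, time

x = layers.Conv2D(16, 3, activation="relu", padding="same")(patch)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
x = layers.GlobalAveragePooling2D()(x)

a = layers.Dense(16, activation="relu")(aux)

fused = layers.Concatenate()([x, a])                                 # simple late fusion
fused = layers.Dense(32, activation="relu")(fused)
ghi = layers.Dense(1, activation="relu", name="ghi")(fused)          # non-negative irradiance

model = Model([patch, aux], ghi)
model.compile(optimizer="adam", loss="mse")

rng = np.random.default_rng(11)
imgs = rng.random(size=(32, 64, 64, 8)).astype("float32")
angles = rng.random(size=(32, 3)).astype("float32")
ghi_obs = rng.uniform(0, 1000, size=(32, 1)).astype("float32")
model.fit([imgs, angles], ghi_obs, epochs=1, batch_size=8, verbose=0)
```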

Results demonstrate the potential of deep learning to outperform traditional models like PSM3 on standard comparison metrics, especially under cloudy conditions, showing a 25% RMSE improvement. Our analysis highlights the importance of spatial context and the influence of image size on model performance, challenging the adequacy of IPA in traditional methods. A further significant improvement comes from rotating input images during training, which substantially enhanced test performance and spatial generalization.

For 5-min GHI estimation, our models achieved a test RMSE of 80 W/m^2, compared to 97 W/m^2 for PSM3, and demonstrated their robustness across diverse evaluation metrics, at most test stations, and under various sky conditions. However, the mixed performance in MBE across all sky conditions, as well as in other metrics under clear-sky conditions and at specific test stations, indicates areas for further improvement in representing the underlying physical processes of SSI.

While initial results are promising, further research is needed to refine model architectures and enhance generalization capabilities across different geographical locations. Exploring physically informed and probabilistic deep learning methods could be a valuable direction for future research to enhance the spatial generalization, reliability, and interpretability of SSI estimation with deep learning.

How to cite: Becquet, V., Verbois, H., Blanc, P., and Saint-Drenan, Y.-M.: Leveraging Deep-Learning Approaches with Spatial Context for Enhanced Surface Solar Irradiance Estimation from Third-Generation Geostationary Satellite Imagery, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-878, https://doi.org/10.5194/ems2024-878, 2024.

EMS2024-750
Pascal Gfäller, Irene Schicker, and Petrina Papazek

With the increasing shift to renewable energy sources, their predictability is becoming more of a concern. While some renewables follow relatively stable power output patterns, solar irradiance, and thereby photovoltaic power production, can shift significantly over short timeframes. The great potential of solar irradiance as a power source should nevertheless not remain underused, as it globally provides orders of magnitude more energy to the Earth than is currently or foreseeably required. Solutions lie in forecasts on different timescales, with the short-term and nowcasting domains providing the most accurate insight into the volatility introduced by atmospheric phenomena. These forecasts are relevant not only for determining the expected economic impact but also for maintaining an equilibrium in electrical grids, with the goal of minimizing the waste of potential power production from solar irradiance.

Large-grid nowcasts of solar irradiance can substitute for forecasts of solar power potential at individual sites, which are typically derived from the sites' measurements. With models using satellite data instead, forecasts for large areas are available, which are useful for approximating the solar intensity for a range of increasingly spatially distributed photovoltaic power stations. In contrast to ground-based data sources or NWP model estimates, satellite data are less reliant on the proper functioning of a wide range of externalities and are readily available in near real time.

Building on a study of multiple convolutional-recurrent neural network architectures that derive nowcasts from a single solar-irradiance satellite data product, further research is undertaken to determine the limitations and benefits of single-irradiance-feature nowcasts and to counteract potential detriments.

A known issue with this kind of pipeline lies in its main benefit: the easy near-real-time access to a single dynamic feature can bring the whole model to a halt if an issue occurs with this single externality. To gather insights and provide a practical solution to this problem, a further study on model robustness to missing data is undertaken, leading to the technique Timestep-Dropout. Via probabilistic removal of irradiance frames during training, neural networks can learn to expect missing frames at inference and still derive forecasts from the remaining valid frames.
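The Timestep-Dropout idea can be sketched as a simple training-time augmentation: individual input frames are randomly invalidated (here, zeroed, together with a validity flag) so the network learns to cope with missing satellite frames. Shapes, the drop probability, and the data are placeholder assumptions, not the authors' implementation.

```python
# Minimal sketch of Timestep-Dropout as a training-time augmentation.
# Shapes, drop probability, and data are placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(12)

def timestep_dropout(batch, p_drop=0.2):
    """Randomly invalidate whole timesteps in a (batch, time, H, W, C) array.

    Returns the masked batch plus a per-timestep validity mask that could be
    fed to the model as an additional channel or input.
    """
    valid = rng.random(size=batch.shape[:2]) >= p_drop        # (batch, time)
    masked = batch * valid[:, :, None, None, None]            # zero dropped frames
    return masked, valid.astype("float32")

frames = rng.random(size=(8, 6, 32, 32, 1)).astype("float32")
masked_frames, validity = timestep_dropout(frames)
print("dropped frames per sample:", (1 - validity).sum(axis=1))
```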

Possible benefits in forecast accuracy through the introduction of further features from satellite data or data from other sources may outweigh the additional burden of requirements, and provide overall improvements. To estimate these effects a comparison to the multi-irradiance-feature trained model will provide insights into a reasonable balance of irradiance-feature requirements.

How to cite: Gfäller, P., Schicker, I., and Papazek, P.: Robust ML-nowcasting of solar irradiance from satellite derived features, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-750, https://doi.org/10.5194/ems2024-750, 2024.