OSA3.3 | Spatial climatology
Convener: Ole Einar Tveito | Co-conveners: Christoph Frei, Gerard van der Schrier, Cristian Lussana
Orals
| Fri, 06 Sep, 09:00–13:00 (CEST)
 
A111 (Aula Joan Maragall)
Posters
| Attendance Thu, 05 Sep, 18:00–19:30 (CEST) | Display Thu, 05 Sep, 13:30–Fri, 06 Sep, 16:00|Poster area 'Galaria Paranimf'
Spatially comprehensive representations of past weather and climate are an important basis for analyzing climate variations and for modelling weather-related impacts on the environment and natural resources. Such gridded datasets are also indispensable for the validation and downscaling of climate models. The increasing demand for, and widespread application of, grid data calls for efficient methods of analysis to integrate the observational data, and for a profound knowledge of the potential and limitations of the datasets in applications.

Modern spatial climatology seeks to improve the accuracy, coverage and utility of grid datasets. Prominent directions of current development in the field are the following:

• Establish datasets for new regions and extend coverage to larger, multi-national and continental domains, building on data collection and harmonization efforts.
• Develop datasets for more climate variables and improve the representation of cross-variable relationships.
• Integrate data from multiple observation sources (stations, radar, satellite, citizen data, model-based reanalyses) with statistical merging, machine learning and model post-processing.
• Extend datasets back in time, tackling the challenges of long-term consistency and variations in observational density.
• Improve the representation of extremes, urban climates, and small-scale processes in complex topography.
• Quantify uncertainties and develop ensembles that allow users to trace uncertainty through applications.
• Advance the time resolution of datasets to the sub-daily scale (resolve the diurnal cycle), building on methods of spatio-temporal data analysis.

This session addresses topics related to the development, production, and application of gridded climate data, with an emphasis on statistical analysis and interpolation, inference from remote sensing, or post-processing of reanalyses. Particularly encouraged are contributions dealing with new datasets, modern challenges and developments (see above), as well as examples of applications that give insights on the potential and limitations of grid datasets. We also invite contributions related to the operational production at climate service centers, such as overviews on data suites, the technical implementation, interfaces and visualisation (GIS), dissemination, and user information.

The session intends to bring together experts in spatial data analysis, researchers on regional climatology, and dataset users in related environmental sciences, to promote a continued knowledge exchange and to fertilise the advancement and application of spatial climate datasets.

Orals: Fri, 6 Sep | A111 (Aula Joan Maragall)

Chairperson: Ole Einar Tveito
09:00–09:15
|
EMS2024-177
|
Onsite presentation
Matthieu Sorel, Malo Tiriau, and Stéphane Van Hyfte

Climate normals have been established at Météo-France to characterize climate trends and to place a given day in its climatic context. These normals were first calculated for more than a thousand reference observation sites and have recently been updated to the 1991-2020 reference period, as recommended by the World Meteorological Organization.

To obtain a more accurate temperature climatology of France, a high-resolution temperature climatology has been produced, based on a spatialization of daily temperature extremes (called ANASTASIA) on a 1 km regular grid using a regression-kriging method. ANASTASIA data are available from 1947 onwards.

Classic climatological temperature indices are computed, such as the number of days above or below temperature thresholds, mean temperatures, etc. Less common indices are also computed, such as the first/last date on which a temperature threshold is reached, or the climatological equivalent date of an observed temperature on a given day.
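
As a concrete illustration of such threshold-based indices, the sketch below computes the number of days above a threshold and the first/last dates of exceedance from a synthetic daily maximum temperature series (hypothetical data, not ANASTASIA output):

```python
import numpy as np

# One year of synthetic daily maximum temperatures (deg C); purely illustrative.
rng = np.random.default_rng(0)
days = np.arange(365)
tmax = 15.0 + 12.0 * np.sin(2 * np.pi * (days - 80) / 365) + rng.normal(0.0, 3.0, 365)

threshold = 25.0
above = tmax >= threshold

n_days_above = int(above.sum())                            # days at/above threshold
first_above = int(np.argmax(above))                        # first day-of-year index
last_above = int(len(above) - 1 - np.argmax(above[::-1]))  # last day-of-year index

print(n_days_above, first_above, last_above)
```

The same counting logic, applied per grid cell of a daily dataset, yields gridded index maps.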

These climatological variables are of great interest for daily climate monitoring, in order to fully understand exceptional meteorological situations such as heatwaves. That is why real-time products are also produced, sometimes with an indication of the affected geographical area as a percentage of France's total area, and later with the number of people affected.


Records of extreme temperatures together with the year of the record, as well as the earliest/latest dates on which temperature thresholds were reached, are also computed to complete the climatological dataset.

The ANASTASIA product is of great interest and gives climatologists a better spatial representation of temperature patterns. However, the method used in ANASTASIA does not reproduce temperature inversions well, for example, and may lead to large errors in such situations in mountain areas, as the product is highly dependent on the observation network.

How to cite: Sorel, M., Tiriau, M., and Van Hyfte, S.: A high-resolution temperature climatology over France using a spatialization of daily temperatures extremes from 1947 to present, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-177, https://doi.org/10.5194/ems2024-177, 2024.

09:15–09:30
|
EMS2024-643
|
Onsite presentation
Konstantinos V. Varotsos, Anna Karali, Gianna Kitsara, Giannis Lemesios, Platon Patlakas, Maria Hatzaki, Vasilis Tenentes, George Katavoutas, Athanasios Sarantopoulos, Aristeidis G. Koutroulis, Manolis G. Grillakis, Basil Psiloglou, and Christos Giannakopoulos

CLIMADAT-hub is a two-year project within the framework of H.F.R.I call “Basic research Financing (Horizontal support of all Sciences)” under the National Recovery and Resilience Plan “Greece 2.0” funded by the European Union – NextGenerationEU. The project aims at bridging the gap between the available climatic information and the information required for assessing climate risks at the local scale by creating high resolution observational gridded datasets, as well as statistically downscaled seasonal forecasts and climate change projections for Greece.

Regarding the observational gridded datasets, the primary goal is to construct a state-of-the-art, 1 km high-resolution gridded dataset for daily temperature, precipitation, relative humidity and wind speed for the period 1981-2021. Starting with temperature and precipitation, long-term daily values have been collected from various meteorological networks and databases for a large number of locations in Greece. These raw daily data underwent quality control and homogenization. To obtain the daily gridded values for air temperatures (maximum, minimum, mean) and precipitation at a spatial resolution of 1 km, the following methods have been examined: i) a combination of classical geostatistical methods such as Thin Plate Splines and Kriging, as used in the early versions of E-OBS and Iberia01; ii) Regression-Kriging, a spatial prediction technique commonly used in geostatistics that combines a regression of the dependent variable (e.g., temperature) on auxiliary/predictive variables (e.g., elevation, distance from shoreline) with kriging of the regression residuals (similar to E-OBS v17 and later); iii) ensemble machine learning, an approach where, instead of using a single best learner, multiple strong learners are combined into a single model; iv) a hybrid method, where the available observations are blended with the Weather Research and Forecasting (WRF) model to produce the high-resolution observed gridded datasets through gridding and bias adjustment techniques.
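
The regression-plus-residual idea behind method ii) can be sketched as below. For simplicity, the kriging of the residuals is replaced here by inverse-distance weighting, so this is a simplified stand-in rather than full Regression-Kriging, and all names and numbers are illustrative:

```python
import numpy as np

def regression_residual_interp(x_obs, y_obs, z_obs, t_obs, x_new, y_new, z_new,
                               power=2.0):
    """Regress temperature on elevation, then spread the regression residuals
    spatially. In Regression-Kriging the residuals would be kriged with a
    fitted variogram; here inverse-distance weighting stands in for that step."""
    # 1) linear regression of the variable on the auxiliary predictor (elevation)
    A = np.column_stack([np.ones_like(z_obs), z_obs])
    coef, *_ = np.linalg.lstsq(A, t_obs, rcond=None)
    resid = t_obs - A @ coef
    # 2) interpolate the residuals to the target points
    d = np.hypot(x_new[:, None] - x_obs[None, :], y_new[:, None] - y_obs[None, :])
    w = 1.0 / np.maximum(d, 1e-6) ** power
    w /= w.sum(axis=1, keepdims=True)
    # 3) trend evaluated at the target elevation + interpolated residual
    return coef[0] + coef[1] * z_new + w @ resid
```

In a production setting the residual step would use a kriging variant with an estimated variogram, which also provides prediction uncertainties.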

We report the results from evaluating all the created gridded datasets for temperature and precipitation against withheld station data to determine the best performing approach for each variable. Future work will extend these methodologies to include the remaining variables.

How to cite: Varotsos, K. V., Karali, A., Kitsara, G., Lemesios, G., Patlakas, P., Hatzaki, M., Tenentes, V., Katavoutas, G., Sarantopoulos, A., Koutroulis, A. G., Grillakis, M. G., Psiloglou, B., and Giannakopoulos, C.: High resolution observational daily gridded dataset for Greece: The CLIMADAT-hub project, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-643, https://doi.org/10.5194/ems2024-643, 2024.

09:30–09:45
|
EMS2024-905
|
Onsite presentation
Louis Frey and Christoph Frei

At present, most operational climate grid datasets are available with a temporal resolution of one day. This can be limiting in applications that rely on an explicit representation of the diurnal cycle, such as when modelling temperature-dependent environmental processes (e.g. transpiration). In this study we propose a methodology for the construction of hourly datasets of surface air temperature and present results from its application over Switzerland. The method uses a spatio-temporal (ST) statistical model. Unlike classical interpolation, the ST approach exploits data from a whole period of time simultaneously, and therefore makes it possible to formally represent the connection of spatial and temporal variations. This is particularly desirable in complex terrain, where the diurnal cycle of temperature has marked topographic imprints. As ST model we use a dynamic linear model (DLM), which is a conceptual extension of kriging with external drift (KED) and offers high flexibility for configuration to the specifics of a region. In our application, for example, the DLM incorporates model components for cold-air pooling, basin-scale inversions, and lake and valley effects.

We present results from an application of the method to several multi-day episodes with specific weather conditions and from a continuous application over several months. The results illustrate that the method is capable of representing complex spatio-temporal variations, including the build-up and decay of an inversion and distinct diurnal variations in valleys, the flatland and at lake shores. Comparison of the spatio-temporal DLM to a spatial-only KED shows added value of the DLM in terms of enhanced temporal continuity of trend coefficients. However, cross-validation statistics are not significantly different between the two approaches. Further experiments suggest that this is due to the dense station coverage in Switzerland. In regions or periods where data are sparser, or when model complexity is substantially increased, spatio-temporal modelling is expected to provide a measurable improvement over spatial-only modelling. Our presentation will illustrate the results of our application with animations of the temperature evolution. We will also introduce the extensions, both methodological and technical, implemented for the continuous multi-month application, and quantify the predictive performance of the method in this setting.
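
The core of the DLM idea, namely KED-like trend coefficients that evolve in time as a random walk and are updated by a Kalman filter, can be sketched as follows. This toy version omits the spatially correlated residuals and the terrain-specific model components described above; all numbers are illustrative:

```python
import numpy as np

def dlm_filter(H_seq, y_seq, q=1e-3, r=0.25, p0=10.0):
    """Kalman filter for a regression whose coefficients follow a random walk:
        beta_t = beta_{t-1} + w_t,   y_t = H_t @ beta_t + v_t
    H_t holds the spatial predictors (e.g. a column of ones and elevation) at
    the stations reporting at time t; y_t holds their observations."""
    n_coef = H_seq[0].shape[1]
    beta = np.zeros(n_coef)
    P = np.eye(n_coef) * p0              # diffuse initial coefficient uncertainty
    Q = np.eye(n_coef) * q               # random-walk (evolution) noise
    history = []
    for H, y in zip(H_seq, y_seq):
        P = P + Q                                     # time update (predict)
        S = H @ P @ H.T + np.eye(len(y)) * r          # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        beta = beta + K @ (y - H @ beta)              # measurement update
        P = (np.eye(n_coef) - K @ H) @ P
        history.append(beta.copy())
    return np.array(history)
```

Feeding the filter hourly station data yields smoothly evolving trend coefficients (e.g. a diurnally varying lapse rate), which is the temporal-continuity property mentioned above.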

Our study provides valuable insight into extending a familiar interpolation concept (KED) into a spatio-temporal model suitable for producing sub-daily temperature grid datasets.

How to cite: Frey, L. and Frei, C.: Gridding of Hourly Surface Air Temperature – Application of a Spatio-Temporal Statistical Model in Complex Terrain, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-905, https://doi.org/10.5194/ems2024-905, 2024.

09:45–10:00
|
EMS2024-515
|
Onsite presentation
Line Båserud, Eli Holm Høgemo, Ivar Ambjørn Seierstad, Cristian Lussana, Amélie Neuville, and Thomas Nils Nipen

Today, the volume of data monitoring atmospheric conditions near the Earth’s surface is vast and continually expanding. Among the most rapidly developing sources are observational networks composed of stations managed by citizens. From the perspective of national meteorological services, this citizen-generated data presents an opportunity to enhance the existing networks of traditional weather stations operated by public institutions. For variables like temperature and precipitation, the presence of a dense network that delivers data at hourly or finer sampling rates enables the detailed reconstruction of weather phenomena occurring between the microscale and the mesoscale (i.e. from hundreds of kilometres down to 1-2 kilometres or less). The specific applications of the work we present are: i) development of quality control tests; ii) development of post-processing of observational and gridded datasets, aiming at providing enhanced gridded datasets reconstructing atmospheric variables near the surface.

In previous studies, we analyzed hourly precipitation data collected by a network of citizen-operated stations in Finland, Norway, and Sweden from September 2019 to October 2022. These observations were gathered using Netatmo weather stations, which are commercially available and installed in private homes for various purposes, including home automation. We compared this crowdsourced data with reference observations from WMO-compliant stations managed by the national weather services of the three countries. Our findings indicate that while reference observations consistently fall within the empirical distribution of the crowdsourced data, there are signs that intense precipitation events may be underestimated by crowdsourced data. Further investigation into the spatial variability of crowdsourced precipitation revealed significant deviations between measurements from locations as close as 1 to 5 km apart, with differences reaching up to 50% of the mean hourly precipitation in the area. This variation is partially attributed to the suboptimal siting and exposure of some Netatmo stations. However, it also provides an estimate of the inherent variability of precipitation over short distances.

In this study, we expand our research to include hourly temperature data collected by the same network of crowdsourced stations. We aim to address several research questions concerning the representativeness errors of citizen observations compared to WMO-compliant observations of hourly temperature. Specifically, we investigate whether systematic deviations exist between the two data sources, whether these deviations vary by season, and how accurately a normal distribution can model these deviations. Additionally, we will explore the spread of this normal distribution. Our research also seeks to quantify the typical variability in space of the deviations between crowdsourced data and reference observations, aiming to provide a more comprehensive understanding of representativeness errors.
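
The kind of analysis described, characterising the deviations between crowdsourced and reference observations and checking how well a normal distribution models them, can be sketched with synthetic data as follows (the bias and spread values here are assumptions for illustration, not results):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# Hypothetical paired hourly values: reference station vs nearby citizen station,
# with an assumed warm bias of +0.8 degC and random scatter of 1.2 degC.
t_ref = rng.normal(10.0, 6.0, n)
t_crowd = t_ref + 0.8 + rng.normal(0.0, 1.2, n)

dev = t_crowd - t_ref
mu, sigma = dev.mean(), dev.std(ddof=1)

# How well does a normal distribution model the deviations? Compare empirical
# quantiles with the Gaussian quantiles implied by (mu, sigma).
z = np.array([-1.6449, 0.0, 1.6449])       # standard normal 5%, 50%, 95% points
emp_q = np.quantile(dev, [0.05, 0.50, 0.95])
gauss_q = mu + sigma * z
print(mu, sigma, emp_q - gauss_q)
```

With real data, the same comparison stratified by season and by station distance addresses the seasonal and spatial structure of the representativeness errors.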

How to cite: Båserud, L., Holm Høgemo, E., Seierstad, I. A., Lussana, C., Neuville, A., and Nipen, T. N.: Exploratory analysis of hourly observations of temperature measured by citizen observations over Scandinavia, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-515, https://doi.org/10.5194/ems2024-515, 2024.

10:00–10:15
|
EMS2024-664
|
Onsite presentation
Jouke H.S. de Baar and Gerard van der Schrier

Context. For monitoring, analysing and forecasting the impact of weather and climate change on society, there is an increasing need for high-quality, high-resolution gridded weather and climate services. The official networks of the National Meteorological and Hydrological Services (NMHS) sample only part of the land use, excluding, for example, urban areas; this motivated us to include third-party data, which does sample urban areas. We provide the results both as real-time and as historical gridded data services.

Approach. By including a statistical, simplified model of the observation error in our gridding approach (i.e. multi-fidelity regression Kriging), we did not include any advanced quality control (QC) of the crowd-sourced data. This might sound surprising in the context of weather and climate data, but it is consistent with approaches in other scientific disciplines (e.g. marine engineering, aerospace engineering).

Results. In this study, we investigate the quantitative effect of state-of-the-art quality control on the accuracy of gridded services, by analysing hourly temperature observations in The Netherlands for the year 2023. Our results indicate that quality control of crowd-sourced weather data can potentially increase the accuracy of straightforward – and commonly used – nearest-neighbour approximation, but generally deteriorates the accuracy of more advanced gridded services. This finding indicates that using strict QC procedures to turn crowd-sourced data into a dataset with similar fidelity as the NMHS-sourced data is not the way to go. Therefore, we do indeed continue to blend first-party data, crowd-sourced data and land-use data without applying any advanced quality control to the crowd-sourced data.

Ecosystem. We emphasize that crowd-sourced weather data only improves services when it is blended with a high-quality network of NMHS data and possibly with land-use data. In other words, the availability of crowd-sourced data does not remove the need for high-quality NMHS observation networks. Therefore, we do not intend to present crowd-sourced data as a stand-alone product; rather, it lives in an integrated ecosystem.

How to cite: de Baar, J. H. S. and van der Schrier, G.: Gridding crowd-sourced weather data: is quality control required?, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-664, https://doi.org/10.5194/ems2024-664, 2024.

10:15–10:30
|
EMS2024-424
|
Onsite presentation
Nuria P. Plaza, Marcos Martinez-Roig, Cesar Azorin-Molina, Miguel Andres-Martin, Jorge Navarro, Jesus Fidel Gonzalez-Rouco, Elena Garcia-Bustamante, Jose A. Guijarro, Amir A. S. Piroz, Deliang Chen, Tim R. McVicar, Zengzhong Zeng, and Sergio M. Vicente-Serrano

Reliable near-surface wind speed (NSWS; ~10 m above ground level) data are crucial for assessing the impact of wind changes on various socio-economic and environmental sectors, such as wind energy production or risk assessment. Meteorological stations provide local and realistic observations, but their spatial coverage is limited. Although this limitation can be overcome by using classical geostatistical interpolation methods, the reliability of their results is questionable, especially in regions with complex topography. This has motivated the use of reanalyses or dynamically downscaled simulations as gridded NSWS products that contain local to regional wind data. However, their uncertainties in reproducing observed trends and their coarse resolutions raise doubts about their reliability for reproducing local NSWS. The use of classical interpolation products is even riskier in regions such as the Valencian Community (Eastern Iberian Peninsula, Spain), a region where both local winds (sea breezes) and extreme winds (westerly “ponientes” or convective wind gusts, “downbursts”) occur at local scales (~3 km), with impacts on tourism activities and on fatalities from wildfire propagation, among others.

Here, we propose a deep neural network based on partial convolutions as a more reliable spatial interpolation method, capable of learning the wind speed pattern across the Valencian Community observed in a dense observational network.  Observed NSWS from a citizen weather network of up to ~600 stations from the Valencian Association of Meteorology (AVAMET) were used after homogenization, resulting in a high-resolution (3-km) wind speed product. This offers a new tool for both climate and marine research in the framework of the ThinkInAzul project.
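
The mask-aware building block of such a network, a partial convolution, can be sketched in plain numpy as below. In the actual model the kernel weights are learned; here a uniform (mean) kernel is used purely for illustration:

```python
import numpy as np

def partial_conv_step(field, mask, k=3):
    """One mask-aware (partial) convolution pass with a uniform k x k kernel:
    each output cell averages only the observed neighbours, and the mask is
    filled in wherever at least one neighbour was observed."""
    pad = k // 2
    f = np.pad(field * mask, pad)            # hide unobserved values
    m = np.pad(mask.astype(float), pad)
    out = np.zeros(field.shape, dtype=float)
    new_mask = np.zeros(field.shape, dtype=bool)
    for i in range(field.shape[0]):
        for j in range(field.shape[1]):
            s = m[i:i + k, j:j + k].sum()    # number of observed neighbours
            if s > 0:
                out[i, j] = f[i:i + k, j:j + k].sum() / s   # renormalised mean
                new_mask[i, j] = True
    return out, new_mask
```

Stacking such layers (with learned kernels and nonlinearities) lets a network propagate information from observed stations into unobserved grid cells, which is the principle exploited here for spatial interpolation.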

How to cite: Plaza, N. P., Martinez-Roig, M., Azorin-Molina, C., Andres-Martin, M., Navarro, J., Gonzalez-Rouco, J. F., Garcia-Bustamante, E., Guijarro, J. A., Piroz, A. A. S., Chen, D., McVicar, T. R., Zeng, Z., and Vicente-Serrano, S. M.: Applying citizen weather data and AI for developing a high-resolution wind speed monitor in the Valencia region (Spain), EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-424, https://doi.org/10.5194/ems2024-424, 2024.

Coffee break
Chairperson: Ole Einar Tveito
11:00–11:30
|
EMS2024-67
|
solicited
|
Onsite presentation
Francesco Cavalleri, Francesca Viterbo, Michele Brunetti, Riccardo Bonanno, Veronica Manara, Cristian Lussana, Matteo Lacavalla, and Maurizio Maugeri

Surface air temperature (t2m) data are essential for understanding climate dynamics and assessing the impacts of climate change. Reanalysis products, which combine observations with retrospective short-range weather forecasts, can provide consistent and comprehensive datasets. ERA5 represents the state-of-the-art in global reanalyses and supplies initial and boundary conditions for higher-resolution regional reanalyses designed to capture finer-scale atmospheric processes. However, these products require validation, especially in complex terrains like Italy.

This study analyzes the capability of different reanalysis products to reproduce t2m fields over Italy during the 1991-2020 period. The analyses encompass ERA5, ERA5-Land, the MEteorological Reanalysis Italian DAtaset (MERIDA), the Copernicus European Regional ReAnalysis (CERRA), and the Very High-Resolution dynamical downscaling of ERA5 REAnalysis over ITaly (VHR-REA_IT).

The validation we conduct pertains to both the spatial distribution of 30-year seasonal and annual normal values and the daily anomaly records. Each reanalysis is compared with observations projected onto its respective grid positions and elevations, overcoming any model bias resulting from an inaccurate representation of the real topography.

Key findings reveal that normal values in reanalyses closely match observational values, with deviations typically below 1°C. However, in the Alps, winter cold biases sometimes exceed 3°C and show a relation with elevation. Similar deviations occur in the Apennines, Sicily, and Sardinia. VHR-REA_IT, on the contrary, presents a warm summer bias of about +3°C on average over the Po valley. Daily anomalies generally exhibit lower errors, with MERIDA showing the highest accuracy. Moreover, when aggregating daily anomalies to annual time scales, the errors in the anomaly records rapidly decrease to less than 0.5°C.

The results of this study empower reanalysis users across multiple sectors to gain a more profound insight into the capabilities and constraints of different reanalysis products. This knowledge, in turn, enables them to make well-informed choices when incorporating these products into their research and practical applications.

How to cite: Cavalleri, F., Viterbo, F., Brunetti, M., Bonanno, R., Manara, V., Lussana, C., Lacavalla, M., and Maugeri, M.: Inter-comparison and validation of high-resolution surface air temperature reanalysis fields over Italy, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-67, https://doi.org/10.5194/ems2024-67, 2024.

11:30–11:45
|
EMS2024-1062
|
Onsite presentation
Daniel Hollis, Michael Kendon, and Emily Carlisle

The Met Office has been generating gridded climate datasets from UK in situ observations for over 25 years. The current version of these gridded data is known as the HadUK-Grid dataset – it includes monthly, seasonal, annual and long-term average values for 11 variables at 1km resolution. Three of these variables are also available at daily resolution. These datasets have a variety of applications including climate monitoring, model calibration and model validation.

Although the software, data sources, file formats, grid resolution and distribution methods have all changed since we first started producing these datasets, the techniques used to create the grids from in situ observations have remained largely the same. The observations are first detrended, either by converting the values to anomalies from the long-term average, or by using regression analysis to model the dependency on geospatial variables such as terrain elevation or proximity to the coast. The anomaly values or regression residuals are then interpolated to the target grid points using inverse-distance weighted averaging.

Here we present a review of the uncertainties in our gridded data. The aims of the analysis are threefold – to update the information we provide to users regarding the quality of our gridded products, to better understand the strengths and weaknesses of our gridding methods, and to investigate the efficiency of the quality control tests in our gridding software.

A leave-one-out cross-validation analysis has been carried out for the majority of our archive of gridded data. Time series graphs will be presented showing how the RMS error varies through the data record for each variable. These graphs show clear trends, seasonal cycles and outliers. Case studies have been used to understand the causes of some of these features.
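
The two ingredients, interpolation of (already detrended) station anomalies by inverse-distance weighting and a leave-one-out cross-validation RMSE, can be sketched as follows; this is an illustrative simplification, not the operational Met Office code:

```python
import numpy as np

def idw(x_obs, y_obs, v_obs, x_t, y_t, power=2.0):
    # Inverse-distance weighted prediction of station values at target points.
    d = np.hypot(x_t[:, None] - x_obs[None, :], y_t[:, None] - y_obs[None, :])
    w = 1.0 / np.maximum(d, 1e-9) ** power
    w /= w.sum(axis=1, keepdims=True)
    return w @ v_obs

def loo_rmse(x, y, v):
    # Leave-one-out cross-validation: predict each station from all the others
    # and summarise the errors as a root-mean-square value.
    preds = np.array([
        idw(np.delete(x, i), np.delete(y, i), np.delete(v, i),
            x[i:i + 1], y[i:i + 1])[0]
        for i in range(len(v))
    ])
    return float(np.sqrt(np.mean((preds - v) ** 2)))
```

Computing such an RMSE per month over the archive yields the kind of error time series described above, in which trends, seasonal cycles and outliers become visible.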

Based on this analysis we have investigated aspects of our gridding techniques and quality control methods which have the potential to be improved. Preliminary results are presented which show how different gridding methods affect the cross-validation results and we draw some initial conclusions regarding the impact on gridding uncertainties.

How to cite: Hollis, D., Kendon, M., and Carlisle, E.: Investigating the uncertainty in gridded in situ climate data for the UK, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-1062, https://doi.org/10.5194/ems2024-1062, 2024.

11:45–12:00
|
EMS2024-770
|
Onsite presentation
Olav Ersland, Cristian Lussana, Ivar Ambjørn Seierstad, and Thomas Nils Nipen

We improve the representation of relative humidity fields over Scandinavia by combining outputs from our numerical weather model (MEPS) with real-time weather observations from citizen-managed stations. The method has to work operationally as a part of the automatic forecast at Yr.no, so both computational speed and accuracy are important evaluation metrics.  


The method of choice is Optimal Interpolation (OI), a Bayesian statistical analysis technique. In this approach, the model output serves as the prior information, which is updated by incorporating the more accurate and precise observations from real-time data. OI operates under the assumption that both model predictions and observations have associated uncertainties. By weighing these uncertainties, OI adjusts the model output towards the observations, aiming to minimise the overall error in the final data product. 
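
The OI update described above can be written in a few lines; the grid, covariances and observation values in the example below are hypothetical:

```python
import numpy as np

def optimal_interpolation(xb, y, H, B, R):
    """One Optimal Interpolation update:
        xa = xb + K (y - H xb),   K = B H^T (H B H^T + R)^{-1}
    xb : background field on the grid (the model prior)
    y  : observations; H maps grid values to observation locations
    B  : background-error covariance; R : observation-error covariance"""
    S = H @ B @ H.T + R                   # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)        # gain: weighs prior vs observations
    return xb + K @ (y - H @ xb)

# Tiny 1-D example: a 10-point grid, Gaussian-shaped background-error
# correlations, and two trusted point observations.
grid = np.arange(10.0)
B = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 2.0) ** 2)
H = np.zeros((2, 10))
H[0, 2] = 1.0
H[1, 7] = 1.0
xb = np.zeros(10)
xa = optimal_interpolation(xb, np.array([1.0, 2.0]), H, B, 0.01 * np.eye(2))
```

Because the observation-error variance is small relative to the background error, the analysis is drawn close to the observations at the observed points and relaxes back to the background with distance, following the correlation structure in B.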


A challenge with real-time observations from citizen-managed stations is that they cannot always be trusted, for various reasons. Therefore, we need a method of quality assurance of the observations. A second challenge is to incorporate OI in a practical way in the codebase. For the first challenge we use the titanlib library, developed at the Norwegian Meteorological Institute (MET Norway). This library deals with automatic quality control of weather observations. For the second challenge we use the library gridpp, also developed at MET Norway. 


The quality of the reconstructed relative humidity fields is then evaluated by comparing them against a set of independent observations, which are known to be trusted and accurate. An example would be the weather station at Blindern, outside the headquarters of MET.


The software is openly available at:

Titanlib: https://github.com/metno/titanlib
GridPP: https://github.com/metno/gridpp

How to cite: Ersland, O., Lussana, C., Seierstad, I. A., and Nipen, T. N.: Spatial analysis of 2m relative humidity over Scandinavia, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-770, https://doi.org/10.5194/ems2024-770, 2024.

12:00–12:15
|
EMS2024-567
|
Online presentation
Anna-Maria Tilg, Maral Habibi, Barbara Chimani, and Marion Greilinger

Gridded data are highly valued for many climate-related applications due to the spatial information they provide compared to single station locations. The quality and added value of a gridded dataset depend on the interpolation method itself and on the amount of additional information, e.g. topographical data, used for the interpolation. Furthermore, there is an ongoing discussion on the implications of using gridded datasets of primary climate parameters, like temperature or precipitation, to derive gridded datasets of climate indices. Within the project SDGHUB (https://www.sdghub.at/), we are exploring the influence of different approaches on the final gridded climate index dataset.

The focus is on the frequently used climate index of hot days (days with a maximum air temperature above or equal to 30 °C). In the first approach the hot days were computed from the gridded climate dataset SPARTACUS (Hiebl and Frei, 2016), while in the second approach the hot days were directly interpolated considering station values. SPARTACUS is a national gridded dataset of Austria, available on a daily basis with a 1 km-spatial resolution via the DataHub of GeoSphere Austria (https://data.hub.geosphere.at/). It covers the period from 1961 onwards and includes the parameters of maximum, minimum and mean air temperature and rain amount.  

For the second approach, the efficacy of several geostatistical interpolation methods was explored, including Kriging with External Drift (KED), Ordinary Kriging (OK), and Regression Kriging (RK), with a particular emphasis on their ability to represent the spatial variability of hot days. As KED allows external variables to be included to improve the interpolation results, we decided to proceed with that method. Furthermore, we tested different data transformation methods and variogram models to further improve the results. As for SPARTACUS, the direct interpolation of hot days was done on a 1 km scale for Austria.

To evaluate the performance of the two approaches, a comparison with independent station data was carried out.

The presentation will include information about the two approaches, details about the direct interpolation of hot days and provide insights into the evaluation and differences of the two datasets.


References

Hiebl J, Frei C (2016) Daily temperature grids for Austria since 1961 – concept, creation and applicability. Theor Appl Climatol 124:161–178. https://doi.org/10.1007/s00704-015-1411-4

Acknowledgement

The project SDGHUB is funded by the Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility and Technology (BMK) via the ICT of the Future Program - FFG No 892212.

How to cite: Tilg, A.-M., Habibi, M., Chimani, B., and Greilinger, M.: Gridded datasets of climate indices: comparison of two approaches, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-567, https://doi.org/10.5194/ems2024-567, 2024.

12:15–12:30
|
EMS2024-582
|
Onsite presentation
|
Fabian Lehner, Tatiana Klisho, Philipp Maier, and Herbert Formayer

High-resolution climate data provide great benefits for ecological and environmental climate impact studies where data need to represent local climate conditions. Especially in mountainous terrain, high resolution can accurately resolve mountain valleys and peaks and thus better represent topography-related aspects of meteorological variables. 

CHELSA-W5E5 v1.1 fulfills the need for daily data at a high spatial resolution of 30x30 arc seconds, but it only covers the period 1979-2016 and does not provide certain daily variables necessary for more complex estimations of the water budget with potential evapotranspiration. Given that the current climate usually refers to the period 1991-2020, which is already significantly influenced by anthropogenic climate change, the previously established standard period of 1961-1990 is still recommended for observing long-term climate development. This is particularly relevant in forestry, where the main growth phases of current trees date back several decades.

To extend the temporal coverage of CHELSA-W5E5 v1.1 from the initial span of 38 years (1979-2016) to 60 years (1961-2020) across Europe, we used the ERA5-Land data set and applied quantile mapping against CHELSA on each grid cell. Variables such as wind speed and vapor pressure deficit are only provided on a climatological monthly basis in the similar data set CHELSA V2.1 and had to be newly generated as daily values. The result of this procedure is a 60-year representation of Europe's historical climate, useful for applications in the ecological and environmental sciences.
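
Empirical quantile mapping of this kind can be sketched as follows: a transfer function is learned from matching quantiles over an overlap period and then applied to the series to be adjusted (a generic sketch, not the authors' exact implementation):

```python
import numpy as np

def quantile_map(series, target_ref, source_ref, n_q=100):
    """Empirical quantile mapping: adjust 'series' (e.g. ERA5-Land at one grid
    cell) so that its distribution matches 'target_ref' (e.g. CHELSA), using a
    transfer function learned from the overlap period (source_ref vs target_ref).
    Values outside the calibration range are clipped to the endpoint quantiles."""
    q = np.linspace(0.0, 1.0, n_q)
    q_src = np.quantile(source_ref, q)   # quantiles of the source data
    q_tgt = np.quantile(target_ref, q)   # matching quantiles of the target
    return np.interp(series, q_src, q_tgt)
```

Applying this per grid cell (and, in practice, per calendar month or season) transfers the high-resolution distributional characteristics of the target data set onto the temporally consistent source series.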

A strength of this dataset is its temporal consistency, inherited from ERA5-Land, and its high spatial resolution, inherited from CHELSA. Averaged climate indicators for the periods 1961-1990 and 1991-2020 are available for download (DOI 10.5281/zenodo.10623854). Validation against weather station data in the Alps shows that CHELSA lacks accurate temperature values in some mountainous locations. It is therefore desirable for future versions of CHELSA to improve the accuracy of the vertical lapse rate in mountainous terrain for a more precise representation of valleys.

How to cite: Lehner, F., Klisho, T., Maier, P., and Formayer, H.: European-wide climate indicators for 1961-2020 derived from daily data at 30x30 arc sec resolution, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-582, https://doi.org/10.5194/ems2024-582, 2024.

12:30–12:45
|
EMS2024-261
|
Onsite presentation
Francesco Uboldi, Elena Oberto, Umberto Pellegrini, Martina Lagasio, and Massimo Milelli

Observations from local networks of surface stations, managed by local (regional) weather services, are collected in a nation-wide network providing hourly observations in near real time, accessible through the MyDewetra Database for the Italian national Civil Protection Department. This national network provides detailed information on meteorological fields near the surface; it is, however, inhomogeneous and affected by gross errors and large representativeness errors of various origins. CIMA has undertaken an effort to check this information for quality, operationally and automatically, in order to provide a dataset suitable for various uses, such as model verification, data assimilation, meteorological analysis of past events, and climatological characterization of recent years and the present.

An important part of the work is devoted to ensuring the quality of basic station metadata, such as geographical coordinates and orographic elevation above mean sea level. A station archive for each variable is created and checked and updated monthly.

The main checks on observed meteorological variables are based on the idea of spatial consistency. The Spatial Consistency Test (SCT) is a fine-grained check based on Optimal Interpolation and leave-one-out cross-validation. This and other tests have recently been included in the open-source library titanlib (https://github.com/metno/titanlib).
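
The leave-one-out idea behind the SCT can be conveyed with a deliberately simplified sketch; inverse-distance weighting stands in for Optimal Interpolation and a fixed absolute threshold for the SCT's error statistics, so this is not titanlib's API, only an illustration of the principle:

```python
import numpy as np

def spatial_consistency_flags(x, y, values, threshold=5.0, eps=1e-6):
    """Simplified leave-one-out spatial consistency check: predict each
    observation from all the others by inverse-distance weighting and
    flag it when the deviation from the prediction exceeds `threshold`."""
    x, y, values = map(np.asarray, (x, y, values))
    n = len(values)
    flags = np.zeros(n, dtype=bool)
    for i in range(n):
        others = np.arange(n) != i          # leave station i out
        d = np.hypot(x[others] - x[i], y[others] - y[i])
        w = 1.0 / (d + eps) ** 2            # inverse-squared-distance weights
        predicted = np.sum(w * values[others]) / np.sum(w)
        flags[i] = abs(values[i] - predicted) > threshold
    return flags
```

A single gross outlier in an otherwise consistent field is flagged, while its neighbours, whose predictions it contaminates, remain below a suitably chosen threshold.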

The titanlib test functions are routinely used at CIMA for quality control of precipitation raingauge observations. These quality control checks for precipitation are presently supervised by a meteorologist aware of the current weather state.

For temperature data, the SCT has been reimplemented to allow a detailed analysis of its behaviour on the Italian dataset, and it is run monthly to provide reliable data for model forecast verification.

Plans are in place to:

  • extend the quality control to other meteorological variables, such as relative humidity;

  • test the inclusion of existing high-resolution observational networks operated by amateur citizens;

  • test the impact of such quality-controlled local observations in data assimilation experiments for short-range forecasts;

  • compare SCT results with those obtained by titanlib functions, possibly also contributing to the testing and development of that open-source software.

 

How to cite: Uboldi, F., Oberto, E., Pellegrini, U., Lagasio, M., and Milelli, M.: Automatic quality control for a nation-wide high-resolution network, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-261, https://doi.org/10.5194/ems2024-261, 2024.

12:45–13:00
|
EMS2024-667
|
Onsite presentation
Jouke H.S. de Baar, Else J.M. van den Besselaar, and Gerard van der Schrier

Context. Over the past decades, we have seen important developments in climatological observational gridded data sets. One development is the increasing number of stations included in the gridding process. For any gridded data set, one might ask oneself: (a) How sensitive is the gridded data set to the number of stations? (b) How accurate is the gridded data set? (c) What is the potential improvement of the gridded data set if we add more stations, and does this potential improvement depend on the locally existing station density, the local terrain, etc.?

Approach. We can learn from similar questions in engineering, specifically in the discipline of computational fluid dynamics (CFD). In that field, questions (a) and (b) are addressed in a standardized process of ‘verification’ (a) and ‘validation’ (b). The results of a verification and validation study are usually reported in terms of a grid convergence index (GCI) and (cross-)validation results. This approach was standardized by Roache in his work Verification and Validation in Computational Science and Engineering (Hermosa Publishers, 1998). We aim to apply the same procedure to gridded data sets; our approach is thus an interdisciplinary transfer of methodology.
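
Roache's grid convergence index is a concrete formula, sketched below with the customary safety factor of 1.25 for three-solution studies; in the gridded-data analogue, the "refinement ratio" would correspond to the ratio of station densities (function names are illustrative):

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed order of convergence from three solutions, f1 finest
    and f3 coarsest, obtained with a constant refinement ratio r."""
    return math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)

def grid_convergence_index(f_fine, f_coarse, r, p, safety_factor=1.25):
    """Roache's GCI for the fine solution: the relative difference
    between the two solutions, scaled by the refinement ratio r and
    the observed order of convergence p."""
    e = abs((f_coarse - f_fine) / f_fine)   # relative difference
    return safety_factor * e / (r ** p - 1.0)
```

A small GCI indicates that the gridded value has effectively converged with respect to further densification.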

In addition, we analyze the effects of adding stations by tracking their type of location, terrain, etc. (part of which might also be used as covariates during gridding) when we quantify their effect on the gridded data set. In this way, we can train a simple machine learning model of how sensitive the gridded data set is to the inclusion of stations with specific characteristics. The aim is to identify which type of stations, based on their characteristics, would be the most valuable addition to the data set. We name this last step (c) ‘vacancy-profiling’.

Application. As a first study, we apply this approach to the E-OBS gridded data set for daily mean wind speed. This is an interesting data set, since the network density has increased significantly over the years and the gridding process includes covariate information, which adds detail to the verification and validation processes.

How to cite: de Baar, J. H. S., van den Besselaar, E. J. M., and van der Schrier, G.: Verification, Validation and Vacancy-Profiling (VVV-P) as an assessment of quality and network growth potential of gridded climate data sets, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-667, https://doi.org/10.5194/ems2024-667, 2024.

Posters: Thu, 5 Sep, 18:00–19:30 | Poster area 'Galaria Paranimf'

Display time: Thu, 5 Sep, 13:30–Fri, 6 Sep, 16:00
Chairpersons: Gerard van der Schrier, Cristian Lussana, Ole Einar Tveito
GP46
|
EMS2024-364
Amir Dehkordi, Andreas Hoy, Kelli Marie Jama, and Heidi Tuhkanen

Urban heat islands (UHIs) pose significant challenges for cities of various sizes worldwide. Extreme summer temperatures have strongly increased in frequency and intensity in all parts of Europe, including Estonia, especially since the turn of the millennium. This development has increased the exposure and vulnerability of people and infrastructure to heat impacts. This study contributes to the Horizon Europe funded Regions4Climate project by investigating the characteristics of land surface temperatures (LST) in Pärnu city and county, located in Estonia (north-eastern Europe). We employ satellite remote sensing data to identify areas experiencing the most intense heat in recent summers.

We reviewed the available satellite data for the region via web and literature research, focussing on the May-September timeframe. Considering the available spectral bands, and despite a comparably low revisit frequency, Landsat-8 emerged as the most applicable choice, offering comparably high-resolution data over a record of more than 10 years (2013-2023). Data are available for midday (around 12:30 local time), when the solar altitude is highest. We are additionally investigating alternative remote sensing options (MODIS, ECOSTRESS). Furthermore, we obtained the necessary ancillary data, including city/county borders, building structure data, and open/green space maps.

To identify relevant days, we analyse both the available Landsat-8 data (roughly 16-day revisit cycle) and local weather station records. We select days with maximum air temperatures exceeding 25°C at Pärnu station, combined with minimal cloud cover. The chosen days are processed in a GIS environment to derive LST values for Pärnu city/county. We employ an intuitive colour scale and legend to represent absolute temperatures alongside anomalies from a predefined baseline. Finally, we apply threshold values to identify local areas experiencing overheating, classifying pixels that exceed specific LST thresholds or exhibit significant temperature anomalies.
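
The final classification step amounts to a simple raster operation, sketched below; the threshold values and function name are illustrative placeholders, not the values used in the study:

```python
import numpy as np

def overheating_mask(lst, baseline, abs_threshold=40.0, anom_threshold=5.0):
    """Flag pixels as overheated when the land surface temperature (LST)
    exceeds an absolute threshold or deviates from a baseline
    climatology by more than `anom_threshold` degrees."""
    lst = np.asarray(lst, dtype=float)
    anomaly = lst - np.asarray(baseline, dtype=float)
    # A pixel is flagged if either criterion is met
    return (lst > abs_threshold) | (anomaly > anom_threshold)
```

The resulting boolean raster can be vectorised in a GIS and overlaid with building-structure and green-space layers.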

Our final results, tailored for integration into Pärnu city/county's urban planning processes, will be developed through reiterative consultation to ensure relevance. This information will inform strategies for mitigating UHI effects, improving thermal comfort, and promoting equitable and just climate adaptation. These results will serve as the foundation for a digital decision-making tool by Pärnu City, empowering stakeholders in urban development and land-use planning.

How to cite: Dehkordi, A., Hoy, A., Jama, K. M., and Tuhkanen, H.: Mapping urban heat islands in Pärnu/Estonia by leveraging satellite data, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-364, https://doi.org/10.5194/ems2024-364, 2024.

GP47
|
EMS2024-421
Kinga Bokros, Beatrix Izsák, Mónika Lakatos, and Olivér Szentes

This study investigates the refinement of short-term precipitation interpolation, focusing on regions prone to intense, localized thunderstorms like supercells. Traditional meteorological stations often miss these events due to their limited spatial coverage, leaving significant precipitation accumulations unrecorded, leading to incomplete representations and errors in interpolation. To mitigate these interpolation errors, auxiliary data sources such as satellite imagery, weather forecasts, and radar measurements are crucial for refining interpolation processes and enhancing our understanding of precipitation patterns. In our research we integrate radar background information into the MISH (Meteorological Interpolation based on Surface Homogenized Data) method as documented in the studies authored by Szentimrey and Bihari (2007, 2014).

Using the MISH method, we processed 10-minute precipitation datasets with and without 10-minute radar-derived background information across the study area building on our prior research (Bokros et al., 2023). We examined how MISH handles radar anomalies, including errors, missing data, and spurious measurements from unintended reflections.

Statistical techniques were employed to elucidate the extent to which the inclusion of radar-derived data enhanced the quality of interpolation. Furthermore, our investigation aimed to quantify the robustness of the relationship between interpolations conducted with radar-derived background information and those performed without such supplementary data.

Integrating radar-derived background information into interpolation processes is essential for improving societal resilience, agricultural productivity, and hazard forecasting accuracy in areas susceptible to intense thunderstorms. This improvement can lead to better preparedness and mitigation strategies.

The research was conducted within the framework of the Széchenyi Plan Plus program, with support from the RRF 2.3.1 21 2022 00008 project.

How to cite: Bokros, K., Izsák, B., Lakatos, M., and Szentes, O.: Enhancing precision in short-term precipitation interpolation with radar background: unraveling case studies through 10-minute radar data analysis, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-421, https://doi.org/10.5194/ems2024-421, 2024.

GP48
|
EMS2024-926
On the assessment of temperature and precipitation extremes in Central Europe in gridded observation datasets and reanalysis 
(withdrawn)
Agnieszka Wypych, Pavel Zahradníček, Agnieszka Sulikowska, Petr Štěpánek, and Filip Oskar Łanecki
GP49
|
EMS2024-745
Mercè Barnolas, Antoni Barrera-Escoda, Marc J. Prohom, and Aleix Serra

As we acknowledge the increasing frequency, broader distribution, and heightened intensity of extreme weather events due to climate change, it becomes crucial to gain a deeper understanding of their behaviour. These extreme events have significant societal impacts, requiring better knowledge and tools for assessment.

The Climate Atlas of Extremes in Catalonia, covering the period from 1991 to 2020, serves as a vital reference tool. It provides valuable information to government bodies, regional authorities, businesses, and citizens about the specific characteristics of climate extremes in Catalonia over the past three decades.

In this study, we present the methodology behind the creation of the Catalonia Climate Extremes Atlas. Our goal is to provide a comprehensive analysis of extreme climate events in the Catalonia territory. We will present the key steps involved in this process.

  • Data preparation: we collect and preprocess daily climate data, including mean, maximum, and minimum temperatures, as well as precipitation records. Rigorous quality control procedures are applied to ensure data accuracy.
  • Homogeneity analysis: artificial biases in the data series are addressed using the ACMANTv5 method. This step ensures that the dataset accurately represents the true climate conditions.
  • Extreme indices: to obtain descriptive indices for evaluating extreme events, we utilize the CLIMPACT tool (https://climpact-sci.org/indices/). These indices were defined by the joint CCl/CLIVAR/JCOMM Expert Team (ET) on Climate Change Detection and Indices (ETCCDI) and provide valuable insights into extreme weather phenomena.
  • Digital cartography: we interpolate extreme normals to create georeferenced climate data on a regular grid. The resulting high-resolution digital cartography includes precipitation indices mapped at a 1-kilometer resolution and temperature indices mapped at a 100-meter resolution. Different time scales (monthly, seasonal, and annual) are considered based on relevance. The extreme normals will be freely accessible through the SIG portal of the Government of Catalonia (https://sig.gencat.cat/visors/hipermapa.html).
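
For readers unfamiliar with the ETCCDI indices computed by CLIMPACT-style tools, two of the simplest can be written directly from their definitions (SU: days with daily maximum temperature above 25°C; TR: nights with daily minimum above 20°C). This sketch is for illustration only and is not part of the authors' toolchain:

```python
import numpy as np

def summer_days(daily_tmax):
    """ETCCDI 'summer days' (SU): count of days in the series with
    daily maximum temperature strictly above 25 degC."""
    return int(np.sum(np.asarray(daily_tmax) > 25.0))

def tropical_nights(daily_tmin):
    """ETCCDI 'tropical nights' (TR): count of days with daily
    minimum temperature strictly above 20 degC."""
    return int(np.sum(np.asarray(daily_tmin) > 20.0))
```

Other ETCCDI indices (e.g. percentile-based ones such as TX90p) additionally require a base-period climatology, which CLIMPACT handles internally.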

Ongoing research involves trend calculations and statistical approaches. We will compare different periods of extremes and assess changes. Additionally, we plan to compare our findings with model-projected changes in the same extremes.

How to cite: Barnolas, M., Barrera-Escoda, A., Prohom, M. J., and Serra, A.: Climate Atlas of extremes in Catalonia (1991-2020), EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-745, https://doi.org/10.5194/ems2024-745, 2024.

GP50
|
EMS2024-436
Cristian Lussana, Thomas N. Nipen, Ivar A. Seierstad, Line Båserud, and Amélie Neuville

MET Nordic is a dataset created by the Norwegian Meteorological Institute (MET Norway), providing near-surface variables for Scandinavia, Finland, and the Baltic countries at a resolution of 1 km. This dataset is available through two distinct production streams: MET Nordic RT, designed for real-time data provision to applications, and MET Nordic LTC, aimed at supporting applications that require consistent long-term data. The dataset includes the following near-surface variables: temperature at two metres, precipitation, sea-level air pressure, relative humidity, wind speed and direction, solar global radiation, long-wave downwelling radiation, and cloud area fraction.

MET Nordic RT provides updated products every hour with a 20-minute delay, and it has an archive that goes back to 2012. The dataset consists of post-processed products that (a) describe the current and past weather (analyses) and (b) give our best estimate of the weather in the future (forecasts). The products integrate output from the MetCoOp Ensemble Prediction System (MEPS) as well as measurements from various observational sources, including crowdsourced weather stations. These products are deterministic, that is, they contain only a single realisation of the weather. The forecast product forms the basis for the forecasts on Yr (https://www.yr.no). Both analyses and forecasts are freely available for download in NetCDF format.

For temperature and precipitation, the model output is combined with unconventional observations, such as data from citizen weather stations. Their inclusion shows a clear improvement to the accuracy of short-term temperature forecasts, especially in areas where existing professional stations are sparse. In this study, we will summarise the results obtained with the post-processing and we will share the main lessons learned, which can also be useful for systems that want to use these observations for data assimilation.

MET Nordic LTC is currently in an experimental phase and undergoing more significant modifications than the RT stream. The primary goal of LTC is to extend the temporal coverage of the variables provided by RT, possibly back to 1961, with an emphasis on achieving time consistency, a feature not prioritised in the RT dataset. The approach involves applying post-processing techniques similar to those used for RT but with notable distinctions. Firstly, we utilise a reanalysis dataset, such as the 3-km Norwegian Reanalysis (NORA3), rather than outputs from numerical weather predictions. Secondly, we are developing methods to incorporate observational data in a way that maintains the time consistency of the dataset. This may necessitate using only a selection of high-quality observations that are available over extended periods.

How to cite: Lussana, C., Nipen, T. N., Seierstad, I. A., Båserud, L., and Neuville, A.: MET Nordic dataset: post-processing of model output near-surface fields, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-436, https://doi.org/10.5194/ems2024-436, 2024.

GP51
|
EMS2024-455
Tatiana Klisho, Fabian Lehner, Johannes Laimighofer, and Herbert Formayer

High-resolution gridded climate data derived from in-situ observations play a crucial role in global and regional climatology. The data are a valuable input for climate impact studies, especially in ecological and energy modeling, and can subsequently be used for wind power potential analysis.
Moreover, policymakers can make informed decisions based on accurate climate information derived from these datasets, enhancing the effectiveness of climate-related policies and interventions.

This study explores methods to enhance mean wind speed interpolation over complex topography, resulting in a high-resolution (250x250 m) gridded daily mean wind speed dataset for Austria spanning 1961 to 2023. A two-step approach is tested. First, climatologies for each month are computed using the best-performing interpolation technique; if a machine learning (ML) model proves superior, the best spatial interpolation method is additionally employed to interpolate its residuals. Second, the same interpolation approach is applied to the daily residuals from the monthly climatologies. Combining both fields produces the final gridded daily mean wind speed dataset.

Various spatial interpolation approaches are evaluated, including Inverse Distance Weighting (IDW), 3D IDW (a Euclidean variant that accounts for elevation differences), Thin Plate Splines (tp_spline), Local Polynomial Interpolation (loc_poly), and Kriging approaches (OK, OK_trend, UK, UK_poly). Additionally, the results are compared to regression models such as Ridge Regression (RR), Random Forest Regression (RFR), Decision Tree Regression (DTR), and Gradient Boosting Regression (GBR), as well as to ensembles of these models built by combining different regressors in a pipeline. Each selected regression model is trained independently on the training data, and the final prediction is obtained by averaging the individual model predictions. Each model has the same set of predictors and is set up for each month separately.
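
The 3D IDW variant can be sketched as follows; the elevation scaling factor `z_scale`, which controls how strongly elevation differences count relative to horizontal distance, is an illustrative assumption, as the abstract does not specify the study's actual parameterisation:

```python
import numpy as np

def idw_3d(x, y, z, values, xq, yq, zq, power=2.0, z_scale=100.0, eps=1e-9):
    """Inverse-distance weighting in three dimensions: horizontal
    coordinates plus elevation, with elevation differences scaled by
    `z_scale` before entering the Euclidean distance."""
    x, y, z, values = map(np.asarray, (x, y, z, values))
    d = np.sqrt((x - xq) ** 2 + (y - yq) ** 2 + (z_scale * (z - zq)) ** 2)
    if np.any(d < eps):             # query point coincides with a station
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power            # inverse-distance weights
    return float(np.sum(w * values) / np.sum(w))
```

With a large `z_scale`, a valley station contributes little to a nearby mountain-top grid cell even when the horizontal distance is small.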

Additionally, qualitative comparisons will be conducted with other high-resolution gridded datasets. The dataset will be made publicly available for download.

How to cite: Klisho, T., Lehner, F., Laimighofer, J., and Formayer, H.: High-resolution (250x250m) gridded daily mean wind speed dataset for Austria spanning from 1961 to 2023, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-455, https://doi.org/10.5194/ems2024-455, 2024.

GP52
|
EMS2024-411
Anna Rohrböck, Johann Hiebl, Francesco Isotta, and Anna-Maria Tilg

Access to spatially comprehensive information of climate variables spanning multiple decades is crucial for various applications, including ecosystem modelling, climate monitoring, and the evaluation of climate models. However, existing observational temperature datasets for climate monitoring in Austria often exhibit limitations in either temporal extension or spatial comprehensiveness. The HISTALP dataset provides homogenized monthly observation series of air temperature for the greater Alpine region, with records dating back to the 19th or even 18th century, but with limited spatial coverage. Conversely, the Austrian spatial climate observation dataset SPARTACUS offers daily-resolved high-resolution spatial grids of air temperature but is restricted to the period after 1961.

This study aimed to address these limitations by constructing a temporally consistent grid dataset of monthly air temperature for Austria, covering the period from 1781 to 2020. Combining the strengths of both the HISTALP and SPARTACUS datasets, we applied a statistical reconstruction technique called "Reduced Space Optimal Interpolation" (RSOI), which combines Principal Component Analysis (PCA) and Optimal Interpolation (OI). This methodology allowed us to merge the long-term, continuous, and homogeneous mean air temperature series from HISTALP with the high-resolution grids derived from SPARTACUS. A further advantage of this method is the possibility to reconstruct the temperature evolution during the early instrumental period even in regions where direct observations were lacking at that time.
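
A stripped-down version of the reduced-space idea may help fix the concept; plain least squares stands in for the Optimal Interpolation weighting of observation and truncation errors, so this illustrates only the reduced-space step, not the authors' RSOI implementation:

```python
import numpy as np

def reduced_space_reconstruct(calib_fields, station_idx, station_anoms, n_modes=3):
    """Simplified reduced-space reconstruction: derive the leading EOFs
    from high-resolution calibration fields (time x gridpoints), fit
    their amplitudes to sparse station anomalies by least squares, and
    rebuild the full anomaly field."""
    fields = np.asarray(calib_fields, dtype=float)
    fields = fields - fields.mean(axis=0)        # anomalies w.r.t. calibration mean
    # EOFs are the right singular vectors of the anomaly matrix
    _, _, vt = np.linalg.svd(fields, full_matrices=False)
    eofs = vt[:n_modes]                          # (n_modes, gridpoints)
    # Least-squares fit of mode amplitudes at the station gridpoints
    a, *_ = np.linalg.lstsq(eofs[:, station_idx].T, station_anoms, rcond=None)
    return a @ eofs                              # reconstructed anomaly field
```

Here the dense SPARTACUS-like grids supply the spatial patterns (EOFs), while the sparse long HISTALP-like series supply the amplitudes for each past month.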

The resulting grid dataset, named SOCRATES (Spatial Reconstruction of Climate in Austria Combining SPARTACUS and HISTALP Datasets), provides monthly grids of air temperature anomalies back to 1781 with respect to the reference period 1961-1990. These anomaly grids allow the derivation of absolute temperature grids as well as seasonal and annual aggregates. Besides details on the method, we will present some results of the evaluation. A comparison of the reconstruction with observations using leave-one-out cross-validation showed a bias close to zero across all reconstruction periods and seasons. The mean absolute error (MAE) decreased over the considered reconstruction periods, i.e. from 0.35 K for 1781-2020 to 0.22 K for 1951-2020, regarding full years. Furthermore, the MAE showed a seasonal dependence, with the lowest errors in summer and the highest in winter. The applicability of the reconstructions also depends on the region within Austria. In the low-lying parts of northern and eastern Austria, the results demonstrated high reconstruction skill even for the earliest reconstruction period, while for southern Austria and high elevations it is recommended to consider reconstruction periods starting in 1851 or later. Overall, the results emphasized the capability of SOCRATES in achieving high temporal consistency, which is essential for its use in long-term spatial climate monitoring in Austria.

How to cite: Rohrböck, A., Hiebl, J., Isotta, F., and Tilg, A.-M.: Reconstruction of long-term consistent air temperature grids for Austria back to 1781, EMS Annual Meeting 2024, Barcelona, Spain, 1–6 Sep 2024, EMS2024-411, https://doi.org/10.5194/ems2024-411, 2024.