EOS4.8 | BUGS: Blunders, Unexpected Glitches, and Surprises
Co-organized by BG0/EMRP1/ESSI4/GD10/GI1/GI6/GM11/GMPV1/PS0/SM2/SSS11/ST4
Convener: Ulrike Proske (ECS) | Co-conveners: Laetitia Le Pourhiet, Daniel Klotz (ECS), Nobuaki Fuji, Jonas Pyschik (ECS)
Orals | Tue, 29 Apr, 16:15–18:00 (CEST) | Room -2.33
Posters on site | Attendance Tue, 29 Apr, 14:00–15:45 (CEST) | Display Tue, 29 Apr, 14:00–18:00 | Hall X2

Orals: Tue, 29 Apr | Room -2.33

The oral presentations are given in a hybrid format supported by a Zoom meeting featuring on-site and virtual presentations. The button to access the Zoom meeting appears just before the time block starts.
Chairpersons: Ulrike Proske, Jonas Pyschik, Daniel Klotz
16:15–16:20
Experimental work and observations
16:20–16:30 | EGU25-1660 | On-site presentation
Nick van de Giesen and John Selker

In the early 1990s, fractals and chaos were hot. In 1987, James Gleick had published "Chaos: Making a New Science", popularizing non-linear dynamics. Hydrologists played an important role in the development of fractal theory. Hurst had discovered that sequences of dry and wet years for the Nile showed very long memory effects: instead of the chance of a dry year following a dry year being 50%, he found surprisingly many long series of dry or wet years. Seven fat years, seven lean years, as noted in Genesis. Scott Tyler found fractals in soils ("Fractal processes in soil water retention"). At Cornell, where we were at the time, David Turcotte described "Fractals in geology and geophysics". A few years later, Ignacio Rodríguez-Iturbe and Andrea Rinaldo would publish "Fractal River Basins: Chance and Self-Organization". In short, fractals were exciting scientific gold.

A fractal is not just an obscure mathematical object but something that can actually be found everywhere in nature. Early on, a paper was published in Nature with the title "Fractal viscous fingering in clay slurries" by Van Damme, Obrecht, Levitz, Gatineau, and Laroche. They "only" did an experiment on a fractal embedded in 2D; we should be able to do one better and find the fractal dimension of the surface of cracking clay embedded in 3D. So out we went, collected some clay, mixed it with water in a cement mixer, siliconed together a shallow "aquarium", and poured in the slurry. To observe the cracking of the drying slurry, a video camera was mounted above the experiment, looking down and taking time-lapse images. To capture the views from the sides, mirrors were installed at 45 degrees on each of the four sides. Lights made sure the camera captured high-quality images, and the whole set-up was enclosed in a frame with dark cloth to ensure that the lighting was always the same. We already had some box-counting code ready to calculate the fractal dimension of the surface, called the Minkowski–Bouligand dimension. One variable needed some extra attention, namely the boundary between the clay slurry and the glass sides. If the clay clung to the sides, it would be difficult to understand the effects of this boundary condition on the outcome of the experiment. Moreover, the cracks might not have been visible in the mirrors if the sides were covered with mud. So, instead, it was decided to make the sides hydrophobic with some mineral oil, ensuring that when the clay started to shrink, it would come loose from the sides. Now, all we had to do was wait. It took only a week or so before the consolidated slurry started to shrink and come loose from the sides. After that, the clay continued to shrink for many weeks. This is how we learned that the fractal dimension of a shrinking brick of clay is (very close to) 3.0.
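The box-counting estimate itself is simple; a minimal Python sketch of the idea (not the original code; the box sizes are illustrative): cover the image with boxes of side s, count the boxes N(s) that intersect the structure, and read the dimension off the slope of log N(s) against log(1/s).

import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension of the True pixels in a 2-D mask."""
    counts = []
    for s in sizes:
        # Trim the image so that it tiles exactly into s x s boxes.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        tiles = mask[:h, :w].reshape(h // s, s, w // s, s)
        # Count the boxes that contain at least one occupied pixel.
        counts.append(np.count_nonzero(tiles.any(axis=(1, 3))))
    # The dimension estimate is the slope of log N(s) vs. log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope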

How to cite: van de Giesen, N. and Selker, J.: The Minkowski–Bouligand dimension of a clay brick, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-1660, https://doi.org/10.5194/egusphere-egu25-1660, 2025.

16:30–16:40 | EGU25-11357 | On-site presentation
Peter Manshausen, Anna Tippett, Edward Gryspeerdt, and Philip Stier

The idea of invisible ship tracks for the study of aerosol-cloud interactions sounds promising: we have been studying the effects of aerosols on clouds for many years, among other approaches by investigating the bright lines that ships leave in low marine clouds. However, only a small fraction of ships leaves behind visible tracks. This means we can only study aerosol-cloud interactions under certain meteorological conditions, biasing our understanding. Instead, by studying all clouds polluted by ships ('invisible ship tracks') with a methodology we developed, we should be able to get a full picture of aerosol-cloud interactions. A number of interesting and impactful results have come out of this research, along with several setbacks and corrections to initial results. Here, we examine them in order, showing how correcting for one identified bias can introduce two new ones. Unexpected glitches arise from sources as varied as choices in the ship track definition, retrieval geometry, specific weather systems biasing results, and mathematical subtleties. What can we conclude after four years of progress on this methodology? While some results still stand, others had to be significantly corrected. This makes us see invisible ship tracks as an example of research that is closer to a method of 'tinkering' than to a 'magnificent discovery'.

How to cite: Manshausen, P., Tippett, A., Gryspeerdt, E., and Stier, P.: Two steps forward, one step back: four years of progress and setbacks on invisible ship tracks, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-11357, https://doi.org/10.5194/egusphere-egu25-11357, 2025.

Modelling
16:40–16:50 | EGU25-10615 | solicited | On-site presentation
Jan Seibert, Franziska Clerc-Schwarzenbach, Ilja van Meerveld, and Marc Vis

Failures are common in science, and hydrological modelling is no exception. However, we modellers usually do not like to talk about our mistakes or our overly optimistic expectations, and thus "negative" results usually do not get published. While there are examples where model failures indicated issues with the observational data, in this presentation the focus is on modelling studies where some more (realistic) thinking could have helped to avoid disappointments. Examples include the unnecessary comparison of numerically identical model variants, naively optimistic expectations about increasing the physical basis of bucket-type models, and excessively hopeful assumptions about the value of data.

How to cite: Seibert, J., Clerc-Schwarzenbach, F., van Meerveld, I., and Vis, M.: Think twice – pitfalls in hydrological modelling, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-10615, https://doi.org/10.5194/egusphere-egu25-10615, 2025.

16:50–17:00 | EGU25-1620 | On-site presentation
Svenja Fischer

Statistical models are a frequently used tool in hydrology, especially when it comes to estimating design floods, i.e. flood events that are used to design flood protection systems or reservoirs. The often complex hydrological data, which are affected by e.g. missing values, extremes or time-varying processes, require sophisticated statistical models that take these challenges into account. As a scientist, developing such models can be a lot of fun and provide interesting insights. After months of thinking about the best model under certain statistical assumptions, proving asymptotic theorems and testing the model with synthetic data, you are happy and proud to have developed a new model. This model will hopefully be widely used in future research. The next step is to apply the model to a large real data set. The results look good on average. The results are shared with practitioners, because of course you want the model to be useful for science and practice. And then: the phone call. You are told that your results are not plausible for a certain catchment area, and that, in general, the new model is not needed in practice because there is an established model. This contribution describes such a case and discusses ways of dealing with it. It is intended to illustrate the importance of communication between science and practice and of a general understanding between both sides.

How to cite: Fischer, S.: When practical considerations impact your scientific model, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-1620, https://doi.org/10.5194/egusphere-egu25-1620, 2025.

17:00–17:10 | EGU25-5035 | On-site presentation
Stefan Hergarten and Jörg Robl

In 2018, we found exciting new results in landform evolution modeling by coupling the two simplest models of fluvial erosion and hillslope processes. While the stream-power incision model is the simplest model for detachment-limited fluvial erosion, the diffusion equation is the simplest description of hillslope processes at long timescales. Both processes were added at each grid cell without an explicit separation between channels and hillslopes because fluvial erosion automatically becomes dominant at large catchment sizes and negligible at small catchment sizes.
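Written out (a standard form of such a coupling, with elevation z, uplift rate U, upstream catchment area A, erodibility K, hillslope diffusivity D, and stream-power exponents m and n), the equation solved at each grid cell is

\[ \frac{\partial z}{\partial t} = U - K A^m \lvert \nabla z \rvert^n + D \nabla^2 z , \]

where the stream-power term dominates at large A (channels) and the diffusion term at small A (hillslopes).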

We found that increasing diffusion reduces the relief at small scales (individual hillslopes) but even increases the large-scale relief (entire catchments). As an immediate effect, the hillslopes become less steep. In turn, however, we observed that the network of clearly incised valleys, which indicates dominance of fluvial erosion over diffusion, became smaller. So a smaller set of fluvially dominated grid cells had to erode the material entering from the hillslopes. To maintain a morphological equilibrium with a given uplift rate, the rivers had to steepen over long times. This steepening more than compensated for the immediate decrease in relief of the hillslopes.

This result was counterintuitive at first, but we were happy to find a reasonable explanation. We even prepared a short manuscript for a prestigious journal. We just did not submit it because we wanted to explain the effect quantitatively from the physical parameters of the model. From these theoretical considerations, we found that our numerical results depended not only on the model parameters but also on the spatial resolution of the model, and we noticed that this scaling problem had already been discussed in a few published studies. Beyond the scaling problem, we also realized that applying the concept of detachment-limited fluvial erosion to the sediment brought from the hillslopes into the rivers is quite unrealistic. A later study including fluvial sediment transport and a model for hillslope processes that avoids scaling problems did not predict any increase in large-scale relief. So we finally realized that our original findings were mainly the result of a specific combination of models that should not be coupled this way and are not as relevant for landform evolution as we thought.

This example illustrates many of the pitfalls of numerical modeling beyond purely technical issues. In particular, combining models that are widely used and make sense individually may still cause unexpected problems.

 

How to cite: Hergarten, S. and Robl, J.: Landslides and hillslope erosion increase relief, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-5035, https://doi.org/10.5194/egusphere-egu25-5035, 2025.

17:10–17:20 | EGU25-10285 | ECS | On-site presentation
Felix Jäger, Petra Sieber, Isla Simpson, David Lawrence, Peter Lawrence, and Sonia I. Seneviratne

Historically, large areas across the globe have been affected by deforestation or irrigation expansion. The replacement of forests with agricultural land and the increased water availability in irrigated croplands altered the land's surface properties, so that biogeophysical changes influence near-surface temperature. From limited observations and mostly idealized simulations, we know that sufficiently large alterations of land surface properties can theoretically lead to systematic temperature and precipitation changes outside and even far from the altered areas. Not only the advection of temperature anomalies but also changes in circulation and ocean feedbacks have been shown to be potential drivers of such non-local responses in single- and multi-model studies.

We tested the robustness of non-local temperature signals to internal variability in fully coupled Community Earth System Model 2 (CESM2) simulations of the historical period (1850–2014) with all forcings vs. all-but-land-use-change forcings. Doing so, we first found seemingly robust non-local temperature effects of land use change on the global and regional scale. But when accounting for the sampling of internal variability in the model using a large initial-condition ensemble, the global-scale signal was found to be indistinguishable from noise. Only regionally, in some hotspots, did we find robust and historically important non-local temperature signals. Through increasingly rigorous analysis, we reached a partly negative and unexpected but important finding, which may have implications for future assessments of comparably weak or spatially heterogeneous forcings to the Earth system.
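A minimal sketch of such a robustness test, assuming paired ensemble members along the first axis and a simple two-standard-error criterion (both illustrative choices, not necessarily the exact procedure used here):

import numpy as np

def robust_signal(all_forcings, no_lulcc):
    """Members on axis 0; remaining axes are space and/or time."""
    diff = all_forcings.mean(axis=0) - no_lulcc.mean(axis=0)
    # Spread of the member-wise differences measures internal variability.
    spread = (all_forcings - no_lulcc).std(axis=0, ddof=1)
    n = all_forcings.shape[0]
    # The signal counts as robust where it exceeds twice its standard error.
    return np.abs(diff) > 2.0 * spread / np.sqrt(n)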

How to cite: Jäger, F., Sieber, P., Simpson, I., Lawrence, D., Lawrence, P., and Seneviratne, S. I.: How robust are modeled non-local temperature effects of historical land use changes really?, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-10285, https://doi.org/10.5194/egusphere-egu25-10285, 2025.

17:20–17:30 | EGU25-5951 | Highlight | On-site presentation
Lukas Brunner, Maximilian Meindl, and Aiko Voigt

"Doesn't this look a bit strange?" 

It began with an innocent question during one of our Master's colloquia. And it could have ended there. "We were just following an approach from the literature". And who could argue against following the literature?

But it bugged me. During a long train ride, I began to think about the issue again. Ten hours and many papers later, I was only more confused: was it really that obvious, and why had no one picked up on it before? But sometimes the most obvious things are the most wicked, and after a few conversations with knowledgeable colleagues, I was sure we were in for an unexpected surprise.

A commonly used approach is to define heat extremes as exceedances of percentile-based thresholds that follow the seasonal cycle. Such relative extremes are then expected to be evenly distributed throughout the year. For example, over the 30-year period 1961–1990, we expect three (or 10%) of January 1sts to exceed a 90th percentile threshold defined for the same period, and the same for all other days of the year. In a recent study, we show that there are many cases where this does not hold, not even close (Brunner and Voigt 2024).
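The expectation is easy to verify on synthetic data; a minimal sketch (the years-by-days array layout and Gaussian noise are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
tas = rng.normal(size=(30, 365))       # 30 years of daily temperature

p90 = np.percentile(tas, 90, axis=0)   # day-of-year 90th percentile
exceed = (tas > p90).mean(axis=0)      # exceedance frequency per calendar day

print(exceed.mean())                   # ~0.10, i.e. ~3 of 30 years per day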

Here, we tell the story of how this blunder spread in the literature out of the desire to improve extreme thresholds. We show that seemingly innocent changes can sometimes have unintended consequences and that taking the time to check the obvious can help avoid mistakes in science. 

 

Brunner L. and Voigt A. (2024): Pitfalls in diagnosing temperature extremes, Nature Communications, https://doi.org/10.1038/s41467-024-46349-x

How to cite: Brunner, L., Meindl, M., and Voigt, A.: Improving extreme temperature definitions until they are wrong, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-5951, https://doi.org/10.5194/egusphere-egu25-5951, 2025.

17:30–17:40 | EGU25-5091 | ECS | On-site presentation
Guillemette Legrand

In this presentation, I will discuss my research into the simple climate model Hector, which calculates temperature change based on the impact of various climate scenarios. More specifically, I will discuss how an artist-led approach working through (in)voluntarily caused computational bugs can help document the model's logic and socio-political implications. I will describe methods for collective 'debugging' that produce transdisciplinary knowledge (beyond solely scientific inquiry) and open up conversation about the potential and limits of current climate infrastructure to foster concrete climate action. This research investigates the field of climate science through artistic practice, software and infrastructure studies, and participatory methods. To expand on the role of bugs in my investigation, I will elaborate on concrete examples of differences in the perception of 'error' in the fields of arts and science, looking at case studies where mistakes or glitches have been valorised and mobilised through artistic practice to grapple with, appropriate, and/or repurpose scientific instruments.

How to cite: Legrand, G.: (Re)(De)bugging tragedies with Hector, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-5091, https://doi.org/10.5194/egusphere-egu25-5091, 2025.

Fieldwork
17:40–17:50 | EGU25-15457 | On-site presentation
Markus Weiler

In hydrology we measure and follow the water. What if there is too much or too little? It happens a lot. As a field hydrologist, I frequently have to determine the location of a measurement, the time to take the measurement, the location to set up a field experiment, or the amount of tracer to inject to study a hydrological system. However, this is a very bumpy road, as variability is often not in favor of my decisions: the distribution is wider than expected, bimodal instead of unimodal, or the probability of an event is theoretically small and yet an extreme event occurs during our experiment. I will showcase some examples to demonstrate what I mean and what I experienced, as well as how frequently the PhD students or postdocs have suffered as a result of my decisions or of the unexpected variability. Climatic variability resulted in a winter without snow, just as new sensors had been deployed. Or the winter snowpack was extremely high, preventing any work at high altitudes in the Alps until mid-July, thereby cutting our field season in half. An ecohydrological study to observe the effects of drought in a forest with a rainout shelter was ineffective because it took place during an extremely dry year, making the control just as dry as our drought treatment. An automatic water sampler was set up to collect stream water samples, but it was washed away four weeks later by a 50-year flood. The calculated amount of artificial tracer was either far too low, because the transit times of the system were much longer than expected, or far too high, resulting in colored streams or samples that had to be diluted by a factor of 100 due to much faster transit times. Finally, and most expensively, we installed many trenches along forest roads to measure subsurface stormflow, but after three years we abandoned the measurements because we never measured a drop of water coming out of the trenches: the bedrock permeability was much higher than expected due to many highly permeable fissures that prevented the formation of subsurface stormflow. These experiments or observations failed because of unexpected variability in input or system properties, or a lack of technical variability in the equipment. I will reflect on the residual risk of failure in fieldwork related to that crux and discuss approaches to reduce this risk.

How to cite: Weiler, M.: The crux with variability: too much or too little, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-15457, https://doi.org/10.5194/egusphere-egu25-15457, 2025.

17:50–18:00 | EGU25-18185 | On-site presentation
Tim van Emmerik and the WUR-HWM River Plastic Team

Rivers play an important role in the global distribution of plastic pollution throughout the geosphere. Quantifying and understanding river plastic pollution is still an emerging field, which has advanced considerably thanks to broad efforts from science, practice, and society. Much progress in this field has been achieved through learning from failures, negative results, and unexpected outcomes. In this presentation we will provide several examples of serendipity and stupidity that have led to new insights, theories, methods, and completely new research lines. We will share what we learned from rivers flowing in the wrong direction, sensors that disappear, equipment blocked by invasive plants, and dealing with suspicious local authorities. Pushing the science sometimes requires an opportunistic approach, embracing the surprises and chaos you may face along the way.

How to cite: van Emmerik, T. and the WUR-HWM River Plastic Team: Advancing river plastic research through serendipity and stupidity, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-18185, https://doi.org/10.5194/egusphere-egu25-18185, 2025.

Posters on site: Tue, 29 Apr, 14:00–15:45 | Hall X2

The posters scheduled for on-site presentation are only visible in the poster hall in Vienna. If authors uploaded their presentation files, these files are linked from the abstracts below.
Display time: Tue, 29 Apr, 14:00–18:00
Chairpersons: Laetitia Le Pourhiet, Nobuaki Fuji
Communication
X2.28 | EGU25-18981 | ECS
Stefan Gaillard

Addressing positive publication bias and clearing out the file drawer has been at the core of the Journal of Trial and Error since its conception. Publishing the trial-and-error components of science is advantageous in numerous ways, as already pointed out in the description of this panel: errors can lead to unexpected insights, and warning others about dead ends can prevent wasted time and other resources. Besides those advantages, publishing negative and null results facilitates conducting robust meta-analyses. In addition, predictive machine learning models benefit from training on data from all types of research rather than just data from studies with positive, exciting results; researchers are already reporting that models trained on published data are overly optimistic.

Besides publishing negative and null results as well as methodological failures, the Journal of Trial and Error couples each published study with a reflection article. The purpose of these reflection articles is to have a philosopher, sociologist or domain expert reflect on what exactly went wrong. This contextualizes the failure, helping to pinpoint the systematic factors at play and allowing the authors and other scientists to draw lessons from the reported research struggles that can be applied to improve future research.

Publishing failure brings with it some practical challenges: convincing authors to submit manuscripts detailing their trial and error; instructing peer reviewers on how to conduct peer review for these types of articles; differentiating between interesting … and uninformative, sloppy science; and determining the best formats in which to publish various failure-related outcomes. Authors are still hesitant to publish their research struggles due to reputational concerns and time constraints. In addition, authors often fear that peer reviewers will be more critical of articles describing research failures than of articles reporting positive results. To counteract this (perceived) tendency, we provide specific instructions to peer reviewers to assess only the quality of the study, without taking the outcome into account. This also ensures that we only publish research that adheres to the standards of the field rather than sloppy science. Whether submitted research provides informative insights is assessed by the editor-in-chief and the handling editor.

Finally, we are constantly evaluating and innovating the types of articles we publish. Various types of errors and failures benefit from differing ways of reporting. For example, we recently introduced serendipity anecdotes, a format in which scientists can anecdotally describe instances of serendipity that occurred during their research. This format allows researchers to focus on the conditions that allowed for the serendipitous discovery rather than on the research itself.

How to cite: Gaillard, S.: Publishing BUGS: Insights from the Journal of Trial and Error, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-18981, https://doi.org/10.5194/egusphere-egu25-18981, 2025.

X2.29 | EGU25-20866 | ECS
Jan Gärtner, Ulrike Proske, Nils Brüggemann, Oliver Gutjahr, Helmuth Haak, Dian Putrasahan, and Karl-Hermann Wieners

Climate models are not only numerical representations of scientific understanding but also human-written software, inherently subject to coding errors. While these errors may appear minor, they can have significant and unforeseen effects on the outcomes of complex, coupled models. Despite robust testing and documentation practices in many modeling centers, the broader implications of bugs remain underexplored in the climate science literature.

We investigate a sea ice bug in the coupled atmosphere-ocean-sea ice model ICON, tracing its origin, effects, and implications. The bug stemmed from an incorrectly set logical flag, which caused the ocean to bypass friction from sea ice, leading to unrealistic surface velocities, especially in the presence of ocean eddies. We introduce a concise and visual approach to communicating bugs and conceptualize this case as part of a novel class of resolution-dependent bugs - long-standing bugs that emerge during the transition to high-resolution models, where kilometer-scale features are resolved.
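Purely as an illustration of the class of bug (hypothetical Python, not ICON code), a mis-set logical flag can silently drop an entire physical term without any crash or warning:

APPLY_ICE_OCEAN_STRESS = False  # the bug: flag should have been True

def ocean_surface_stress(wind_stress, ice_drag):
    """Toy surface stress; with the flag unset, the ocean never feels the ice."""
    stress = wind_stress
    if APPLY_ICE_OCEAN_STRESS:
        stress -= ice_drag  # friction exerted by sea ice on the ocean
    return stress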

By documenting this case, we highlight the broader relevance of addressing bugs and advocate for universal adoption of transparent bug documentation practices. This documentation complements the robust workflows already employed by many modeling centers and ensures lessons from individual cases benefit the wider climate modeling community.

How to cite: Gärtner, J., Proske, U., Brüggemann, N., Gutjahr, O., Haak, H., Putrasahan, D., and Wieners, K.-H.: A case for open communication of bugs in climate models, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-20866, https://doi.org/10.5194/egusphere-egu25-20866, 2025.

Fieldwork
X2.30 | EGU25-17676 | ECS
Rahel Hauk, Adriaan J. Teuling, Tim H.M. van Emmerik, and Martine van der Ploeg

Plastic pollution is a global issue across all environmental compartments. Rivers connect the terrestrial with the marine environment, and they transport various materials, among them plastic. Rivers not only transport plastic but also accumulate and store it, especially on riverbanks; in fact, plastic deposition and accumulation on riverbanks is a common occurrence. However, our understanding of why plastic is deposited on a certain riverbank is rather limited. Riverbanks along all major Dutch rivers have been monitored for plastic and other litter twice a year by citizen scientists, in some locations since 2018. This provides an extensive dataset on plastic accumulation, and we used these data with the aim of understanding the factors that determine the variability of plastic concentration and accumulation over time and space. We tested multiple riverbank characteristics hypothesized to be related to plastic litter, such as vegetation, riverbank slope and population density. After having exhausted a long list of auxiliary data and analysis strategies, we found no significant results. Ultimately, we had a close look at ten consistent hotspots of macroplastic litter along the Meuse and Waal rivers. And once again, they seem to have nothing in common. But there is a pattern: some riverbanks consistently have very high densities of plastic litter, so it does not seem completely random. We have been looking to explain spatial variability, whereas we might have to look at temporal consistency, and we shall not give up our efforts to bring order to this chaos.

How to cite: Hauk, R., Teuling, A. J., van Emmerik, T. H. M., and van der Ploeg, M.: What river plastic hotspots do not have in common, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-17676, https://doi.org/10.5194/egusphere-egu25-17676, 2025.

X2.31 | EGU25-17811
Mita Uthaman, Laura Ermert, Angel Ling, Jonas Junker, Cinzia Ghisleni, and Anne Obermann

Grande Dixence, the tallest gravity dam in the world, is located in the Swiss Alps on the Dixence River, with a catchment area of 4 km² at a towering elevation of 2000 m. The lake serves as a collecting point for melt water from 35 glaciers; it reaches full capacity by late September, subsequently draining during winter and dropping to its lowest level in April. For a reservoir as large as the Grande Dixence, the variation in hydrological load can be expected to induce changes in crustal stress. The goal of this study was to harness the time-varying reservoir load as a source of known stress to investigate variations in the seismic velocity of the bedrock due to induced changes in crustal stress and strain rates. Twenty-two seismic nodes were thus deployed along the banks of the reservoir, operational from mid-August to mid-September, the period when the lake level reaches its maximum. Of the 22 nodes, 18 were deployed in closely spaced patches of six in order to carry out coherent stacking and increase the signal-to-noise ratio, besides one group of three nodes and one single node. Measurement quality appears satisfactory: small local earthquakes are recorded well, and the probabilistic power spectral densities (PPSDs) computed for data quality validation show the ambient noise levels to be well within the global noise limits. However, the recorded noise is unexpectedly complex and, at periods shorter than 1 second, varies strongly by location. The 0.5–5 s (0.2–2 Hz) period band at lakes generally records a diurnally varying noise level, often associated with lake-generated microseism. Diurnal variations around 1 second of period are observed in our study as well. The amplitude of the ambient noise level around 1 second of period is highest when the lake level changes, along with the prominent diurnal variation. A similar variation is observed in the seismic velocity variation (dv/v) computed from cross-correlated and auto-correlated ambient noise filtered between 0.5–1 Hz, with dv/v exhibiting a drop as the lake level rises. These results provide preliminary evidence for a possible change in crustal stress state with changing hydrological load. Future directions for this study consist of analytically modeling the results to quantify the influence of thermobarometric parameters on PPSDs and dv/v, and deconvolving it from the lake-induced variations.
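A minimal ObsPy sketch of the PPSD-based quality check mentioned above (file names and channel selection are placeholders):

from obspy import read, read_inventory
from obspy.signal import PPSD

st = read("node_01.mseed")           # hypothetical node recording
inv = read_inventory("node_01.xml")  # matching instrument response
tr = st.select(channel="*Z")[0]      # vertical component

ppsd = PPSD(tr.stats, metadata=inv)  # set up the estimator
ppsd.add(st)                         # accumulate spectra window by window
ppsd.plot()                          # plot against the global noise models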

How to cite: Uthaman, M., Ermert, L., Ling, A., Junker, J., Ghisleni, C., and Obermann, A.: Temporal variation of ambient noise at the Grande Dixence reservoir recorded by a nodal deployment, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-17811, https://doi.org/10.5194/egusphere-egu25-17811, 2025.

Modelling
X2.32 | EGU25-12720 | ECS
Hans Segura, Cathy Hohenegger, Reiner Schnur, and Bjorn Stevens

Earth system models are important tools used to understand our climate system and project possible changes in our climate due to anthropogenic and natural forcings. Human errors, i.e. bugs, can occur in the development of Earth system models, giving an unphysical representation of our climate. One way to identify and solve bugs is to apply physical concepts. Here, we present an experience from the development of the ICOsahedral Non-hydrostatic model (ICON) as a kilometer-scale Earth system model, in which physically understanding a bug in the surface energy budget fixed land precipitation.

In a simulation of ICON, referred to as ICON-bug, precipitation over tropical land continuously decreased across the simulation. This led to a ratio of land to ocean precipitation in the tropics of less than 0.7, which should otherwise be more than 0.86. Among the possible explanations, the surface energy budget over land was targeted as a culprit. This idea relies on the interaction between soil moisture, surface heat fluxes, and winds, which generates circulations favoring precipitation over dry land surfaces (Hohenegger and Stevens 2018). Indeed, the surface energy budget over dry surfaces in ICON-bug showed an error in the sensible heat flux: the flux transmitted to the atmosphere was 70% of what was calculated by the surface module. Fixing this error closed the surface energy budget and increased land precipitation over the tropics, leading to a land-ocean precipitation ratio of 0.94, close to observations.
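The diagnostic that exposed the bug is a simple budget-closure check; a sketch with schematic numbers (not the actual ICON fluxes):

def energy_budget_residual(rnet, shf, lhf, ghf):
    """Net radiation minus turbulent and ground heat fluxes (W m-2)."""
    return rnet - shf - lhf - ghf

# Symptom in ICON-bug: the atmosphere only received 70% of the sensible
# heat flux computed at the surface, so the budget did not close.
shf_surface = 100.0
shf_atmosphere = 0.7 * shf_surface
print(energy_budget_residual(400.0, shf_atmosphere, 250.0, 50.0))  # 30.0, not 0.0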

How to cite: Segura, H., Hohenegger, C., Schnur, R., and Stevens, B.: Physical understanding of bugs to improve the representation of the climate system, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-12720, https://doi.org/10.5194/egusphere-egu25-12720, 2025.

X2.33 | EGU25-18400
Luis Kornblueh

With the advent of parallel programming in the late 1990s, the Max Planck Institute for Meteorology's spectral atmospheric model echam5 was ported to MPI and OpenMP. For testing and validating the hybrid parallelization, a coherence algorithm was developed; the implementation has been incorporated into today's NWP and climate model ICON as well. The coherence algorithm consists of several stages: first, one MPI rank runs the serial model against an n-task MPI-parallelized model, and during runtime the state vector is checked for binary identity. If successful, an m-task MPI version can be compared to an n-task MPI version for high processor counts. The same scheme can be used for the OpenMP parallelization: one MPI task runs the model serially using one OpenMP thread, and a second MPI task runs k OpenMP threads. Again, the results are compared for binary identity. As the testing needs to be done automatically, bit identity is important for testing, not necessarily for production.
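A toy mpi4py sketch of the first stage (a placeholder elementwise "time step" stands in for the model, and the serial reference simply runs on rank 0; run with e.g. mpirun -np 4):

import numpy as np
from mpi4py import MPI

def step(x):
    # Placeholder for one model time step (elementwise, hence decomposable).
    return np.sqrt(np.abs(x) + 1.0)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 1024
state = np.linspace(0.0, 1.0, n)

# Each rank advances its contiguous slice of the state vector.
lo, hi = rank * n // size, (rank + 1) * n // size
gathered = comm.gather(step(state[lo:hi]), root=0)

if rank == 0:
    reference = step(state)              # serial reference run
    parallel = np.concatenate(gathered)
    # Demand bit identity, not merely agreement within a tolerance.
    assert np.array_equal(reference, parallel), "coherence broken"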

The testing revealed plenty of problems during the initial parallelization work on echam5 and has kept revealing problems throughout the ICON development phase.

However, far into a couple of century-long simulations, the bit identity was found, purely by accident, to be broken: the search for the cause started!

How to cite: Kornblueh, L.: MPI and OpenMP coherence testing and validation: the hybris of testing non-deterministic model code, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-18400, https://doi.org/10.5194/egusphere-egu25-18400, 2025.

X2.34 | EGU25-20057
Ross Woods

The science question: how can we use hydrological process knowledge to understand the timing and magnitude of seasonal streamflow in snow-influenced catchments?

What was known: in general, catchments with colder climates have later and larger seasonal streamflow peaks, because more snow tends to accumulate in colder catchments, and it melts later because the time when melt can occur is later in the year in colder climates. Numerical models with fine space and time resolution were able to resolve these phenomena, but there was no theory which directly linked long term climate to seasonal streamflow.

In 2009 I published a very simple deterministic theory of snow pack evolution. I tested it against snow observations at 6 locations in the western USA and it apparently worked well (although I later discovered that I'd been lucky).

In 2015 I used the snowmelt derived from this deterministic theory to predict timing and magnitude of seasonal streamflow. It did poorly, and revealed untested assumptions in my theory. I tried making the theory slightly more complicated by considering within-catchment variation in climate. This did not help.

In 2016 I created a stochastic version of the theory (a weakness identified in 2015), and then also considered the within-catchment variation in climate. It did better at reproducing measured snow storage, but did not help in understanding seasonal streamflow.

My next step will be to consider all forms of liquid water input, i.e. not just snowmelt but also rainfall.

What survived: I will continue to use the stochastic version of the theory, as it is clearly an improvement. I will continue to examine whether within-catchment climate variability is important, though it seems unlikely after two negative results. But whether introducing liquid water input will be sufficient, who can say? I will also try to examine in more detail how it is that the finely resolved numerical models can do an adequate job while the theory cannot; it is in this gap that the answer probably lies. However, the models are very complicated, and it is not easy to get a good understanding of exactly what they are doing, even though we know which equations they are implementing.

 

How to cite: Woods, R.: Some Perfectly Reasonable Ideas that Didn’t Work: Snow Hydrology, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-20057, https://doi.org/10.5194/egusphere-egu25-20057, 2025.

X2.35 | EGU25-9145 | ECS
Felix Schaumann

When economists estimate the expected economic damages from current-day CO2 emissions, they usually calculate the social cost of carbon – that is, the aggregated damage caused by the emission of an additional ton of CO2. Several cost-benefit integrated assessment models (IAMs) are built to assess this quantity, among them the META model, which is built specifically to assess the effects of tipping points on the social cost of carbon and usually operates stochastically. When integrating a deterministic but small carbon cycle tipping point into the model, however, the social cost of carbon seems to explode: a few gigatons of additional emissions almost double the impact estimates of CO2 emissions! Well, maybe. In fact, these results are a pure artifact of two things: 1) the way in which social cost of carbon estimates are calculated with IAMs; and 2) the way that tipping points are implemented in the META model. And, of course, 3) a lack of initial thoughtfulness on my part. A thorough look into this issue shows that, as expected, a marginal change in emissions leads to a marginal change in damage estimates. While that result is rather boring, the previous blunder can actually be instructive about the scarcely known methods used to obtain economic impact estimates of climate change.
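Schematically (a generic discounted finite-difference recipe, not META's exact implementation, with damages D_t, discount rate \rho, and emission pulse \Delta E):

\[ \mathrm{SCC} = \sum_{t} \frac{D_t(E + \Delta E) - D_t(E)}{(1+\rho)^t \, \Delta E} , \]

so any implementation detail that lets D_t respond non-marginally to a small pulse, such as a deterministic tipping threshold crossed by the pulse itself, inflates the estimate.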

How to cite: Schaumann, F.: Drastic increase in economic damages caused by a marginal increase in CO2 emissions?, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-9145, https://doi.org/10.5194/egusphere-egu25-9145, 2025.

X2.36 | EGU25-19890
Paul Tackley

It is common to perform two-dimensional simulations of mantle convection in spherical geometry. These have commonly been performed in axisymmetric geometry, i.e. (r, theta) coordinates, but subsequently we (Hernlund and Tackley, PEPI 2008) proposed using (r, phi) spherical annulus geometry and demonstrated its usefulness for low-viscosity-contrast calculations. 

When performing scaling studies in this geometry, however, strange results were obtained when high-viscosity features (such as slabs) were present, results that did not match what would be expected from Cartesian-geometry calculations. It turns out that this is because the geometrical restriction forces deformation that is not present in three dimensions. Specifically, in a 2-D spherical approximation, a downwelling is forced to contract in the plane-perpendicular direction, requiring it to extend in the two in-plane directions. In other words, it is "squeezed" in the plane-perpendicular direction. If the downwelling has a high viscosity, as a cold slab does, then it resists this forced deformation, sinking much more slowly than in three dimensions, in which it could sink with no deformation. This can cause unrealistic behaviour and scaling relationships for high viscosity contrasts.

This problem can be solved by subtracting the geometrically-forced deformation ("squeezing") from the strain-rate tensor when calculating the stress tensor. Specifically, components of in-plane and plane-normal strain rate that are required by and proportional to the vertical (radial) velocity are subtracted, a procedure that is here termed "anti-squeeze". It is demonstrated here that this "anti-squeeze" correction results in sinking rates and scaling relationships that are similar to those in 3-D geometry whereas without it, abnormal and physically unrealistic results can be obtained for high viscosity contrasts. This correction has been used for 2-D geometries in the code StagYY (Tackley, PEPI 2008; Hernlund and Tackley, PEPI 2008) since 2010.
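Schematically (a sketch of the idea rather than the exact StagYY expressions), the stress is computed from a corrected strain-rate tensor

\[ \dot{\varepsilon}^{\mathrm{corr}}_{ij} = \dot{\varepsilon}_{ij} - \dot{\varepsilon}^{\mathrm{squeeze}}_{ij}(v_r) , \]

where the subtracted components are the in-plane and plane-normal strain rates that the annulus geometry forces in proportion to the radial velocity v_r.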

How to cite: Tackley, P.: Adventures in Modelling Mantle Convection in a Two-Dimensional Spherical Annulus and Discovering the Need for "Anti-Squeeze", EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-19890, https://doi.org/10.5194/egusphere-egu25-19890, 2025.

X2.37 | EGU25-15059
Anne Davaille

Whenever you study a phenomenon at the mm to few-cm scale in the laboratory that involves an interface, the question of surface tension arises. Surface tension is due to the fact that molecules prefer to stay with their own kind. The creation of an interface between two fluids therefore requires energy, and this influences the dynamics around the interface.

Surface tension can be a blessing: it produces the round shape of rain drops or the nice bubble shapes of colorful liquid in a lava lamp. It allows objects with a higher density to float on a liquid (such as an insect on water, or a silicone plate on sugar syrup). It can generate flow up a capillary.

However, it can also be a curse in the case of thermal convection. Purely thermal convection develops when a plane layer of fluid is heated from below and cooled from above; the engine of motion is the thermal buoyancy of the fluid. This is what happens in a planetary mantle on scales of hundreds to thousands of kilometers, and also in a closed box in the laboratory. But as soon as an interface exists, either between an upper and a lower experimental mantle or at a free surface at the top of the fluid layer, surface tension effects can become important. For example, the variation of surface tension with temperature was responsible for the beautiful honeycomb patterns imaged by Bénard (1901) in the first systematic study of thermal convection with a free surface. Surface tension also acts against the initiation of subduction (which requires breaking the surface).
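One standard way to gauge when surface tension matters (a textbook rule of thumb, not specific to these experiments) is the Bond number, which compares buoyancy to capillary forces:

\[ \mathrm{Bo} = \frac{\Delta\rho \, g \, L^2}{\sigma} , \]

with density contrast \Delta\rho, gravity g, length scale L, and surface tension \sigma; capillary effects become important when Bo is of order one or smaller, which for common liquids is precisely the mm-to-cm regime of laboratory convection.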

In this presentation we shall review the signatures of surface tension in a convective context, and the different ways to minimize and/or remove its effects in convection experiments, such as using miscible liquids or a layer of experimental "sticky air".

How to cite: Davaille, A.: Analog studies of mantle convection: the curse of surface tension (or not) ?, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-15059, https://doi.org/10.5194/egusphere-egu25-15059, 2025.

X2.38 | EGU25-15826 | ECS
Benjamin Poschlod, Lukas Brunner, Benjamin Blanz, and Lukas Kluft

The emergence of global km-scale climate models allows us to study Earth's climate and its changes with unprecedented local detail. However, this step change in spatial resolution to grid spacings of 10 km or less also brings new challenges to the numerical methods used in the models, the storage of model output, and the processing of the output data into actionable climate information. The latest versions of the ICON-Sapphire model developed in the frame of the NextGEMS project address these challenges by running on an icosahedral grid while outputting data on the so-called HEALPix grid. Both grids are unstructured grids, which avoids, for example, the issue of longitude convergence. In addition, HEALPix allows data to be stored in a hierarchy of resolutions at different discrete zoom levels, making it easier for users to handle the data.  

The transition from the native 10 km grid to the output grid is made by a simple but very fast nearest-neighbour remapping. An advantage of this simple remapping approach is that the output fields are not distorted, i.e. the atmospheric states in the output remain self-consistent. As HEALPix only provides discrete zoom levels in the setup of the run, it was decided to remap to the closest available resolution of 12 km rather than to the next finer resolution of 6 km. This decision was made to avoid artificially increasing the number of grid points and to avoid creating duplicates through the nearest neighbour remapping.

As a consequence of this approach, wave-like patterns can emerge due to the Moiré effect that can result from the interaction of two grids. We find these patterns when looking at certain derived precipitation extremes, such as the annual maximum daily precipitation, the 10-year return level of hourly precipitation, or the frequency of dry days. At first, we interpreted these patterns as a plotting issue, since figures rendered at too low a resolution for a high-resolution global plot can also produce a Moiré pattern (aliasing).

However, zooming in on the affected regions and closer examination of the data revealed that the pattern is in fact in the data. Further investigation with synthetic data confirmed the suspicion that the Moiré pattern was indeed caused by the remapping of the native 10 km icosahedral grid to the slightly coarser 12 km HEALPix grid. We hypothesise that precipitation is particularly affected by this issue, as the fields typically contain many grid cells with zero precipitation and local clusters of non-zero values at the 15-minute output interval. Yet we cannot exclude the possibility that other variables are also affected.

As a consequence, if remapping is required, we recommend first remapping from the native resolution to a finer-resolution grid and then using the conservative nature of the HEALPix hierarchy to compute the coarser level. In this way it should be possible to avoid aliasing while keeping the amount of output data the same.
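A minimal healpy sketch of this two-step recipe (the zoom levels and the synthetic field standing in for the fine-grid model output are illustrative assumptions):

import numpy as np
import healpy as hp

nside_fine, nside_coarse = 1024, 512  # illustrative HEALPix zoom levels

# Stand-in for step 1: model output remapped to the *finer* HEALPix grid.
rng = np.random.default_rng(0)
fine = rng.random(hp.nside2npix(nside_fine))

# Step 2: exploit the HEALPix hierarchy; in NESTED ordering, ud_grade
# averages the children of each coarse pixel, i.e. coarsening conserves means.
coarse = hp.ud_grade(fine, nside_coarse, order_in="NESTED", order_out="NESTED")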

How to cite: Poschlod, B., Brunner, L., Blanz, B., and Kluft, L.: Output regridding can lead to Moiré pattern in km-scale global climate model data from ICON, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-15826, https://doi.org/10.5194/egusphere-egu25-15826, 2025.