ITS1.8/TS9.1 | Navigating the Digital Earth: Advancements in Earth Systems and Geophysical Inference
EDI
GSAus and GPCN
Convener: Sabin Zahirovic (ECS) | Co-conveners: Nicola Piana Agostinetti, Christian Vérard, Xin Zhang (ECS), Wen Du (ECS), Haipeng Li
Orals | Mon, 15 Apr, 08:30–12:30 (CEST) | Room 2.24
Posters on site | Attendance Mon, 15 Apr, 16:15–18:00 (CEST) | Display Mon, 15 Apr, 14:00–18:00 | Hall X2
Posters virtual | Attendance Mon, 15 Apr, 14:00–15:45 (CEST) | Display Mon, 15 Apr, 08:30–18:00 | vHall X2
Digital twins of our planet, at present-day and over geological timescales, are becoming central to decision-making and de-risking for a broad range of applications, ranging from natural hazard risk assessment and climate modelling to resource analysis. Emerging modelling techniques promise to add value to complex, and sometimes obscure, geological and geophysical data through machine learning, artificial intelligence, and other advanced statistical and nonlinear optimisation techniques. In addition, these new techniques provide an avenue to increase the quantifiability of geological processes at a wide range of spatial and temporal scales. This includes the key requirement to better quantify uncertainty in both parameter values and model choice, as well as the fusion of geophysical sensing and geological constraints with numerical modelling of Earth Systems.

We invite submissions from all disciplines that aim to model or constrain one or more Earth Systems over modern and geological timeframes. We welcome submissions that are analytical or lab-focused, field-based, or involve numerical modelling. This session also aims to explore cutting-edge methods, tools, and approaches that push the boundaries of geophysical inference and uncertainty analysis, and geological data fusion. We ask the question 'Where to next?' in our collective quest to develop digital twins of our planet.

The session will also celebrate the contributions of early career researchers, open/community philosophy of research, and innovations that have adopted inter-disciplinary approaches.

Orals: Mon, 15 Apr | Room 2.24

Chairpersons: Sabin Zahirovic, Christian Vérard, Wen Du
08:30–08:40
08:40–08:50 | EGU24-21753 | ECS | On-site presentation
Harikrishnan Nalinakumar and Stuart Raymond Clark

This study explores the geological complexity of the South Nicholson Region, an area spanning the Northern Territory and Queensland in Australia, using the newly drilled NDI Carrara 1 well to expose the burial history of the Carrara Sub-basin. Formed before the assembly of the Nuna supercontinent, this region is positioned near resource-abundant basins and has a complex geological history. It has undergone significant tectonic shifts, orogenic activity, and the development of sedimentary basins over 1.6 billion years, as the world evolved towards its present configuration. Despite its potential for mineral and petroleum resources, the South Nicholson Region was previously under-explored, lacking in-depth seismic, well, and geophysical data. Recently acquired data from the region include five seismic lines and a new well, offering invaluable insights into the region's subsurface geology, including the identification of a new sub-basin, the Carrara Sub-basin. Characterised by a gravity low on its southeast side, the Carrara Sub-basin encompasses thick sequences of Proterozoic rocks from the Northern Lawn Hill Platform, Mount Isa Province and McArthur Basin. The primary objective of this study is to examine the burial history, tectonic subsidence and paleo-reconstruction of the South Nicholson Region.

Our results indicate that the South Nicholson Region has undergone multiple cycles of sedimentation, tectonic uplift and erosion. Between ~1640 Ma and 1580 Ma, the region experienced increasing deposition rates. The presence of an unconformity obscures the sedimentation and tectonic history from 1600 to 500 Ma. However, by 500 Ma, significant subsidence had occurred, indicating that subsidence was the predominant geological force during this period. After this interval, an uplift event is evident, exhuming the layers until 400 Ma. From 400 Ma until today, there has been little to no subsidence, briefly interrupted by minor uplift events. Our calculated tectonic subsidence curve closely aligns with the regional deposition patterns, highlighting the intricate relationship between sediment deposition and tectonic activity, thereby providing valuable insights into the interplay between sedimentary and tectonic processes in the region.
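
For readers unfamiliar with the technique, tectonic subsidence curves of this kind are typically derived by backstripping the sediment load under an isostatic assumption. A minimal, generic 1-D Airy backstripping sketch in Python is shown below; the densities, thicknesses and ages are made-up example values, not data from the NDI Carrara 1 well or the authors' workflow.

```python
# Minimal 1-D Airy backstripping sketch (illustrative only; densities, thicknesses
# and ages below are assumed example values, not data from NDI Carrara 1).
import numpy as np

RHO_MANTLE = 3300.0  # kg/m^3
RHO_WATER = 1030.0   # kg/m^3

def tectonic_subsidence(sediment_thickness, bulk_sediment_density, water_depth=0.0):
    """Airy-isostatic tectonic subsidence: Y = S * (rho_m - rho_s) / (rho_m - rho_w) + Wd."""
    s = np.asarray(sediment_thickness, dtype=float)
    rho_s = np.asarray(bulk_sediment_density, dtype=float)
    return s * (RHO_MANTLE - rho_s) / (RHO_MANTLE - RHO_WATER) + water_depth

# Cumulative (decompacted) sediment thickness and bulk density at a few time steps
thickness = np.array([0.0, 800.0, 2500.0, 3200.0])       # m
bulk_density = np.array([2000.0, 2200.0, 2350.0, 2450.0])  # kg/m^3
ages_ma = np.array([1640.0, 1580.0, 500.0, 400.0])

for age, y in zip(ages_ma, tectonic_subsidence(thickness, bulk_density)):
    print(f"{age:7.0f} Ma: tectonic subsidence ~ {y:7.1f} m")
```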

How to cite: Nalinakumar, H. and Clark, S. R.: Basin evolution and Paleo reconstruction of the Mesoproterozoic South Nicholson Region, NE Australia, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-21753, https://doi.org/10.5194/egusphere-egu24-21753, 2024.

08:50–09:00 | EGU24-14517 | Virtual presentation
N. Ryan McKenzie, Hangyu Liu, Cody Colleps, and Adam Nordsvan

Tectonic processes influence numerous biogeochemical cycles. Accordingly, the evolution of the continental crust and changes in tectonic styles are inherently linked with secular changes in Earth's surface environment. Here we present multiproxy mineralogical and geochronologic data to evaluate compositional changes in the upper crust along with variations in tectonic regimes and crustal recycling. Our data indicate that transitions from dominantly mafic to volumetrically extensive felsic upper crust occurred from the Archean into the Paleoproterozoic, which corresponds with evidence for enhanced crustal reworking. The later Paleoproterozoic through the Mesoproterozoic is characterized by a general reduction in crustal recycling and assimilatory tectonics, with relatively limited active crustal thickening. Finally, the Neoproterozoic–Phanerozoic represents an interval of increased juvenile magmatism and extensional tectonics, corresponding with deep and steep subduction and slab rollback. This led to enhanced island arc and back-arc basin formation, and subsequent arc collision. These major shifts in composition and tectonic regimes that broadly bookended the Proterozoic had profound effects on numerous biogeochemical cycles, particularly the carbon, oxygen, and phosphorus cycles, and are thus likely linked to changes in the oxidative state and climate of Earth's surface system observed during these times.

How to cite: McKenzie, N. R., Liu, H., Colleps, C., and Nordsvan, A.: Multiproxy investigation of secular changes in tectonic regimes and crustal recycling in Earth history, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14517, https://doi.org/10.5194/egusphere-egu24-14517, 2024.

09:00–09:10 | EGU24-2473 | On-site presentation
James Ogg, Wen Du, Aditya Sivathanu, Sabrina Chang, Suyash Mishra, Sabin Zahirovic, Aaron Ault, O'Neil Mamallapalli, Haipeng Li, Mingcai Hou, and Gabriele Ogg

Building paleogeographic maps that are projected onto different tectonic plate reconstruction models requires team efforts to compile extensive interlinked databases of regional sedimentary and volcanic facies, data sharing standards, and computer projection methods. Two goals of the Deep-Time Digital Earth (DDE) program of the International Union of Geological Sciences (IUGS) Paleogeography Working Group are: (1) to interconnect online national databases for all geologic formations, and to compile these online "lexicons" for countries that currently lack these; (2) to project the combined paleogeographic output of these distributed databases for any time interval onto appropriate plate tectonic reconstructions.

Therefore, we have worked with regional experts to compile and interlink cloud-based lexicons for different regions of the world, enhanced by graphic user interfaces. Online lexicons with map-based and stratigraphic-column navigation are currently completed for the Indian Plate (ca. 800 formations), China (ca. 3200), Vietnam-Thailand-Malaysia (ca. 600), and all major basins in Africa (ca. 700) and in the Middle East (ca. 700 formations). These will soon be joined by Japan (ca. 600 formations) and basins in South America (ca. 700 formations). A multi-database search system (age, region, lithology keywords, etc.) enables all returned entries to be displayed by age or in alphabetical order. The genera in the "fossil" field are auto-linked to their entries and images in the online Treatise of Invertebrate Paleontology. With a single click, a user can plot the original extent of a geologic formation (or an array of regional formations of a specified age) onto different plate reconstruction models, with the polygon(s) filled with the appropriate lithologic facies pattern(s). Our team collaborated with the Macrostrat team at the University of Wisconsin (Madison) to interlink with their extensive regional facies-time compilations for North America and the ocean basins to enable near-global coverage. Following the lead of Macrostrat's ROCKD app, this project is in partnership with UNESCO's Commission for the Geologic Map of the World and other geological surveys to enable linking of online geologic map units for direct access to the lexicon details on a given geologic formation and its former paleogeographic setting. Essentially, the goal is to create a view of the sediments and volcanics that were accumulating on the Earth's surface at any time in the past.
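
As a purely schematic illustration of the kind of multi-criteria search described above (the geolex.org back-end is not described in this abstract, so the record fields and entries below are hypothetical), filtering formation records by age and lithology keyword and sorting them by age or name might look like:

```python
# Hypothetical illustration of a multi-criteria formation search; the record fields
# and example entries are invented for this sketch, not taken from geolex.org.
records = [
    {"name": "Formation A", "top_ma": 1600, "base_ma": 1640, "lithology": "shale, carbonate"},
    {"name": "Formation B", "top_ma": 1575, "base_ma": 1610, "lithology": "siltstone"},
    {"name": "Formation C", "top_ma": 500, "base_ma": 520, "lithology": "basalt"},
]

def search(records, age_ma=None, keyword=None, sort_by="age"):
    hits = [r for r in records
            if (age_ma is None or r["base_ma"] >= age_ma >= r["top_ma"])
            and (keyword is None or keyword.lower() in r["lithology"].lower())]
    key = (lambda r: r["base_ma"]) if sort_by == "age" else (lambda r: r["name"])
    return sorted(hits, key=key)

for r in search(records, age_ma=1600, keyword="shale"):
    print(r["name"])
```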

The main website (https://geolex.org) has links to the growing array of regional lexicons.

How to cite: Ogg, J., Du, W., Sivathanu, A., Chang, S., Mishra, S., Zahirovic, S., Ault, A., Mamallapalli, O., Li, H., Hou, M., and Ogg, G.: Online databases of the geologic formations of Asia and Africa with display onto plate reconstructions, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-2473, https://doi.org/10.5194/egusphere-egu24-2473, 2024.

09:10–09:20 | EGU24-6431 | ECS | On-site presentation
Geoscience Knowledge Understanding and Utilization via Data-centric Large Language Model
(withdrawn)
Cheng Deng, Le Zhou, Yi Xu, Tianhang Zhang, Zhouhan Lin, Xinbing Wang, and Chenghu Zhou
09:20–09:30 | EGU24-15087 | ECS | On-site presentation
Jovid Aminov, Guillaume Dupont-Nivet, Nozigul Tirandozova, Fernando Poblete, Ibragim Rakhimjanov, Loiq Amonbekov, and Ruslan Rikamov

Paleogeographic maps illustrate the distribution of land and sea, as well as the topography of the Earth's surface during different geological periods, based on the compilation of a wide range of geological and geophysical datasets. These maps provide boundary conditions for various models of the Earth's systems, including climate, mantle convection, and land surface evolution. A number of software programs and computer algorithms have been developed in the past three decades to reconstruct either the past positions of continents and oceans or their elevation and depth. We recently developed the open-source and user-friendly "Terra Antiqua", allowing users to create digital paleogeographic maps in a Geographic Information System (GIS) environment (QGIS), using various tools that are easy to operate in combination with GPlates, a widely used software for plate tectonic reconstructions. The next step is to develop a comprehensive and integrated solution, easily accessible on the web, that can automate most of the steps involved in reconstructing past plate configurations and topography. We present here a web application ("Terra Antiqua online") that we are developing for the creation of digital paleogeographic maps. The web application has two parts: (1) the front-end uses CesiumJS, an open-source JavaScript library for making 3D globes and maps, to visualize the databases and let users interact with them; (2) the back-end uses Python algorithms and libraries such as GDAL and pyGPlates to process the data and perform tectonic and hypsometric reconstructions. Terra Antiqua online uses the pyGPlates API to access existing tectonic models and apply them to rotate plates and datasets to their past positions. New developments are allowing it to estimate the elevation, depth and distribution of land and sea by automatically processing various geological proxy data (e.g. paleofacies maps, paleo-elevation proxies, fossil databases, etc.) according to physically based algorithms. The project further aims to incorporate web-based landscape modeling tools and to develop a community around a geological database and paleogeographic reconstruction methods and standards.
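
The rotation step that the back-end delegates to pyGPlates amounts to applying a finite rotation about an Euler pole. A minimal, library-free sketch of that operation is shown below; the pole, angle and point coordinates are invented example values, and a real application would use pyGPlates and a published plate model rather than this hand-rolled version.

```python
# Generic illustration of rotating a present-day point about an Euler pole
# (Rodrigues' rotation). Pole location and angle are made-up example values;
# in Terra Antiqua online this step is delegated to pyGPlates and a plate model.
import numpy as np

def lonlat_to_xyz(lon, lat):
    lon, lat = np.radians(lon), np.radians(lat)
    return np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])

def xyz_to_lonlat(v):
    return np.degrees(np.arctan2(v[1], v[0])), np.degrees(np.arcsin(v[2] / np.linalg.norm(v)))

def rotate(point_xyz, pole_xyz, angle_deg):
    k = pole_xyz / np.linalg.norm(pole_xyz)
    a = np.radians(angle_deg)
    return (point_xyz * np.cos(a)
            + np.cross(k, point_xyz) * np.sin(a)
            + k * np.dot(k, point_xyz) * (1.0 - np.cos(a)))

point = lonlat_to_xyz(lon=77.0, lat=28.0)        # a present-day location
pole = lonlat_to_xyz(lon=20.0, lat=45.0)         # hypothetical Euler pole
print(xyz_to_lonlat(rotate(point, pole, 35.0)))  # reconstructed (lon, lat)
```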

How to cite: Aminov, J., Dupont-Nivet, G., Tirandozova, N., Poblete, F., Rakhimjanov, I., Amonbekov, L., and Rikamov, R.: A web-based and data-driven approach to paleogeographic reconstructions, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15087, https://doi.org/10.5194/egusphere-egu24-15087, 2024.

09:30–09:40 | EGU24-7977 | On-site presentation
Florian Franziskakis, Christian Vérard, and Gregory Giuliani

The Panalesis model (Vérard, 2019) was developed in a preliminary version according to concepts, methods and tools that follow the work carried out for more than 20 years at the University of Lausanne (Stampfli & Borel, 2002; Hochard, 2008). Although the techniques are relevant, development under ArcGIS® limits the visibility and accessibility of the model for the scientific community.

A major effort is therefore underway to migrate the entire model to an open-source version following a FAIR approach for research software (Chue Hong et al., 2021). This migration concerns both the plate tectonic maps, which cover the whole world over the entire Phanerozoic and part of the Neoproterozoic, and the creation of paleoDEMs (global quantified topographies).

The Panalesis model and its entire architecture are therefore currently being migrated to QGIS (a free and open-source geographic information system). TopographyMaker, the software designed to convert polylines from the reconstruction map into a grid of points with elevation values, is now working as a QGIS plugin. The output paleoDEMs will also be published according to the FAIR principles for scientific data management and stewardship (Wilkinson et al., 2016).
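
As a rough illustration of the kind of step TopographyMaker performs (not the plugin's actual algorithm), elevations attached to polyline vertices can be interpolated onto a regular grid with SciPy; the vertices, elevations and grid below are made-up values.

```python
# Minimal sketch of interpolating elevations attached to polyline vertices onto a
# regular grid (a stand-in for the TopographyMaker step; values are invented examples).
import numpy as np
from scipy.interpolate import griddata

# Polyline vertices (lon, lat) with an elevation attribute (m)
vertices = np.array([[10.0, 45.0], [12.0, 46.0], [14.0, 44.5], [11.0, 43.0]])
elevation = np.array([1200.0, 2000.0, 800.0, 300.0])

# Regular 0.5-degree grid over the area of interest
lon = np.arange(9.0, 15.01, 0.5)
lat = np.arange(42.0, 47.01, 0.5)
grid_lon, grid_lat = np.meshgrid(lon, lat)

# Linear interpolation inside the convex hull, nearest-neighbour fill outside
paleo_dem = griddata(vertices, elevation, (grid_lon, grid_lat), method="linear")
fill = griddata(vertices, elevation, (grid_lon, grid_lat), method="nearest")
paleo_dem = np.where(np.isnan(paleo_dem), fill, paleo_dem)

print(paleo_dem.shape)  # grid of elevation values
```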

The development and future refinements of TopographyMaker will enhance Earth system modelling, especially the coupling between models of different shells of the Earth, such as atmospheric circulation, climatic evolution, and mantle dynamics. Topography is, for instance, considered a first-order controlling factor for CO2 evolution over geological timescales, through silicate weathering (MacDonald et al., 2019).

How to cite: Franziskakis, F., Vérard, C., and Giuliani, G.: Reconstructing the Earth in Deep-Time: A New and Open Framework for the PANALESIS Model, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-7977, https://doi.org/10.5194/egusphere-egu24-7977, 2024.

09:40–10:10 | EGU24-18486 | ECS | solicited | Highlight | On-site presentation
Sebastian Steinig, Helen Johnson, Stuart Robinson, Paul J. Valdes, and Daniel J. Lunt

Earth's climate shows remarkable variability on geological timescales, ranging from widespread glaciation to ice-free greenhouse conditions over the course of the Phanerozoic, i.e. the last 540 million years. Earth system modelling allows us to better understand and constrain the drivers of these changes and provides valuable reference data for other paleoclimate disciplines (e.g., chemistry, geology, hydrology). However, the sheer volume and complexity of these datasets often prevent direct access and use by non-modellers, limiting their benefits for large parts of our community.

We present the online platform “climatearchive.org” to break down these barriers and provide intuitive access to paleoclimate data for everyone. More than 100 global coupled climate model simulations covering the entire Phanerozoic at the stage level build the backbone of the web application. Key climate variables (e.g. temperature, precipitation, vegetation and circulation) are displayed on a virtual globe in an intuitive three-dimensional environment and on a continuous time axis throughout the Phanerozoic. The software runs in any web browser — including smartphones — and promotes visual data exploration, streamlines model-data comparisons, and supports public outreach efforts. We discuss the current proof of concept and outline the future integration of new sources of model and geochemical proxy data to streamline and advance interdisciplinary paleoclimate research.

We also present ongoing efforts for an integrated model-data synthesis to quantify changes in meridional and zonal temperature gradients throughout the Phanerozoic and to address the relative roles of individual forcings (greenhouse gases, solar, geography). While substantial effort has been made to quantify the evolution of global mean temperatures over the last 540 million years, changes in the large-scale temperature gradients and their causes are comparably less constrained. As a fundamental property of the climate system, changes in the spatial patterns of surface temperature play a critical role in controlling large-scale atmospheric and ocean circulation and influence hydrological, ecological, and land surface processes. The resulting best estimate product of meridional and zonal temperature gradients over the last 540 million years will represent a step change in our understanding of the drivers and consequences of past temperature gradient changes and will provide the community with a valuable resource for future climatological, geological, and ecological research.
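
As a generic illustration of the diagnostic described (not the authors' pipeline), a meridional temperature gradient can be computed from any gridded surface-temperature field with cosine-latitude area weighting; the idealised field below is a placeholder for model output.

```python
# Generic sketch: meridional temperature difference (tropics minus high latitudes)
# from a gridded surface-temperature field, with cosine-latitude area weighting.
# The idealised field below stands in for a model output array (lat x lon).
import numpy as np

lat = np.linspace(-89.0, 89.0, 90)
lon = np.linspace(0.0, 358.0, 180)
rng = np.random.default_rng(0)
# Warm equator, cold poles, plus noise (placeholder for simulated temperatures, degC)
t_surf = 28.0 * np.cos(np.radians(lat))[:, None] - 5.0 + rng.normal(0, 1, (lat.size, lon.size))

weights = np.cos(np.radians(lat))

def area_mean(field, mask):
    w = weights * mask
    return (field * w[:, None]).sum() / (w.sum() * field.shape[1])

tropics = area_mean(t_surf, (np.abs(lat) <= 30.0).astype(float))
high_lat = area_mean(t_surf, (np.abs(lat) >= 60.0).astype(float))
print(f"meridional gradient (tropics - high latitudes): {tropics - high_lat:.1f} K")
```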

How to cite: Steinig, S., Johnson, H., Robinson, S., Valdes, P. J., and Lunt, D. J.: Towards a community platform for paleoclimate data and temperature gradients over the last 540 million years , EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-18486, https://doi.org/10.5194/egusphere-egu24-18486, 2024.

10:10–10:15
Coffee break
Chairperson: Nicola Piana Agostinetti
10:45–10:50
10:50–11:00 | EGU24-20508 | On-site presentation
Florian Wellmann, Miguel de la Varga, and Zhouji Liang

Geological models can be constructed with a variety of mathematical methods. Generally, we can describe the modeling process in a formal way as a functional relationship between input parameters (geological observations, orientations, interpolation parameters) and an output in space (lithology, stratigraphy, rock property, etc.). However, in order to obtain a suitable implementation in geophysical inverse frameworks, we have to consider specific requirements. In recent years, a substantial amount of work focused on low-dimensional parameterizations and efficient automation of geological modeling methods, as well as their combination with suitable geophysical forward simulations. In this contribution, we focus on differential geomodelling approaches, which allow for an integration of geological modeling methods into gradient-based inverse approaches.

In this work, we emphasize differential geomodelling approaches. These approaches seamlessly integrate geological modeling methods into gradient-based inverse approaches. To achieve this integration, we actively employ modern machine learning frameworks, specifically TensorFlow and PyTorch. We then incorporate these geometric geological modeling methods into a Stein Variational Gradient Descent (SVGD) algorithm. SVGD is adept at addressing the challenges of multimodality in probabilistic inversion. Moreover, we demonstrate the implementation of these methods in a Hamiltonian Monte Carlo approach.

Our results are promising, showing that treating geological modeling as a differentiable approach unlocks new possibilities. This method facilitates novel applications in the integration of geological modeling with geophysical inversion, paving the way for advanced research in this field.
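
For readers unfamiliar with SVGD, a minimal, self-contained sketch of the particle update on a toy two-dimensional Gaussian target is given below; it is illustrative only and not the authors' geomodelling implementation.

```python
# Minimal Stein Variational Gradient Descent (SVGD) sketch on a toy 2-D Gaussian
# target (Liu & Wang, 2016); illustrative only, not the authors' geomodelling code.
import torch

def log_prob(x):
    # Toy target density: standard bivariate Gaussian (stand-in for a posterior)
    return -0.5 * (x ** 2).sum(dim=1)

def svgd_phi(x):
    """SVGD update direction: kernel-weighted gradients (attraction) + kernel gradients (repulsion)."""
    n = x.shape[0]
    grad_logp = torch.autograd.grad(log_prob(x).sum(), x)[0]
    with torch.no_grad():
        sq_dist = torch.cdist(x, x) ** 2
        h = sq_dist.median() / torch.log(torch.tensor(float(n) + 1.0)) + 1e-12
        k = torch.exp(-sq_dist / h)                                    # RBF kernel matrix
        grad_k = (x * k.sum(dim=1, keepdim=True) - k @ x) * (2.0 / h)  # sum_j grad_{x_j} k(x_j, x_i)
        return (k @ grad_logp + grad_k) / n

particles = (torch.randn(100, 2) * 3.0).requires_grad_(True)
optimizer = torch.optim.Adam([particles], lr=0.05)
for _ in range(1000):
    optimizer.zero_grad()
    particles.grad = -svgd_phi(particles)   # ascend along the SVGD direction
    optimizer.step()
print(particles.mean(dim=0), particles.std(dim=0))  # should drift toward roughly (0, 0) and (1, 1)
```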

How to cite: Wellmann, F., de la Varga, M., and Liang, Z.: Differentiable Geomodeling: towards a tighter implementation of structural geological models into geophysical inverse frameworks, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-20508, https://doi.org/10.5194/egusphere-egu24-20508, 2024.

11:00–11:10 | EGU24-15712 | ECS | On-site presentation
Christin Bobe, Jan von Harten, Nils Chudalla, and Florian Wellmann

The interface between different rock units is usually described as a sharp boundary in geological models. Such geological interfaces are often a main target of geological as well as geophysical investigations. In the inverse images derived from electrical resistivity tomography (ERT), geological interfaces are typically represented by a continuous, smooth change in the electrical resistivity. This smoothing of interfaces is often unwanted since it deviates significantly from typical geological features where the exact location of the interface can be precisely determined.

The proposed GeoBUS workflow (Geological modeling by Bayesian Updating of Scalar fields) aims to generate probabilistic geological models that include the information from probabilistic ERT inversion results using Bayesian updates. The GeoBUS workflow consists of three main steps. The Kalman ensemble generator (KEG), a numerical implementation for computing Bayesian updates, plays an important role in this workflow.

In the first step of the GeoBUS workflow, the KEG is used for inversion of ERT data. The KEG generates probabilistic, yet smooth images of the subsurface in terms of electrical resistivity.

In the second step of the GeoBUS workflow, we perform implicit geological modeling of the subsurface, creating an ensemble of scalar fields. For the geological modeling, we use point information, i.e. the location and orientation of the geological units present, along with the uncertainty associated with both location and orientation. The resulting ensemble consists of scalar fields that are defined everywhere in space and form the basis of the geological model. By contouring each scalar field at the values for which geological interfaces are confirmed, we create an ensemble of geological models.

For the third and final step of the GeoBUS workflow, we adopt the subsurface discretization used for the ERT inverse modeling and use the ensemble of geological models from step two to assign a probabilistic scalar field value to each cell of the discretized subsurface. This discrete version of the scalar field is used as the prior for a second KEG application. Based on literature values, we assign a probability density function for electrical resistivity values to each geological unit of the geological model to formulate a corresponding likelihood. Using the KEG, we derive a Bayesian update of the discretized scalar field combining the petrophysical likelihood and the information from the ERT inversion. This results in a posterior scalar field which again can be used to generate an ensemble of geological models that now includes the information from the geophysical measurements.

We demonstrate this novel workflow for simple and synthetic two-dimensional subsurface models, generating both synthetic geological and geophysical data. This way we aim to (1) create simple benchmark examples, and (2) give a first evaluation of the performance of the GeoBUS workflow. 
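
The Bayesian update at the core of Kalman-ensemble-type methods such as the KEG follows the same pattern as the standard ensemble Kalman analysis step. A generic sketch with a toy linear forward operator (not the ERT physics or the GeoBUS code) is:

```python
# Generic ensemble Kalman update (the analysis step used by Kalman-ensemble-type
# methods such as the KEG); toy linear forward operator, not the ERT forward physics.
import numpy as np

rng = np.random.default_rng(42)
n_param, n_data, n_ens = 20, 5, 500

# Prior ensemble of model parameters (columns = ensemble members)
prior = rng.normal(0.0, 1.0, size=(n_param, n_ens))

# Toy linear forward operator and synthetic observed data
G = rng.normal(size=(n_data, n_param))
m_true = rng.normal(size=n_param)
sigma_d = 0.1
d_obs = G @ m_true + rng.normal(0.0, sigma_d, size=n_data)

# Predicted data for each ensemble member
d_pred = G @ prior

# Ensemble covariances
dm = prior - prior.mean(axis=1, keepdims=True)
dd = d_pred - d_pred.mean(axis=1, keepdims=True)
C_md = dm @ dd.T / (n_ens - 1)          # cross-covariance parameters/data
C_dd = dd @ dd.T / (n_ens - 1)          # data covariance
R = sigma_d ** 2 * np.eye(n_data)       # observation-error covariance

# Kalman gain and perturbed-observation update of every member
K = C_md @ np.linalg.inv(C_dd + R)
d_perturbed = d_obs[:, None] + rng.normal(0.0, sigma_d, size=(n_data, n_ens))
posterior = prior + K @ (d_perturbed - d_pred)

print(np.linalg.norm(G @ prior.mean(axis=1) - d_obs),
      np.linalg.norm(G @ posterior.mean(axis=1) - d_obs))  # data misfit drops after the update
```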

How to cite: Bobe, C., von Harten, J., Chudalla, N., and Wellmann, F.: GeoBUS - A Probabilistic Workflow Combining ERT Inverse Modeling and Implicit Geological Modeling , EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15712, https://doi.org/10.5194/egusphere-egu24-15712, 2024.

11:10–11:20 | EGU24-6992 | ECS | On-site presentation
Fabrizio Magrini, Jiawen He, and Malcolm Sambridge

The Earth's interior structure must be inferred from geophysical observations collected at the surface. Compared to just a few decades ago, the amount of geophysical data available today is voluminous and growing exponentially. Dense seismic networks like USArray, AlpArray, and AusArray now enable joint inversions of various geophysical data types to maximise subsurface resolution at scales ranging from local to continental. However, the practical application of joint inversions faces several challenges:

  • Various geophysical techniques typically probe different scales and depths, complicating the choice of an appropriate discretisation for the Earth's interior.
  • Different geophysical observables may respond to physical properties that are not directly related (e.g., density and electrical conductivity), making the construction of self-consistent parameterisations a non-trivial task.
  • Without a comprehensive understanding of noise characteristics, standard methods require assigning weights to different data sets, yet robust choices remain elusive.

Capable of overcoming these recognised challenges and allowing estimates of model uncertainty, probabilistic inversions have grown in popularity in the geosciences over the last few decades, and have been successfully applied to specific modelling problems. Here, we present BayesBridge, a user-friendly Python package for generalised transdimensional and hierarchical Bayesian inference. Computationally optimised through Cython, our software offers multi-processing capabilities and runs smoothly on both standard computers and computer clusters. As opposed to existing software libraries, BayesBridge provides high-level functionalities to define complex parameterisations, with prior probabilities (defined by uniform, Gaussian, or custom density functions) that may or may not be dependent on depth and/or geographic coordinates. By default, BayesBridge employs reversible-jump Markov chain Monte Carlo for sampling the posterior probability, with the option of parallel tempering, but its low-level features enable effortless implementations of arbitrary sampling criteria. Utilising object-oriented programming principles, BayesBridge ensures that each component of the inversion -- such as the discretisation, the physical properties to be inferred, and the data noise -- is a self-contained unit. This design facilitates the seamless integration of various forward solvers and data sets, promoting the use of multiple data types in geophysical inversions.
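
One of the points above, treating unknown data noise hierarchically instead of fixing relative data weights, can be illustrated with a few lines of generic Metropolis-Hastings code; this toy example is unrelated to BayesBridge's actual implementation.

```python
# Generic hierarchical Metropolis-Hastings sketch: the noise standard deviation of a
# data set is sampled along with the model parameter, instead of being fixed as a
# weight. Toy 1-D problem; not BayesBridge code.
import numpy as np

rng = np.random.default_rng(1)
d_obs = rng.normal(loc=3.0, scale=0.5, size=50)   # data with unknown noise level

def log_posterior(m, log_sigma):
    sigma = np.exp(log_sigma)
    log_like = -0.5 * np.sum((d_obs - m) ** 2) / sigma ** 2 - d_obs.size * np.log(sigma)
    log_prior = -0.5 * (m / 10.0) ** 2 - 0.5 * (log_sigma / 2.0) ** 2   # broad Gaussian priors
    return log_like + log_prior

m, log_sigma = 0.0, 0.0
samples = []
for _ in range(20000):
    m_new = m + rng.normal(0.0, 0.1)
    ls_new = log_sigma + rng.normal(0.0, 0.1)
    if np.log(rng.uniform()) < log_posterior(m_new, ls_new) - log_posterior(m, log_sigma):
        m, log_sigma = m_new, ls_new
    samples.append((m, np.exp(log_sigma)))

samples = np.array(samples[5000:])    # discard burn-in
print(samples.mean(axis=0))           # approximately recovers the data mean and noise level
```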

How to cite: Magrini, F., He, J., and Sambridge, M.: Streamlining Multi-Data Geophysical Inference with BayesBridge, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6992, https://doi.org/10.5194/egusphere-egu24-6992, 2024.

11:20–11:30 | EGU24-16036 | On-site presentation
Malcolm Sambridge, Andrew Valentine, and Juerg Hauser

Over the past twenty years, Trans-dimensional Bayesian Inference has become a popular approach for Bayesian sampling. It has been applied widely in the geosciences when the best class of model representation, e.g. of the subsurface, is not obvious in advance, or the number of free variables undecided. Making arbitrary choices in these areas may result in sub-optimal inferences from data. In trans-D, one typically defines a finite number of model states, with differing numbers of unknowns, over which Bayesian Inference is to be performed using the data.

A key attraction of Trans-D Bayesian Inference is that it is designed to let the data decide which state, as well as which configurations of parameters within each state, are preferred by the data, in a probabilistic manner. Trans-D algorithms may hence be viewed as a combination of fixed dimensional within-state sampling and simultaneous between-state sampling where Markov chains visit each state in proportion to their support from the data.

In theory, each state may be completely independent, involving different classes of model parameterization, different numbers of unknowns, different data noise levels, and even different assumptions about the data-model relationship. Practical considerations, such as convergence of the finite-length Markov chains between states, usually mean that the states must be closely related to one another, e.g. differing by a single layer in a 1-D seismic Earth model. In addition, since the form of the necessary Metropolis-Hastings balance condition depends on the mathematical relationship between the unknowns in each state, implementations are often bespoke to each class of model parameterization and data type. To our knowledge there exists no automatic trans-D sampler where one can define arbitrary independent states, together with a prior and likelihood, and simply pass them to a generalised sampling algorithm, as is common with many fixed-dimensional MCMC algorithms and software packages.

A second limitation in trans-D sampling is that, since implementations are bespoke within a class of model parameterizations, within-state sampling is typically performed with simplistic and often dated algorithms, e.g. Metropolis-Hastings or Gibbs samplers, thereby limiting convergence rates. Over the past 30 years, fixed-dimensional sampling has advanced considerably, with numerous efficient algorithms available and many conveniently translated into user-friendly software packages, almost none of which have been used within a trans-D framework because there has been no convenient way to deploy them in a trans-D setting.

In this presentation we will address all of these issues by describing the theory underpinning an ‘Independent State’ (IS) Trans-D sampler, together with some illustrative examples. In this algorithm class, sampling may be performed across states that are completely independent, containing arbitrary numbers of unknowns and parameter classes. In addition, the IS sampler can conveniently take advantage of any fixed-dimensional sampler without the need to derive and re-code bespoke Markov chain balance conditions, or to specify mechanisms for transitions between model parameters within different states. In this sense it represents a general-purpose automatic trans-D sampler.

How to cite: Sambridge, M., Valentine, A., and Hauser, J.: An Independent State sampler for Trans-dimensional Bayesian Inference, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16036, https://doi.org/10.5194/egusphere-egu24-16036, 2024.

11:30–11:40 | EGU24-8215 | ECS | On-site presentation
Xuebin Zhao and Andrew Curtis

Full waveform inversion (FWI) has become a commonly used tool to obtain high resolution subsurface images from seismic waveform data. Typically, FWI is solved using a local optimisation method which finds one model that best fits observed data. Due to the high non-linearity and non-uniqueness of FWI problems, finding globally best-fitting solutions is not necessarily desirable since they fit noise in the data, and quantifying uncertainties in the solution is challenging. In principle, Bayesian FWI calculates a posterior probability distribution function (pdf), which describes all possible model solutions and their probabilities. However, characterising the posterior pdf by sampling alone is often intractably expensive due to the high dimensionality of FWI problems and the computational expense of their forward functions. Alternatively, variational inference solves Bayesian FWI problems efficiently by minimising the difference between a predefined (variational) family of distributions and the true posterior distribution, requiring optimisation rather than random sampling. We propose a new variational methodology called physically structured variational inference (PSVI), in which a physics-based structure is imposed on the variational family. In a simple example motivated by prior information from past FWI solutions, we include parameter correlations between pairs of spatial locations within a dominant wavelength of each other, and set other correlations to zero. This makes the method far more efficient in terms of both memory requirements and computational cost. We demonstrate the proposed method with a 2D acoustic FWI scenario, and compare the results with those obtained using three other variational methods. This verifies that the method can produce accurate statistical information of the posterior distribution with significantly improved efficiency.
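
As a schematic of the kind of structure imposed (not the authors' implementation), a Gaussian covariance that retains correlations only between points closer than a chosen length scale can be built as follows; the grid, length scale and variance are example values.

```python
# Schematic of a "physically structured" Gaussian covariance: correlations are kept
# only between grid points within one correlation length (e.g. a dominant wavelength)
# and set to zero elsewhere. Grid, length scale and variance are example values.
import numpy as np

nx, nz, dx = 30, 20, 50.0             # grid size and spacing (m)
corr_len = 300.0                      # correlation length of order a dominant wavelength (m)
sigma = 100.0                         # prior std of velocity (m/s)

x, z = np.meshgrid(np.arange(nx) * dx, np.arange(nz) * dx, indexing="ij")
coords = np.column_stack([x.ravel(), z.ravel()])

dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
cov = sigma ** 2 * np.exp(-0.5 * (dist / corr_len) ** 2)   # squared-exponential correlation
cov[dist > corr_len] = 0.0                                  # truncate beyond the length scale
# Note: hard truncation is only schematic; a compactly supported correlation function
# would be preferred in practice to keep the matrix positive definite.

sparsity = 1.0 - np.count_nonzero(cov) / cov.size
print(f"covariance size: {cov.shape}, fraction of zero entries: {sparsity:.2f}")
```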

How to cite: Zhao, X. and Curtis, A.: Physically Structured Variational Inference for Bayesian Full Waveform Inversion, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8215, https://doi.org/10.5194/egusphere-egu24-8215, 2024.

11:40–11:50 | EGU24-17157 | ECS | On-site presentation
Odysseas Vlachopoulos, Niklas Luther, Andrej Ceglar, Andrea Toreti, and Elena Xoplaki

It is common knowledge that climate variability and change have a profound impact on crop production. From the principle that "it is green and it grows" to the assessment of the actual impacts of major weather drivers and their extremes on crop growth through the adoption of agro-management strategies informed by tailored and effective climate services, there is a well-documented scientific and operational gap. This work focuses on the development, implementation and testing of an AI-based methodology that aims to reproduce a crop growth model informing on grain maize yield in the European domain. A surrogate AI model based on Bayesian deep learning and inference is compared for its efficiency against the process-based deterministic ECroPS model developed by the Joint Research Centre of the European Commission. The rationale behind this effort is that such mechanistic crop models rely on multiple input meteorological variables and are relatively costly in terms of computing resources and time, crucial aspects for a scalable and widely adopted solution. Surrogate approaches make it possible to run very large ensembles of simulations based, for instance, on ensembles of climate predictions and projections and/or a perturbed parametrization (e.g. of the atmospheric CO2 concentration effects). Our surrogate crop model relies on three weather input variables, daily minimum and maximum temperatures and daily precipitation, and training was performed with the ECMWF-ERA5 reanalysis.
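
As a loose illustration of a probabilistic neural-network surrogate, the sketch below uses Monte Carlo dropout, one common approximation to Bayesian deep learning; the architecture, layer sizes and inputs are assumptions for illustration, not the authors' surrogate of ECroPS.

```python
# Sketch of a probabilistic neural-network surrogate with Monte Carlo dropout; the
# architecture and features are assumed for illustration, not the authors' model.
import torch
import torch.nn as nn

class YieldSurrogate(nn.Module):
    def __init__(self, n_features=3, hidden=64, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

model = YieldSurrogate()

# Seasonal aggregates of the three weather drivers (t_min, t_max, precipitation),
# standardised; random placeholders stand in for reanalysis-derived features.
x = torch.randn(8, 3)

# Keep dropout active at prediction time and sample repeatedly to obtain an
# ensemble of yield predictions, whose spread approximates predictive uncertainty.
model.train()
with torch.no_grad():
    preds = torch.stack([model(x) for _ in range(100)])
print(preds.mean(dim=0).squeeze(), preds.std(dim=0).squeeze())
```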

How to cite: Vlachopoulos, O., Luther, N., Ceglar, A., Toreti, A., and Xoplaki, E.: Impact modeling with Bayesian inference for crop yield assessment and prediction, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-17157, https://doi.org/10.5194/egusphere-egu24-17157, 2024.

11:50–12:00 | EGU24-8842 | On-site presentation
Probabilistic inversion with HMCLab: solving the eikonal and wave propagation problems
(withdrawn)
Andrea Zunino, Giacomo Aloisi, Scott Keating, and Andreas Fichtner
12:00–12:10 | EGU24-15385 | ECS | On-site presentation
Kai Nierula, Dmitriy Shutin, Ban-Sok Shin, Heiner Igel, Sabrina Keil, Felix Bernauer, Philipp Reiss, Rok Sesko, and Fabian Lindner

This research introduces a novel approach to seismic exploration on the Moon and Mars, employing autonomous robotic swarms equipped with seismic sensing and processing hardware. By relying on probabilistic inference methods, we aim to survey large surface areas to both autonomously identify and map subsurface features such as lava tubes and ice deposits. These are crucial for future human habitats and potential in-situ resource utilization.

This endeavor presents unique challenges due to the communication limitations and uncertainties inherent in remote, autonomous operations. To address these challenges, we adopt a distributed approach with robotic swarms, where each rover processes seismic data and shares the results with other rovers in its vicinity, contending with imperfect communication links. Thus, the swarm is used as a distributed computing network. The decisions made within the network are based on probabilistic modeling of the underlying seismic inference problem. A key innovation in this respect is the use of factor graphs to integrate uncertainties and manage inter-rover communications. This framework enables each rover to generate a localized subsurface map and autonomously decide on strategic changes in the seismic network topology, either exploring new areas or repositioning to enhance measurement accuracy of targeted underground regions.

The vision is to implement this approach on a distributed factor graph, allowing for a coordinated, probabilistic analysis of seismic data across the swarm. This strategy represents a significant departure from traditional static seismic sensor arrays, offering a dynamic and adaptable solution for planetary exploration. The first step towards realizing this vision involves implementing a Kalman filter for the one-dimensional linear heterogeneous wave equation. This has been achieved by reformulating finite difference schemes for wave propagation simulation into a state-space description. The resulting linear continuous n-th order system can be explicitly solved and rewritten into a discrete state space model that can be used in the standard Kalman filter recursion. However, the standard Kalman filter is limited due to its assumption that both model and process noise are Gaussian. With factor graphs, this limitation can be overcome, enabling a more robust and versatile analysis. Several simulation results will be shown to demonstrate the performance of these approaches.
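
A minimal sketch of that reformulation, an explicit finite-difference scheme for the 1-D heterogeneous wave equation rewritten as a linear state-space model and filtered with a standard Kalman filter, is shown below; the grid, velocities and noise levels are example values, not the project's configuration.

```python
# Sketch: rewrite an explicit finite-difference scheme for the 1-D heterogeneous wave
# equation u_tt = c(x)^2 u_xx as a linear state-space model and run a standard Kalman
# filter on noisy point measurements. Grid, velocities and noise levels are examples.
import numpy as np

rng = np.random.default_rng(0)

nx, dx, dt, nt = 100, 10.0, 0.001, 400
c = np.where(np.arange(nx) < nx // 2, 2000.0, 3000.0)     # heterogeneous velocity (m/s)
r2 = (c * dt / dx) ** 2                                    # squared Courant numbers (< 1: stable)

# Laplacian with Dirichlet boundaries, scaled row-wise by (c_i dt / dx)^2
L = np.zeros((nx, nx))
for i in range(1, nx - 1):
    L[i, i - 1], L[i, i], L[i, i + 1] = r2[i], -2.0 * r2[i], r2[i]

# State s_n = [u_n, u_{n-1}]; the update u_{n+1} = 2 u_n - u_{n-1} + L u_n becomes s_{n+1} = A s_n
I = np.eye(nx)
A = np.block([[2.0 * I + L, -I], [I, np.zeros((nx, nx))]])

# Observe the displacement at a few grid points
obs_idx = [20, 50, 80]
H = np.zeros((len(obs_idx), 2 * nx))
H[np.arange(len(obs_idx)), obs_idx] = 1.0

Q = 1e-8 * np.eye(2 * nx)          # small process-noise covariance
R = 1e-4 * np.eye(len(obs_idx))    # measurement-noise covariance

# "True" run with a Gaussian initial pulse, used to generate synthetic observations
u0 = np.exp(-0.5 * ((np.arange(nx) - 30) * dx / 50.0) ** 2)
s_true = np.concatenate([u0, u0])

# Kalman filter initialised with no knowledge of the pulse
s_est = np.zeros(2 * nx)
P = 1.0 * np.eye(2 * nx)

for _ in range(nt):
    s_true = A @ s_true
    d = H @ s_true + rng.normal(0.0, 1e-2, size=len(obs_idx))

    # Predict
    s_est = A @ s_est
    P = A @ P @ A.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    s_est = s_est + K @ (d - H @ s_est)
    P = (np.eye(2 * nx) - K @ H) @ P

err = np.linalg.norm(s_est[:nx] - s_true[:nx]) / np.linalg.norm(s_true[:nx])
print(f"relative error of the reconstructed wavefield after assimilation: {err:.3f}")
```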

We intend to extend the approach to higher-dimensional problems, implementing distributed versions of the Kalman filter and factor graph with simulated, non-perfect communication links. Eventually, the seismic inverse problems will be solved in these frameworks. Successfully achieving these objectives could greatly enhance our capabilities in extraterrestrial exploration, paving the way for more informed and efficient future space missions.

How to cite: Nierula, K., Shutin, D., Shin, B.-S., Igel, H., Keil, S., Bernauer, F., Reiss, P., Sesko, R., and Lindner, F.: Probabilistic Approach toward Seismic Exploration with Autonomous Robotic Swarms, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15385, https://doi.org/10.5194/egusphere-egu24-15385, 2024.

12:10–12:20 | EGU24-6149 | ECS | Virtual presentation
Álvaro González

Earthquakes occur in a depth range where the physical conditions allow rocks to behave in a brittle manner and to deform in a stick-slip fashion. This range is limited by the so-called upper and lower seismogenic depths, which are input parameters for bounding seismogenic ruptures in models of seismic hazard assessment.

Usually, such limits are estimated from the observed depth distribution of hypocenters. An exact estimation is not possible, because earthquake locations (and particularly hypocentral depths) are uncertain. Also, the sample of observed earthquakes is finite, and shallower or deeper earthquakes than those so far observed at a site could eventually happen. For these reasons, the extreme values of the distribution (the shallowest and the deepest earthquakes in the sample) are weak estimators, especially if a small sample (with few earthquakes) is used.

A common, more statistically robust, proxy to those limits is a given percentile of the distribution of earthquake depths. For example, the 90%, 95% or 99% percentiles (named D90, D95 or D99, respectively) are frequently used as proxies to the lower seismogenic depth. But the actual uncertainties of such estimates are, so far, not properly assessed.

Here I present a method for calculating such percentiles with an unbiased estimator and quantifying their uncertainties in detail.

Earthquakes are more easily missed (more difficult to detect) the deeper they are, so earthquake catalogues preferentially contain shallow events. To avoid this bias, only those events with magnitudes at least equal to the magnitude of completeness of the sample are considered.

A mapping procedure is used in order to highlight spatial variations of seismogenic depths, considering, for each point in the map, the subsample of its closest earthquakes. Uncertainties arising from the finite sample size are dealt with by using bootstrap.

Each hypocentral location is randomized in space in a Monte Carlo simulation, to take into account the reported location uncertainties. Also, crustal earthquakes can be considered separately from deeper ones, by truncating the hypocentral depth distribution with a Moho model for which the uncertainty can also be taken into account.

This procedure allows calculating statistically robust maps of the seismogenic depths with a realistic treatment of their uncertainties, as exemplified with the analysis of a regional seismic catalogue.
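
A minimal numerical sketch of the combined bootstrap and Monte Carlo treatment described above, using a synthetic depth catalogue and assumed uncertainties rather than the actual regional catalogue (and a plain percentile rather than the unbiased estimator mentioned), is:

```python
# Minimal sketch: percentile-based seismogenic depth (here D95) with uncertainty from
# (1) bootstrap resampling of the finite sample and (2) Monte Carlo perturbation within
# the reported depth errors. Synthetic catalogue and uncertainties are example values.
import numpy as np

rng = np.random.default_rng(7)

depths = rng.gamma(shape=3.0, scale=4.0, size=300)      # synthetic hypocentral depths (km)
depth_err = np.full(depths.size, 2.0)                   # reported 1-sigma depth errors (km)

n_real = 2000
d95 = np.empty(n_real)
for i in range(n_real):
    # Bootstrap: resample events with replacement
    idx = rng.integers(0, depths.size, depths.size)
    # Monte Carlo: perturb each depth within its reported uncertainty
    perturbed = depths[idx] + rng.normal(0.0, depth_err[idx])
    perturbed = np.clip(perturbed, 0.0, None)           # keep depths non-negative
    d95[i] = np.percentile(perturbed, 95.0)

lo, med, hi = np.percentile(d95, [2.5, 50.0, 97.5])
print(f"D95 = {med:.1f} km (95% interval: {lo:.1f}-{hi:.1f} km)")
```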

How to cite: González, Á.: Robust estimation of seismogenic depths and their uncertainties, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6149, https://doi.org/10.5194/egusphere-egu24-6149, 2024.

12:20–12:30

Posters on site: Mon, 15 Apr, 16:15–18:00 | Hall X2

Display time: Mon, 15 Apr 14:00–Mon, 15 Apr 18:00
Chairpersons: Sabin Zahirovic, Nicola Piana Agostinetti
X2.91 | EGU24-2270 | ECS
Wen Du, James Ogg, Gabriele Ogg, Rebecca Bobick, Jacques LeBlanc, Monica Juvane, Dércio José Levy, Aditya Sivathanu, Suyash Mishra, Yuzheng Qian, and Sabrina Chang

It is a challenge to obtain information about the geologic formations and their succession in Africa due to the lack of online lexicons for most regions. Therefore, we established AfricaLex as a free public online database that includes details on the geologic formations in all major basins, onshore and offshore, of Africa.

AfricaLex (https://africalex.geolex.org/) offers a search for geologic formations in the database by standard criteria (name, partial name, age, region, lithology keywords, or any combination), and a map-based graphic user interface with stratigraphic-column navigation. The returned entries can be displayed by age or in alphabetical order. Each formation is color-coded based on the Geologic Time Scale 2020 and has a digitized regional extent in GeoJSON format. These enable plotting of individual formations, or time slices of all formations across Africa at a user-selected age, with each regional extent filled with the appropriate lithologic facies pattern, onto any of three proposed plate reconstruction models with a single click.

The aim is to make information on African geology and its component geologic formations more accessible to geologists and the general public around the world, and to improve paleogeographic maps. Users can obtain a view of the sediments and volcanics that were accumulating at any time across the ancient land of Africa. These lexicon systems will be interlinked with other stratigraphic and paleogeographic databases through the IUGS Deep-Time Digital Earth platform.

How to cite: Du, W., Ogg, J., Ogg, G., Bobick, R., LeBlanc, J., Juvane, M., Levy, D. J., Sivathanu, A., Mishra, S., Qian, Y., and Chang, S.: Geologic formation database for Africa with projections onto plate reconstructions, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-2270, https://doi.org/10.5194/egusphere-egu24-2270, 2024.

X2.92 | EGU24-2425 | ECS
ONeil Mamallapalli, Raju DSN Datla, Hongfei Hou, Bruno Granier, Nallapa Reddy Addula, Jacques LeBlanc, James Ogg, Nusrat Kamal Siddiqui, Cecilia Shafer, Gabriele Ogg, and Wen Du

In a successful collaboration with numerous regional experts on the stratigraphy of Southeast Asia and the Middle East, our international team developed cloud-based stratigraphic lexicons with graphical user interfaces. These databases cover the Indian Plate (indplex.geolex.org), with nearly 1000 onshore and offshore sedimentary and volcanic formations across India, Pakistan, Nepal, Bhutan, Sri Lanka, Bangladesh, and Myanmar; southeast Asian regions (chinalex.geolex.org; thailex.geolex.org; vietlex.geolex.org; japanlex.geolex.org), with ca. 5000 formations as of January 2024; and Middle East regions (mideastlex.geolex.org; qatarlex.geolex.org). The entries for each formation contain details on the succession of lithology, as well as the fossils present, age range, regional distribution and associated images. APIs enable easy access and integration with other applications. A comprehensive search system allows users to retrieve information on all geologic formations for a specific date or geologic stage from multiple databases. The cloud-based databases and websites can be explored through user-friendly map and stratigraphic-column interfaces generated from TimeScale Creator software.

The regional extent of each formation in GeoJSON format enables visualization as facies-pattern-filled polygons projected onto three proposed plate reconstructions of the corresponding time interval, or as time slices of regional paleogeography. These lexicon systems will be interlinked with other stratigraphic and paleogeographic databases through the IUGS Deep-Time Digital Earth platform. This comprehensive approach allows one to better comprehend deep-time dynamics and gain valuable insights into the evolution of the different regions of our planet Earth.

How to cite: Mamallapalli, O., Datla, R. D., Hou, H., Granier, B., Addula, N. R., LeBlanc, J., Ogg, J., Siddiqui, N. K., Shafer, C., Ogg, G., and Du, W.: South East Asia and Middle East Geologic Formation Databases with Visualizations on Plate Reconstructions, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-2425, https://doi.org/10.5194/egusphere-egu24-2425, 2024.

X2.93 | EGU24-7570 | ECS
Yi Xu, Cheng Deng, Shuchen Cai, Bo Xue, and Xinbing Wang

The surge in academic publications mirrors the evolutionary strides of human civilization, marked by an exponential growth in their numbers. Addressing the lacuna in well-organized academic retrieval systems for geoscientists, the Geo-Literature system emerges as a transformative tool. This system, boasting a vast repository of over seven million papers and information on four million scholars, employs cutting-edge technology to reshape the landscape of academic search, analysis, and visualization within the geoscience domain.

Driven by the necessity to bridge the gap between modeling frameworks and geological constraints, Geo-Literature incorporates geoscience knowledge mining and representation technologies. Through its intelligent update and fusion system, it not only integrates new publications but also analyzes language, space, and time relationships, effectively overcoming challenges posed by knowledge ambiguity. The platform's geoscience knowledge interaction and presentation technology facilitates intelligent retrieval, recommendation systems, and the creation of comprehensive scholarly portraits.

The impact of Geo-Literature transcends conventional academic boundaries. Establishing associations, mapping key attributes, and providing hierarchical visualizations, the system assists researchers in uncovering knowledge and forming a nuanced understanding of the academic space in geosciences. Consequently, Geo-Literature not only enhances the efficiency of paper retrieval but also contributes to broader scientific goals by fostering interdisciplinary collaboration and advancing our comprehension of Earth's deep-time processes.

How to cite: Xu, Y., Deng, C., Cai, S., Xue, B., and Wang, X.: Navigating the Academic Landscape: Intelligent Retrieval Systems for Geoscience Exploration, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-7570, https://doi.org/10.5194/egusphere-egu24-7570, 2024.

X2.94 | EGU24-13715 | ECS
Zhixin Guo, Jianping Zhou, Guanjie Zheng, Xinbing Wang, and Chenghu Zhou

In the era of big data science, geoscience has experienced a significant paradigm shift, moving towards a data-driven approach to scientific discovery. This shift, however, presents a considerable challenge due to the plethora of geoscience data scattered across various sources. These challenges encompass data collection and collation and the intricate database construction process. Addressing this issue, we introduce a comprehensive, publicly accessible platform designed to facilitate the extraction of multimodal data from geoscience literature, encompassing text, visual, and tabular formats. Furthermore, our platform streamlines the search for targeted data and enables effective knowledge fusion. A distinctive feature is its capability to enhance the generalizability of Deep-Time Digital Earth data processing. It achieves this by customizing standardized target data and keyword mapping vocabularies for each specific domain. This innovative approach successfully overcomes the constraints typically imposed by the need for domain-specific knowledge in data processing. The platform has been effectively applied in processing diverse data sets, including mountain disaster data, global orogenic belt isotope data, and environmental pollutant data. This has facilitated substantial academic research, evidenced by the development of knowledge graphs based on mountain disaster data, the establishment of a global Sm-Nd isotope database, and the meticulous detection and analysis of environmental pollutants. The utility of our platform is further enhanced by its sophisticated network of models, which offer a cohesive multimodal understanding of text, images, and tabular data. This functionality empowers researchers to curate and regularly update their databases with enhanced efficiency. To demonstrate the platform's practical application, we highlight a case study involving the compilation of Sm-Nd isotope data to create a specialized database and a subsequent geographic analysis. The compilation process in this scenario is comprehensive, encompassing tasks such as PDF pre-processing, recognition of target elements, human-in-the-loop annotation, and the integration of multimodal knowledge. The results obtained consistently mirror patterns found in manually compiled data, thereby reinforcing the reliability and accuracy of our automated data processing tool. As a core component of the Deep-Time Digital Earth (DDE) program, our platform has significantly contributed to the field, supporting forty geoscience research teams in their endeavors and processing over 40,000 documents. This accomplishment underscores the platform's capacity for handling large-scale data and its pivotal role in advancing geoscience research in the age of big data.

How to cite: Guo, Z., Zhou, J., Zheng, G., Wang, X., and Zhou, C.: Accelerating Geoscience Research: An Advanced Platform for Efficient Multimodal Data Integration from Geoscience Literature, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-13715, https://doi.org/10.5194/egusphere-egu24-13715, 2024.

X2.95 | EGU24-20153 | ECS
Unveiling Research Hotspots in Geosciences: Insights from the Data-Driven DDE Scholar Report
(withdrawn)
Yu Zhao, Li Cheng, Meng Wang, Jiaxin Ding, Lyuwen Wu, and Xinbing Wang
X2.96 | EGU24-4485 | ECS
Haipeng Li, Han Cheng, Sabin Zahirovic, and Yisa Wang
GPlates, an open-source, cross-platform GIS software, has been pivotal in plate tectonics and paleogeography. The recent browser-based implementation of GPlates, facilitated by pyGPlates and Cesium, offers real-time rotation of online datasets. Yet, this approach encounters limitations in data rotation efficiency and integration with diverse datasets. To address this issue, we introduce the Unity-based WebGPlates (https://dplanet.deep-time.org/DPlanet/), which harnesses the capabilities of the Web Assembly and Unity framework for enhanced computing efficiency and browser-based rendering. More importantly, WebGPlates integrates with the Deep-time Digital Earth Platform, ensuring comprehensive data access and services. Our preliminary results highlight the potential of WebGPlates as an indispensable tool in paleogeographic research. We extend an invitation to the whole community to engage and collaborate utilizing this enhanced platform.

How to cite: Li, H., Cheng, H., Zahirovic, S., and Wang, Y.: WebGPlates: A Unity-based Tool For Enhancing Paleogeographic Research, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-4485, https://doi.org/10.5194/egusphere-egu24-4485, 2024.

X2.97 | EGU24-5551
Christian Vérard, Florian Franziskakis, and Grégory Giuliani

Global Earth reconstruction maps are used as baseline information for many studies, with high-level impacts and large implications. Yet, virtually no study fundamentally questions the reliability of those reconstructions. In many cases, the model the study uses is not even credited. The reason for the absence of such discussion probably lies in the fact that none of the plate tectonic / palæogeographic modellers themselves have so far been able to assess the reliability of their own maps.

Why? First, because there are ‘palæo-continental’, ‘plate tectonic’, ‘palæo-environmental’, and ‘palæogeographic’ types of reconstruction, and it is difficult to compare apples and oranges. Second, because the workflows, definitions, standards and vocabulary used by the modellers can be quite different. Third, and above all, because the data upon which reconstructions are based may be contradictory, and modellers must make choices.

If, for example, 4 data suggest a collision at a given time and a fifth does not, can we state that the model should display a collision zone at the 80% confidence level? What geological information is undoubtedly proof of a collision? If, among the 5 data, 2 correspond to flysch series, 1 to an S-type granite, the 4th to tectonic unconformities and structural deformation, and the 5th to a retrograde path of metamorphic P–T conditions, is that sufficient to talk about collision, and do the 5 data carry the same weight in terms of uncertainties? What if the model does not display the collision zone at the time the first 4 data suggest collision, but does display it at the next time slice, in agreement with the 5th?

Contradictory data and debatable choices will always exist, and the existence of numerous global Earth reconstruction models is thus an asset. However, in order to discuss uncertainties and to allow some intercomparison, the modellers of the Earth reconstruction community must collaborate, form an International Panel for Earth Reconstruction (IPER), and lay the foundation for shared definitions, concepts, vocabulary, and FAIR principles. A quantification of uncertainty on past reconstructions may then possibly be achieved by intercomparison between various models.

How to cite: Vérard, C., Franziskakis, F., and Giuliani, G.: Intercomparison and Definition of Uncertainties of Deep-Time Global Earth Reconstructions: What’s the problem?, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5551, https://doi.org/10.5194/egusphere-egu24-5551, 2024.

X2.98 | EGU24-19320
Guillaume Dupont-Nivet, Jovid Aminov, Fernando Poblete, Diego Ruiz, and Haipeng Li

The ability to reconstruct the geologic evolution of the Earth as a system, including geosphere, atmosphere and biosphere interactions, is essential to understand the fate of our environment in the context of the Climate, Life and Energy crises of the new Anthropocene era. Scientists of tomorrow working on environmental changes require ever more detailed databases and maps to access and correlate the overwhelming mass of information stemming from the ongoing surge of environmental data and models. Earth System reconstructions are fundamental assets to assess potential sources and locations of key geo-resources that are now vital for the energy transition (e.g. raw materials, rare earth elements, subsurface storage, geothermal sites). Earth System reconstructions are also the best means to communicate past and future evolutions of Life and the Environment, while providing consciousness of our role and situation in the immensity of Time and Nature. They convey these essential lessons in a didactic fashion for teachers and students, museums, or for governments and NGOs to make decisions and promote public awareness. Although Earth System reconstructions have long been recognized as essential, they have yet to deliver their full breakthrough potential by combining various booming disciplines. As part of a large project over Asia, we review here the case of the intensely studied, yet still extremely controversial, India-Asia collision, with major implications for regional environmental and depositional transitions and for global climate. Ongoing debates argue for radically different end-member models of the collision timing and configuration, and of the associated topographic growth in the collision zone. We present here new Asian paleogeographic reconstructions at 50 and 30 Ma that complement, and update, an existing set at 60, 40 and 20 Ma. These integrate various end-member models of the India-Asia collision and the associated topographic patterns and land-sea masks, with implications for the locus, source and generation of resources. Results are provided online (https://map.paleoenvironment.eu/) in various model-relevant formats, with an associated database and discussion forums to comment on and contribute to the improvement of these maps and databases. We also present the latest developments of the user-friendly and open-source Terra Antiqua QGIS plugin (https://paleoenvironment.eu/terra-antiqua/), which has been used and specifically developed with new tools, including data-driven and web-based applications.

How to cite: Dupont-Nivet, G., Aminov, J., Poblete, F., Ruiz, D., and Li, H.: Paleogeographic evolution of Asia in the Cenozoic reconstructed with the Terra Antiqua software, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-19320, https://doi.org/10.5194/egusphere-egu24-19320, 2024.

X2.99
|
EGU24-5938
|
ECS
João Pedro Macedo Silva, Victor Sacek, and Gianreto Manatschal

Conductive heat transport in the lithosphere is less efficient than convective heat transport in the asthenospheric mantle. The lithosphere therefore acts as thermal insulation above the asthenospheric mantle. As a consequence, the temperature in the mantle can increase, also affecting the rheological structure of the mantle, both in the asthenosphere and at the base of the lithosphere. As the thickness of the thermal lithosphere can vary laterally from less than 100 km to more than 200 km under cratonic domains, the impact of thermal insulation can vary geographically. Therefore, the variation of lithospheric thickness may affect the efficiency of heat transport from the asthenosphere to the lithospheric mantle. Using thermo-mechanical numerical models, we investigate how lateral variations in lithospheric thickness affect the heat flow to the surface, the convective pattern inside the asthenospheric mantle, and the thermal evolution of cratonic keels over time scales of hundreds of millions of years. We test scenarios with different lateral positions of the cratonic keel, as well as scenarios with relative movement between the lithosphere and the asthenospheric mantle to emulate lateral movement over geological time. We also test the impact of assuming different mantle potential temperatures for the asthenosphere. Additionally, yield strength envelopes are calculated in different portions of the lithosphere in the numerical domain to assess the impact of thermal insulation on the rheological structure of the lithosphere. Preliminary results indicate that hot, rising thermal anomalies tend to concentrate at the base of cratonic keels, which may eventually weaken the lithosphere. In scenarios with relative movement, we observe a systematic shift in the location of the hot thermal anomalies in the direction opposite to the relative movement.
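For readers unfamiliar with the yield strength envelopes mentioned above, the following minimal sketch (with generic, illustrative parameter values, not those of the study) shows the usual construction: a Byerlee-type frictional strength and a dislocation-creep flow-law strength are evaluated along a geotherm, and the weaker of the two is taken at each depth.

```python
import numpy as np

# Generic, illustrative parameter values -- not those of the study.
R = 8.314                                  # gas constant [J mol-1 K-1]
z = np.linspace(1.0, 150e3, 300)           # depth [m]
T = 273.0 + 9.0e-3 * z                     # simple linear geotherm [K]

# Brittle (Byerlee-type) frictional strength, neglecting pore pressure
mu, rho, g = 0.6, 3300.0, 9.8
sigma_brittle = mu * rho * g * z           # [Pa]

# Ductile strength from a dislocation-creep flow law (olivine-like values)
A, n, E = 1.0e-16, 3.5, 530e3              # pre-factor [Pa^-n s^-1], stress exponent, activation energy [J/mol]
strain_rate = 1.0e-15                      # background strain rate [s^-1]
sigma_ductile = (strain_rate / A) ** (1.0 / n) * np.exp(E / (n * R * T))

# Yield strength envelope: the weaker mechanism limits the strength at each depth
yse = np.minimum(sigma_brittle, sigma_ductile)
bdt = z[np.argmin(np.abs(sigma_brittle - sigma_ductile))]
print(f"brittle-ductile transition at ~{bdt / 1e3:.0f} km for this geotherm")
```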

How to cite: Macedo Silva, J. P., Sacek, V., and Manatschal, G.: Dynamic interaction between thermal insulation by cratonic keels and asthenospheric convection: insights from numerical experiments, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5938, https://doi.org/10.5194/egusphere-egu24-5938, 2024.

X2.100
|
EGU24-3558
|
ECS
Pedro Vitor Abreu Affonso, Ana Luiza Spadano Albuquerque, and André Luiz Durante Spigolon

There is an increasing availability of geoscientific exploration data for the oil and gas industry. Data-driven tools have become important for optimizing the gain of geoscientific information from this kind of data, thus allowing faster and more reliable decision making. Nonetheless, the development of this kind of technology depends on the standardization of the data and of its descriptive methodologies, which often diverge between geoscientists and across the many data sources, frequently drawn from samples at different scales. The complexity of unconventional reservoirs, such as those of the Brazilian pre-salt, adds to these pre-existing difficulties. In this sense, this work evaluates the results of a semi-supervised machine-learning methodology applied to the Aptian carbonates of the Barra Velha Formation, in the Santos Basin pre-salt. The methodology follows a PU-learning approach using the Random Forest algorithm, based on public data from geological cores, side samples, and geophysical data from the corresponding depths of the Barra Velha carbonates. A team of geoscientists provided a carbonate facies grouping, which this work regrouped based on quantitative and qualitative descriptions and on depositional criteria for those samples, aiming to make better use of the data for machine learning. To deal with the fact that the samples come from different scales and data sources, the classified samples from geological cores were selected as “labeled” and the remainder were defined as “unlabeled”, establishing a description criterion for the samples that fits the semi-supervised machine-learning workflow. Model evaluation metrics were analyzed and compared with the results of a regular supervised approach. The results show that the overall precision of the semi-supervised model increased significantly, by 10%, relative to the supervised methodology, and critical suggestions were made based on the results to motivate future research on this topic.
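As a generic illustration of the semi-supervised idea described above (not the authors' PU-learning implementation), the sketch below uses scikit-learn's self-training wrapper around a Random Forest, with core-described samples as the labeled set and all remaining samples marked as unlabeled; the feature names and data are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

# Hypothetical feature table: log/geophysical responses at sampled depths.
rng = np.random.default_rng(42)
n = 300
X = pd.DataFrame({
    "gamma_ray": rng.normal(40, 10, n),
    "density":   rng.normal(2.6, 0.1, n),
    "sonic":     rng.normal(70, 8, n),
})

# Facies labels are known only where a described core exists; everything
# else is marked -1 ("unlabeled"), as SelfTrainingClassifier expects.
y = rng.integers(0, 3, n)            # 3 hypothetical facies groups
labelled = rng.random(n) < 0.2       # ~20% of samples have core descriptions
y_semi = np.where(labelled, y, -1)

base = RandomForestClassifier(n_estimators=200, random_state=0)
model = SelfTrainingClassifier(base, threshold=0.8)  # pseudo-label confident samples
model.fit(X, y_semi)

# Predict facies for all samples, including those without core control
pred = model.predict(X)
print(pd.Series(pred).value_counts())
```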

How to cite: Abreu Affonso, P. V., Spadano Albuquerque, A. L., and Durante Spigolon, A. L.: Semi-Supervised Machine Learning for Predicting Lacustrine Carbonate Facies in the Barra Velha Formation, Santos Basin, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-3558, https://doi.org/10.5194/egusphere-egu24-3558, 2024.

X2.101
|
EGU24-11679
|
ECS
Constraining filtered Earth models using Backus Gilbert SOLA inferences
(withdrawn)
Marin Adrian Mag
X2.102
|
EGU24-16160
|
ECS
Estimating moment tensor solutions and related uncertainties through MCMTpy waveform inversion package
(withdrawn)
Thomas Mancuso, Cristina Totaro, and Barbara Orecchio
X2.103
|
EGU24-16434
|
ECS
Joost Hase, Florian M. Wagner, Maximilian Weigand, and Andreas Kemna

The probabilistic formulation of geoelectric and induced polarization inverse problems using Bayes’ theorem inherently accounts for data errors and uncertainties in the prior assumptions, both of which are propagated naturally into the solution. Due to the non-linearity of the physics underlying the geoelectric forward calculation, the inverse problem must be solved numerically. Markov chain Monte Carlo (MCMC) methods provide the capability to create a sample of the corresponding posterior distribution, based on which statistical estimators of interest can be approximated. In a typical geoelectric imaging application, the subsurface is discretized as a 2-D mesh with the model parameters representing the averaged values of the imaged electrical conductivity within the individual cells. The resulting model space is often of high dimensionality and usually insufficiently resolved by the measurements, posing a challenge to the efficient application of MCMC methods. In our work, we use the Hamiltonian Monte Carlo (HMC) method to sample from the posterior distribution and operate on a reduced model space to enhance the efficiency of the inversion. The basis of the reduced model space is constructed via a principal component analysis of the model prior term. We consider different resolution measures to ensure that the information lost by operating in the reduced model space is negligible. In addition to the inversion of electrical resistivity tomography measurements in real variables, we also demonstrate the model space reduction and subsequent application of HMC for the solution of the complex resistivity tomography inverse problem in complex variables, imaging the distribution of the complex electrical conductivity in the subsurface. This study contributes to the needed increase in uncertainty quantification in the inversion of geoelectric and induced polarization measurements, aiming to provide a reliable basis for the processing and interpretation of geophysical imaging results.
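A minimal sketch of the model-space reduction and a bare-bones HMC step is given below, assuming a synthetic prior covariance and a toy quadratic log-posterior in place of the geoelectric forward solver; it illustrates the general technique, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Reduced model space from the prior (illustrative only) ---
n_cells = 500                       # hypothetical number of mesh cells
idx = np.arange(n_cells)
C_prior = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 25.0)  # smooth toy prior covariance
m_prior = np.full(n_cells, -2.0)    # e.g. background log10 conductivity

eigval, eigvec = np.linalg.eigh(C_prior)
order = np.argsort(eigval)[::-1]
k = 20                              # number of retained principal components
V = eigvec[:, order[:k]] * np.sqrt(eigval[order[:k]])  # scaled reduced basis

def to_full(alpha):
    """Map reduced coordinates alpha (k,) back to a full model (n_cells,)."""
    return m_prior + V @ alpha

# --- Toy log-posterior in reduced coordinates ---
# A real application would evaluate the data misfit via the forward solver here.
def log_post_and_grad(alpha):
    return -0.5 * alpha @ alpha, -alpha

def hmc_step(alpha, eps=0.1, n_leap=20):
    p = rng.standard_normal(alpha.size)        # resample momenta
    lp0, g = log_post_and_grad(alpha)
    h0 = -lp0 + 0.5 * p @ p
    a, pn = alpha.copy(), p.copy()
    for _ in range(n_leap):                    # leapfrog integration
        pn += 0.5 * eps * g
        a += eps * pn
        lp, g = log_post_and_grad(a)
        pn += 0.5 * eps * g
    h1 = -lp + 0.5 * pn @ pn
    return a if np.log(rng.uniform()) < h0 - h1 else alpha   # Metropolis accept/reject

alpha = np.zeros(k)
samples = []
for _ in range(200):
    alpha = hmc_step(alpha)
    samples.append(to_full(alpha))             # posterior samples of the full model
```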

How to cite: Hase, J., Wagner, F. M., Weigand, M., and Kemna, A.: Probabilistic inversion of geoelectric and induced polarization measurements on reduced model spaces using Hamiltonian Monte Carlo, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16434, https://doi.org/10.5194/egusphere-egu24-16434, 2024.

X2.104
|
EGU24-14483
Jiawen He, Juerg Hauser, Malcolm Sambridge, Fabrizio Magrini, Andrew Valentine, and Augustin Marignier

Inference problems within the geosciences vary significantly in size and scope, ranging from the detection of data trends through simple linear regressions, to the construction of complex 3D models representing the Earth’s interior structure. Successfully solving an inverse problem typically requires combining various types of data sets, each associated with its own forward solver. In the absence of established software, many researchers and practitioners resort to developing bespoke inversion and parameter estimation algorithms tailored to their specific needs. However, this practice does not promote reproducibility and necessitates a substantial amount of work that is frequently beyond the primary objectives of the research.

Our aim with CoFI (pronounced: coffee), the Common Framework for Inference, is to capture the commonalities inherent in all types of inverse problems, independent of the specific methods employed to solve them. CoFI is an open-source Python package that provides a link to reliable and sophisticated third-party packages, such as SciPy and PyTorch, to tackle a broad range of inverse problems. The modular and object-oriented design of CoFI, supplemented by our comprehensive suite of tutorials and practical examples, ensures its accessibility to users of all skill levels, from experts to novices. This not only has the potential to streamline research but also to support education and STEM training.
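For orientation, a minimal line-fitting example in the style of CoFI's documented usage pattern is shown below; the class and method names follow the public tutorials and may differ slightly between package versions.

```python
import numpy as np
from cofi import BaseProblem, InversionOptions, Inversion

# Toy problem: fit a straight line y = m0 + m1 * x to noisy observations.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)
y_obs = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, x.size)
G = np.column_stack([np.ones_like(x), x])

# Define the inverse problem independently of any solver...
problem = BaseProblem()
problem.set_objective(lambda m: np.sum((G @ m - y_obs) ** 2))
problem.set_initial_model(np.zeros(2))

# ...then pick a backend tool that CoFI links to (here: SciPy's minimizer).
options = InversionOptions()
options.set_tool("scipy.optimize.minimize")

result = Inversion(problem, options).run()
print(result.model)   # should be close to [1.0, 2.0]
```

The objective, starting model, and chosen tool are the only problem-specific pieces; swapping the tool for another backend leaves the problem definition untouched, which is the modularity the abstract describes.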

This poster presentation aims to give an overview of CoFI’s main features and usage through practical examples. Moreover, we hope to foster collaboration and invite contributions on inference algorithms and domain-relevant examples.

How to cite: He, J., Hauser, J., Sambridge, M., Magrini, F., Valentine, A., and Marignier, A.: CoFI - Linking geoscience inference problems with tools for their solution, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14483, https://doi.org/10.5194/egusphere-egu24-14483, 2024.

X2.105
|
EGU24-17939
|
ECS
Zhi Yuan, Chen Gu, Yichen Zhong, Peng Wu, Zhuoyu Chen, and Borui Kang

Fracture imaging is a pivotal technique in a variety of fields including Carbon Capture, Utilization, and Storage (CCUS), geothermal exploration, and wastewater disposal, and is essential for the success of field operations and for seismic hazard mitigation. However, accurate fracture imaging is challenging due to the complex nature of subsurface geology, the presence of multiple overlapping signals, and the variability of fracture sizes and orientations. Additionally, limitations in the resolution of current imaging technologies and the need for high-quality data acquisition further complicate the process.

To address these challenges, we have conducted fracture imaging experiments utilizing acoustic sensors in laboratory-scale specimens with varied fracture geometries. A dynamic acquisition system involving robotic arms has been developed, enabling the flexible positioning of sensors on any part of the specimen's surface. This not only significantly reduces the time and resources required for experiments but also increases the adaptability of the process to different specimen surface topographies and fracture geometries.

In addition, we employ Bayesian optimization algorithms to enhance the efficiency of sensor placement in laboratory-scale specimens, aiming to achieve precise fracture imaging with the least number of measurements necessary. This algorithmic approach optimizes the data collection process, ensuring that we gather the most relevant and accurate information with minimal intrusion. The collected data is then rigorously compared and calibrated against findings from numerical simulations, which helps in refining the algorithm for broader applications.
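As a rough illustration of the optimization loop described above (not the authors' code), Bayesian optimization of a sensor position could look like the following, with a hypothetical imaging_error function standing in for the expensive laboratory evaluation.

```python
import numpy as np
from skopt import gp_minimize

# Hypothetical stand-in for the imaging-quality metric: in the real experiment
# this would be the misfit / information gain obtained after placing a sensor
# at (x, y) on the specimen surface and recording waveforms.
def imaging_error(params):
    x, y = params
    # toy response surface with the best sensor position near (0.3, 0.7)
    return (x - 0.3) ** 2 + (y - 0.7) ** 2 + 0.01 * np.sin(20 * x)

# Gaussian-process Bayesian optimization over the sensor coordinates,
# keeping the number of (expensive) evaluations small.
result = gp_minimize(
    imaging_error,
    dimensions=[(0.0, 1.0), (0.0, 1.0)],   # normalized specimen coordinates
    n_calls=25,
    random_state=0,
)
print("best sensor position:", result.x, "error:", result.fun)
```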

How to cite: Yuan, Z., Gu, C., Zhong, Y., Wu, P., Chen, Z., and Kang, B.: Bayesian optimal experimental design for fracture imaging, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-17939, https://doi.org/10.5194/egusphere-egu24-17939, 2024.

Posters virtual: Mon, 15 Apr, 14:00–15:45 | vHall X2

Display time: Mon, 15 Apr 08:30–Mon, 15 Apr 18:00
vX2.3
|
EGU24-13884
|
ECS
Fuad Hasan, Sabarethinam Kameshwar, Rubayet Bin Mostafiz, and Carol Friedland

The study focuses on evaluating and comparing different flood risk factors that correlate with each other and affect the probability of flooding. Previous research is limited to identifying these factors’ influence on specific flood events. In contrast, buildings are constructed based on design flood maps, such as the 100/500-year return period flood map in the United States. Therefore, it is important to compare risk factors obtained from historical events and flood maps to identify any missing flood risk factors. To this end, a study was conducted to determine the differences in the probability of flooding and the associated factors between the historic 2016 flood event in Baton Rouge, Louisiana, and the 100-year return period Federal Emergency Management Agency (FEMA) flood map, using a Bayesian network. The Bayesian network approach was used for this study due to its transparent forward and backward inference capabilities. The potential flood risk factors (population, household income, land cover, race, rainfall, river and road proximity, and topography) were identified, and the corresponding data were preprocessed in ArcGIS to convert them into raster files with the same extent and coordinate system. The factors were also classified using different approaches (e.g., equalization, percentile, k-means clustering) to identify the most suitable classification method. A likelihood-maximization-based parameter learning approach was used to obtain the conditional probability tables in the Bayesian network. This approach was used to develop separate Bayesian networks for the 2016 flood and the 100-year flood map. After setting up the Bayesian networks, sensitivity analyses, influence strengths, and a correlation matrix were generated and used to identify the most important flood risk factors. For example, it was observed that land cover, topography, and river proximity are highly influential on the probability of flooding.
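A minimal sketch of this kind of workflow using the pgmpy library is shown below; the variable names and data are hypothetical, the network structure is heavily simplified, and class names vary slightly between pgmpy versions.

```python
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Hypothetical discretized raster-cell table; in a real study each row would
# be a grid cell with classified risk factors and an observed flood state.
data = pd.DataFrame({
    "land_cover":      ["urban", "forest", "urban", "wetland", "urban"] * 40,
    "topography":      ["low", "high", "low", "low", "high"] * 40,
    "river_proximity": ["near", "far", "near", "near", "far"] * 40,
    "flooded":         ["yes", "no", "yes", "yes", "no"] * 40,
})

# Simplified structure: each risk factor is a parent of the flood node.
model = BayesianNetwork([
    ("land_cover", "flooded"),
    ("topography", "flooded"),
    ("river_proximity", "flooded"),
])

# Likelihood-maximization parameter learning for the conditional probability tables
model.fit(data, estimator=MaximumLikelihoodEstimator)

# Forward inference: probability of flooding given evidence on the factors
infer = VariableElimination(model)
print(infer.query(["flooded"], evidence={"land_cover": "urban", "topography": "low"}))
```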

How to cite: Hasan, F., Kameshwar, S., Mostafiz, R. B., and Friedland, C.: Bayesian network based evaluation and comparison of the urban flood risk factors for the 2016 flood and a 100-year return period flood event in Baton Rouge, Louisiana, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-13884, https://doi.org/10.5194/egusphere-egu24-13884, 2024.

vX2.4
|
EGU24-16101
Peng Yu, Cheng Deng, Huawei Ji, and Ying Wen
Extracting information from unstructured and semi-structured geoscience literature is a crucial step in conducting geological research. The traditional machine learning extraction paradigm requires a substantial amount of high-quality manually annotated data for model training, which is time-consuming, labor-intensive, and not easily transferable to new fields. Recently, large language models (LLMs) (e.g., ChatGPT, GPT-4, and LLaMA) have shown great performance in various natural language processing (NLP) tasks, such as question answering, machine translation, and text generation. A substantial body of work has demonstrated that LLMs possess strong in-context learning (ICL) and even zero-shot learning capabilities to solve downstream tasks without specifically designed supervised fine-tuning.

In this paper, we propose utilizing LLMs for geoscience literature information extraction. Specifically, we design a hierarchical PDF parsing pipeline and an automated knowledge extraction process, which can significantly reduce the need for manual data annotation, assisting geoscientists in literature data mining. For the hierarchical PDF parsing pipeline, firstly, a document layout detection model fine-tuned on geoscience literature is employed for layout detection, obtaining layout information for the document. Secondly, based on the document layout information, an optical character content parsing model is used for content parsing, obtaining the text structure and the plain text corresponding to the content. Finally, the text structure and plain text are combined and reconstructed to obtain the parsed structured data. For the automated knowledge extraction process, firstly, the parsed long text is segmented into paragraphs to fit the input length limit of LLMs. Subsequently, a few-shot prompting method is employed for structured knowledge extraction, encompassing two tasks: attribute value extraction and triplet extraction. In attribute value extraction, prompts are generated automatically by the LLMs based on the subdomain and attribute names, facilitating the location and extraction of values related to the subdomain attribute names in the text. For triplet extraction, the LLMs follow a procedural approach of entity extraction, entity type extraction, and relation extraction, following the knowledge graph structure pattern. Finally, the extracted structured knowledge is stored in the form of knowledge graphs, facilitating further analysis and integration of the various types of knowledge from the literature.

Our proposed approach turns out to be simple, flexible, and highly effective for geoscience literature information extraction. Demonstrations of information extraction in subdomains such as radiolarian fossils and fluvial facies have yielded satisfactory results. The extraction efficiency has improved significantly, and feedback from domain experts indicates a relatively high level of accuracy in the extraction process. The extracted results can be used to construct a foundational knowledge graph for geoscience literature, supporting the comprehensive construction and efficient application of a geoscience knowledge graph.
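To make the few-shot extraction step concrete, a minimal sketch is given below; the call_llm helper, the prompt wording, and the chunking heuristic are hypothetical stand-ins, not the authors' pipeline.

```python
# Hypothetical helper: call_llm(prompt) would wrap whichever LLM endpoint is used
# (e.g. a ChatGPT-, GPT-4-, or LLaMA-based service); it is not defined here.
FEW_SHOT_TRIPLET_PROMPT = """Extract (subject, relation, object) triplets from the text.

Example:
Text: "The Green River Formation contains fluvial and lacustrine facies of Eocene age."
Triplets: [("Green River Formation", "contains_facies", "fluvial"),
           ("Green River Formation", "age", "Eocene")]

Text: "{paragraph}"
Triplets:"""


def extract_triplets(paragraph: str, call_llm) -> str:
    """Few-shot triplet extraction from one literature paragraph."""
    return call_llm(FEW_SHOT_TRIPLET_PROMPT.format(paragraph=paragraph))


def chunk_paragraphs(parsed_text: str, max_chars: int = 3000):
    """Split the parsed long text into chunks that fit the LLM input length limit."""
    chunks, current = [], ""
    for para in parsed_text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)
            current = ""
        current += para + "\n\n"
    if current:
        chunks.append(current)
    return chunks
```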

How to cite: Yu, P., Deng, C., Ji, H., and Wen, Y.: Utilizing Large Language Models for Geoscience Literature Information Extraction, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16101, https://doi.org/10.5194/egusphere-egu24-16101, 2024.