The leading-edge computational and data facilities of the forthcoming Exascale era will bring a variety of currently inaccessible Solid Earth computational challenges within reach. Firstly, many Geoscience calculations that are currently unaffordable due to the size of the computational domain, necessary model resolution, or insurmountable data requirements, will become increasingly tractable. Secondly, Exascale supercomputing will facilitate probabilistic framework approaches to ever larger and more complex problems, through larger ensembles of model realizations and incorporating high-end data inversion, model data assimilation, and uncertainty quantification. Finally, Urgent High Performance Computing will become a reality with complex numerical simulations, potentially with large model ensembles, becoming possible in near real-time. Numerous natural hazards that pose a direct threat to human life and critical infrastructure (e.g. earthquakes, volcanic eruptions, wildfires, landslides, and tsunamis) can require rapid and well-informed decision making in the emergency management process. The basis for these decisions is often provided by complex and data-intensive numerical models, and we face the challenge of designing and implementing robust and powerful workflows (including computing, data management, sharing and logistics, and post-processing) that present stakeholders with relevant and accurate results in a timely manner. This transdisciplinary session seeks contributions related to the preparation of codes for Exascale, geoscience workflows and services, adapting codes for emerging hybrid hardware architectures, e-services demanding Urgent HPC, early warning and forecasts for geohazards, hazard assessment, and high-performance data analytics.
Examples include codes and workflows for near real-time seismic simulations, full-waveform seismic inversion, ensemble-based forecasts, faster than real-time tsunami simulation, magneto-hydrodynamics simulations, and physics-based hazard assessment.
This session is organized by the Center of Excellence for Exascale in Solid Earth (ChEESE) with the support of the European Plate Observing System (EPOS), the EUDAT Collaborative Data Infrastructure (EUDAT CDI) and the Partnership for Advanced Computing in Europe (PRACE). The organisers plan to submit a proposal for an Advances in Geosciences (ADGEO) EGU General Assembly special volume covering one or more EGU Divisions.
Many problems in modern geosciences require vast and complex numerical models. These may require great volumes of data and complex data logistics to resolve geophysical processes over many scales, vast numbers of simulations to adequately model uncertainty, or urgent computation to forecast impending hazards. Such applications require High-Performance Computing (HPC) and/or High-Performance Data Analytics (HPDA). On the verge of Exascale computing, this transdisciplinary session seeks to close the gap between geoscience needs and the codes, workflows, and data logistics needed to exploit Exascale HPC.
vPICO presentations: Thu, 29 Apr
Modelling atmospheric dispersion and deposition of volcanic ash is becoming increasingly valuable for understanding the potential impacts of explosive volcanic eruptions on infrastructure, air quality and aviation. The generation of high-resolution forecasts depends on the accuracy and reliability of the input data for models. Uncertainties in key parameters such as eruption column injection height, physical properties of particles or meteorological fields represent a major source of error in forecasting airborne volcanic ash. The availability of near real-time geostationary satellite observations with high spatial and temporal resolution provides the opportunity to improve forecasts in an operational context. Data assimilation (DA) is one of the most effective ways to reduce forecast error through the incorporation of available observations into numerical models. Here we present a new implementation of an ensemble-based data assimilation system based on the coupling between the FALL3D dispersal model and the Parallel Data Assimilation Framework (PDAF). The implementation is based on the latest release of FALL3D (version 8.x), tailored to extreme-scale computing requirements, which has been redesigned and rewritten from scratch in the framework of the EU Center of Excellence for Exascale in Solid Earth (ChEESE). The proposed methodology can be efficiently implemented in an operational environment by exploiting high-performance computing (HPC) resources. The FALL3D+PDAF system can be run in parallel and supports online-coupled DA, which allows an efficient information transfer through parallel communication. Satellite-retrieved data from recent volcanic eruptions were considered as input observations for the assimilation system.
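The abstract does not specify which ensemble filter is used; purely as an illustration, one analysis step of a stochastic Ensemble Kalman Filter (one family of methods that frameworks like PDAF provide) can be sketched in a few lines of NumPy. The function name and the linear observation operator `H` below are illustrative assumptions, not part of the FALL3D+PDAF API.

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_error_var, H):
    """One stochastic EnKF analysis step (illustrative sketch, not the PDAF API).

    ensemble      : (n_state, n_ens) forecast ensemble
    obs           : (n_obs,) observation vector
    obs_error_var : scalar observation-error variance
    H             : (n_obs, n_state) linear observation operator
    """
    _, n_ens = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    Y = H @ X                                             # observation-space anomalies
    R = obs_error_var * np.eye(len(obs))
    Pyy = Y @ Y.T / (n_ens - 1) + R                       # innovation covariance
    Pxy = X @ Y.T / (n_ens - 1)                           # cross covariance
    K = Pxy @ np.linalg.inv(Pyy)                          # Kalman gain
    rng = np.random.default_rng(0)
    # Perturb the observations so the analysis ensemble keeps a consistent spread.
    obs_pert = obs[:, None] + rng.normal(0.0, np.sqrt(obs_error_var), (len(obs), n_ens))
    return ensemble + K @ (obs_pert - H @ ensemble)
```

In the operational system this update is applied online, in parallel, each time new satellite retrievals become available.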
How to cite: Mingari, L., Prata, A., and Pardini, F.: Ensemble-based data assimilation of volcanic aerosols using FALL3D+PDAF, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-6774, https://doi.org/10.5194/egusphere-egu21-6774, 2021.
Campi Flegrei is an active volcano located in one of the most densely inhabited areas in Europe and under high-traffic air routes. There, the Vesuvius Observatory’s surveillance system, which continuously monitors volcanic seismicity, soil deformation and gas emissions, has highlighted variations in the state of volcanic activity. It is well known that fragmented magma injected into the atmosphere during an explosive volcanic eruption poses a threat to human lives and air traffic. For this reason, powerful tools and computational resources to generate extensive, high-resolution hazard maps taking into account a wide spectrum of events, including those of low probability but high impact, are important to provide decision makers with quality information to develop short- and long-term emergency plans. To this end, in the framework of the Center of Excellence for Exascale in Solid Earth (ChEESE), we show the potential of HPC in Probabilistic Volcanic Hazard Assessment. On the one hand, using ChEESE's flagship FALL3D numerical code and taking advantage of the PRACE-awarded resources at the CEA/TGCC HPC facility in France, we perform thousands of simulations of tephra deposition and airborne ash concentration at different flight levels, exploring the natural variability and uncertainty of the eruptive conditions on a 3D grid covering a 2 km-resolution 2000 km x 2000 km computational domain. On the other hand, we create short- and long-term workflows, by updating current Bayesian-Event-Tree-Analysis-based prototype tools, to make them capable of analyzing the large amount of information generated by the FALL3D simulations that ultimately yields the hazard maps for Campi Flegrei.
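Conceptually, probabilistic hazard maps of this kind count, at each grid cell, how often the simulated tephra load or ash concentration exceeds a critical threshold across the scenario ensemble. A minimal sketch of that aggregation step follows; the function names and the scenario-weighting scheme are illustrative assumptions, not the actual ChEESE or Bayesian-Event-Tree tools.

```python
import numpy as np

def exceedance_probability(simulations, threshold):
    """Fraction of ensemble members exceeding a hazard threshold at each cell.

    simulations : (n_runs, ny, nx) array, e.g. tephra load in kg/m^2
    threshold   : scalar defining the hazardous condition
    """
    return (simulations > threshold).mean(axis=0)

def weighted_exceedance(simulations, weights, threshold):
    """Scenario-weighted exceedance probability (weights sum to 1), as when
    scenarios carry unequal probabilities from an event-tree analysis."""
    exceed = (simulations > threshold).astype(float)
    return np.tensordot(weights, exceed, axes=1)
```

With thousands of FALL3D runs, the resulting per-cell probabilities are what get contoured into the hazard maps.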
How to cite: Martínez Montesinos, B., Titos, M., Sandri, L., Barsotti, S., Macedonio, G., and Costa, A.: Probabilistic Tephra Hazard Assessment of Campi Flegrei, Italy, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-7595, https://doi.org/10.5194/egusphere-egu21-7595, 2021.
Traditional Probabilistic Seismic Hazard Analysis (PSHA) estimates the level of earthquake ground shaking that is expected to be exceeded with a given recurrence time on the basis of historical earthquake catalogues and empirical and time-independent Ground Motion Prediction Equations (GMPEs). The smooth nature of GMPEs usually disregards some well known drivers of ground motion characteristics associated with fault rupture processes, in particular in the near-fault region, complex source-site propagation of seismic waves, and sedimentary basin response. Modern physics-based earthquake simulations can consider all these effects, but require a large set of input parameters for which constraints may often be scarce. However, with the aid of high-performance computing (HPC) infrastructures the parameter space may be sampled in an efficient and scalable manner allowing for a large suite of site-specific ground motion simulations that approach the center, body and range of expected ground motions.
CyberShake is an HPC platform designed to undertake physics-based PSHA from a large suite of earthquake simulations. These simulations are based on seismic reciprocity, rendering PSHA computationally tractable for hundreds of thousands of potential earthquakes. For each site of interest, multiple kinematic rupture scenarios, derived by varying slip distributions and hypocenter locations across the pre-defined fault system, are generated from an input Earthquake Forecast Model (EFM). Each event is simulated to determine ground motion intensities, which are synthesized into hazard results. CyberShake has been developed by the Southern California Earthquake Center, and used so far to assess seismic hazard in California. This work focuses on the migration of CyberShake to the seismic region of South Iceland (63.5°- 64.5°N, 20°-22°W), where the largely sinistral East-West transform motion across the tectonic margin is taken up by a complex array of near-vertical, parallel, North-South oriented dextral transform faults in the South Iceland Seismic Zone (SISZ) and the Reykjanes Peninsula Oblique Rift (RPOR). Here, we describe the main steps of migrating CyberShake to the SISZ and RPOR, starting by setting up a relational input database describing potential causative faults, rupture characteristics, and key sites of interest. To simulate our EFM, we use the open-source code SHERIFS, a logic-tree method that converts the slip rates of complex fault systems to the corresponding annual seismicity rates. The fault slip rates are taken from a new 3D physics-based fault model for the SISZ-RPOR transform fault system. To validate model and simulation parameters, two validation steps using key CyberShake modeling tools have been carried out. First, we perform simulations of historical earthquakes and compare the synthetics with recorded ground motions and with results from other forward simulations.
Second, we adjust the rupture kinematics to make slip distributions more representative of SISZ-type earthquakes by comparing with static slip distributions of past significant earthquakes. Finally, we run CyberShake and compare key parameters of the synthetic ground motions with new GMPEs available for the study region. The successful migration and use of CyberShake in South Iceland is the first step of a full-scale physics-based PSHA in the region, and showcases the implementation of CyberShake in new regions.
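Although the abstract does not detail the hazard-curve computation, physics-based PSHA of this kind combines the annual occurrence rate of each simulated rupture with its computed ground-motion intensity at a site. A minimal sketch under that assumption (the function names and the Poisson occurrence model are illustrative, not the CyberShake internals):

```python
import numpy as np

def hazard_curve(intensities, annual_rates, im_levels):
    """Annual rate of exceeding each intensity level at one site.

    intensities  : (n_scenarios,) simulated intensity measure per rupture
    annual_rates : (n_scenarios,) annual occurrence rate of each rupture
    im_levels    : (n_levels,) intensity levels defining the hazard curve
    """
    exceed = intensities[:, None] > im_levels[None, :]      # (n_scen, n_levels)
    return (annual_rates[:, None] * exceed).sum(axis=0)

def poisson_prob(rate, years=50.0):
    """Probability of at least one exceedance in `years`, assuming Poisson occurrence."""
    return 1.0 - np.exp(-rate * years)
```

Repeating this per site, with rupture rates supplied by the EFM, yields the site-specific hazard results that CyberShake synthesizes into maps.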
How to cite: Rojas, O., Rodriguez, J. E., de la Puente, J., Callaghan, S., Abril, C., Halldorsson, B., Li, B., Gabriel, A. A., and Olsen, K.: Towards physics-based PSHA using CyberShake in the South Iceland Seismic Zone, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-7880, https://doi.org/10.5194/egusphere-egu21-7880, 2021.
Most new supercomputers now use acceleration technology such as GPUs. These promise much higher performance than traditional CPU-only servers, both in terms of floating-point operation throughput and memory bandwidth. Furthermore, energy consumption is significantly reduced, resulting in lower carbon emissions.
However, such high computation speeds can only be achieved if a set of more or less stringent rules is followed with respect to memory access and program flow. As a consequence, some algorithms approach peak performance more easily than others.
Here, we present the results of an effort to achieve high performance for the spherical harmonic transform on recent NVIDIA GPU accelerators. The spherical harmonic transform can be split into a Legendre transform (which is compute bound) and a Fourier transform (which is memory bound).
By taking advantage of recent algorithmic improvements as well as by tuning the Fourier transform, we can now compute a full forward or backward spherical harmonic transform up to degree 8191 on a single 16GB Volta GPU in less than 0.35 seconds.
For lower resolutions (up to degree 1023), a single Volta GPU performs a full transform more than 3 times faster than a 48-core dual-socket Skylake Xeon Platinum server.
We also present results of an ongoing effort to port our code for the simulation of planetary core fluid and magnetic field dynamics to GPU-accelerated computers.
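To make the two-stage structure concrete, here is a small, unoptimized synthesis (inverse transform) sketch: a Legendre stage accumulating per-order sums over latitude, followed by a Fourier stage over longitude. It only illustrates the split; the grid choice, normalization, and function name are assumptions, and the GPU implementation discussed above is far more elaborate.

```python
import numpy as np
from math import factorial
from scipy.special import lpmv   # associated Legendre functions P_l^m(x)

def synthesize(a, nlat, nlon):
    """Toy inverse (synthesis) spherical harmonic transform, illustrating the
    split into a Legendre stage (over latitude) and a Fourier stage (over
    longitude). `a` maps (l, m) with m >= 0 to complex coefficients of an
    assumed real-valued field. Returns f on an (nlat, nlon) grid.
    """
    lmax = max(l for l, _ in a)
    assert nlon > 2 * lmax, "longitude grid must resolve the highest order"
    theta = np.linspace(1e-3, np.pi - 1e-3, nlat)     # colatitude, poles avoided
    x = np.cos(theta)

    # Stage 1: Legendre transform (compute bound) -- per-order sums over degree.
    G = np.zeros((nlat, nlon), dtype=complex)         # Fourier coefficients per latitude
    for (l, m), alm in a.items():
        norm = np.sqrt((2 * l + 1) / (4 * np.pi) * factorial(l - m) / factorial(l + m))
        P = norm * lpmv(m, l, x)                      # orthonormalized P_l^m(cos theta)
        G[:, m] += alm * P
        if m > 0:                                     # real field: conjugate term at -m
            G[:, -m] += np.conj(alm) * P              # index -m wraps to frequency N-m
    # Stage 2: Fourier transform over longitude (memory bound).
    return np.fft.ifft(G, axis=1).real * nlon
```

In a high-performance implementation both stages are batched and tuned separately, which is precisely why the compute-bound/memory-bound split matters on GPUs.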
How to cite: Schaeffer, N.: Efficient spherical harmonic transforms on GPU and its use in planetary core dynamics simulations, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13680, https://doi.org/10.5194/egusphere-egu21-13680, 2021.
Modern digital seismic networks record a wealth of high-quality continuous waveforms that contain a variety of signals associated with a wide range of seismic sources (e.g., earthquakes, volcanic and tectonic tremors, environmental sources) that probe transient energy-release processes. Efficient and automatic detection, location and characterization of these different seismic sources is critical to understanding the slowly driven evolution of active tectonic and volcanic systems toward catastrophic events. Developing a common analysis framework for systematic exploration of the increasing wealth of seismic observation streams is important for improving seismic monitoring systems and extracting large and accurately resolved seismic source catalogues.
To this end, we present a scalable parallelization with PyCOMPSs (Tejedor et al., 2017) of the Python-based BackTrackBB data-streaming workflow (Poiata et al., 2016; 2018) for automatic detection and location of seismic sources from continuous waveform streams recorded by large seismic networks. This enables efficient distribution and orchestration of the BackTrackBB code on different architectures. PyCOMPSs is a task-based programming model for Python applications that relies on a powerful runtime able to dynamically extract the parallelism among tasks and execute them in distributed environments (e.g. HPC clusters, cloud infrastructures) transparently to the user.
We will provide details of the PyCOMPSs-based BackTrackBB workflow implementation. Results of scalability tests and memory-usage analysis will also be discussed. Tests have been performed, in the context of the European Centre of Excellence (CoE) ChEESE for Exascale computing in solid Earth sciences, on the MareNostrum4 supercomputer of the Barcelona Supercomputing Center, using large-scale synthetic and real-case seismological continuous waveform datasets.
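In PyCOMPSs, ordinary Python functions are marked as tasks and the runtime builds and schedules the dependency graph across nodes. The sketch below mimics that per-station task decomposition with a thread pool as a runnable stand-in for the PyCOMPSs runtime; the `detect` and `backproject` functions and the toy traces are hypothetical simplifications, not the BackTrackBB API.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for PyCOMPSs: in the real workflow these functions would carry task
# annotations and the runtime would schedule them on HPC nodes; a thread pool
# plays that role here so the sketch runs anywhere.

def detect(trace):
    """Toy per-station detection: sample indices where amplitude exceeds a threshold."""
    return [i for i, v in enumerate(trace) if v > 2.0]

def backproject(detections_per_station):
    """Toy stand-in for stacking per-station detections into a common origin time."""
    counts = {}
    for dets in detections_per_station:
        for i in dets:
            counts[i] = counts.get(i, 0) + 1
    return max(counts, key=counts.get) if counts else None

traces = [
    [0.1, 0.2, 3.1, 0.3],    # hypothetical station A
    [0.0, 2.5, 3.3, 0.1],    # hypothetical station B
    [0.2, 0.1, 2.9, 2.2],    # hypothetical station C
]

with ThreadPoolExecutor() as pool:
    detections = list(pool.map(detect, traces))   # independent per-station tasks
origin_sample = backproject(detections)           # synchronization point
```

The per-station tasks are independent, which is what lets the runtime distribute them; only the back-projection step forces a synchronization, mirroring the structure the abstract describes.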
How to cite: Poiata, N., Conejero, J., Badia, R. M., and Vilotte, J.-P.: Data-streaming workflow for seismic source location with PyCOMPSs parallel computational framework, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-15141, https://doi.org/10.5194/egusphere-egu21-15141, 2021.
In recent years, interest in three-dimensional physico-mathematical models for volcanic plumes has grown, motivated by the need to accurately predict the dispersal patterns of volcanic ash in the atmosphere (to mitigate the risks for civil aviation and for nearby inhabited regions) and pushed by improved remote sensing techniques and measurements. However, limitations due to mesh resolution and numerical accuracy, as well as the complexity entailed by model formulations, have so far prevented a detailed study of turbulence in volcanic plumes at high resolution. Eruptive columns are indeed multiphase gas-particle turbulent flows, in which the largest (integral) scale is of the order of tens or hundreds of kilometers and the smallest scale is of the order of microns. Performing accurate numerical simulations of such phenomena therefore remains a challenging task.
Modern HPC resources and recent model developments enable the study of multiphase turbulent structures of volcanic plumes with an unprecedented level of detail. However, a number of issues in the present model implementation need to be addressed in order to efficiently use the computational resources of modern supercomputing machines. Here we present an overview of an optimization strategy that allows us to perform large parallel simulations of volcanic plumes using ASHEE, a numerical solver based on OpenFOAM and one of the target flagship codes of the project ChEESE (Centre of Excellence for Exascale in Solid Earth). Such optimizations include: mixed-precision floating-point operations to increase computational speed and reduce memory usage, optimal domain decomposition for better communication load balancing, and asynchronous I/O to hide I/O costs. Scaling analyses and volcanic plume simulations are presented to demonstrate the improvement in both computational performance and computing capability.
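One ingredient of such a strategy, balanced domain decomposition, can be illustrated in isolation: the goal is that no MPI rank receives more than one extra cell of work relative to any other. Real OpenFOAM decompositions use graph partitioners and also weigh communication costs; this 1-D sketch is illustrative only.

```python
def decompose(n_cells, n_ranks):
    """Split n_cells into n_ranks contiguous blocks whose sizes differ by at
    most one, so no rank carries more than one extra cell of work."""
    base, extra = divmod(n_cells, n_ranks)
    bounds, start = [], 0
    for r in range(n_ranks):
        size = base + (1 if r < extra else 0)   # first `extra` ranks get one more cell
        bounds.append((start, start + size))
        start += size
    return bounds
```

In three dimensions the same balancing goal applies per partition, and the quality of the split directly drives the communication load balance mentioned above.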
How to cite: Brogi, F., Amati, G., Boga, G., Castellano, M., Gracia, J., Esposti Ongaro, T., and Cerminara, M.: Optimization strategies for efficient high-resolution volcanic plume simulations with OpenFOAM, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-15310, https://doi.org/10.5194/egusphere-egu21-15310, 2021.
Seismic wave propagation is currently computationally prohibitive at the high frequencies relevant for earthquake engineering and civil protection purposes (up to 10 Hz). Developments in high-performance computing (HPC) infrastructures, however, will render routine executions of high-frequency simulations possible, enabling new approaches to assess seismic hazard, such as Seismic Urgent Computing (UC) in the immediate aftermath of an earthquake. The high spatial resolution of near real-time synthetic wavefields could complement existing live data records where dense seismic networks are present, or provide an alternative to live data in regions with low coverage. However, the time to solution for local near-field simulations accounting for frequencies above 1 Hz, as well as the availability of substantial computational resources, pose significant challenges that are incompatible with the requirements of decision makers. Moreover, the simulations require fine tuning of the parameters, as uncertainties in the underlying velocity model and in earthquake source information translate into uncertainties in final results. Estimating such uncertainties on ground motion proxies is non-trivial from a scientific standpoint, especially for the higher frequencies that remain uncharted territory. In this talk we address some of these key challenges and present our progress in the design and development of a prototype Seismic UC service. In the long run, we hope to demonstrate that deterministic modelling of ground motions can indeed contribute to the short-term assessment of seismic hazard.
How to cite: Pienkowska, M., Rodríguez, J. E., de la Puente, J., and Fichtner, A.: Deterministic modelling of seismic waves in the Urgent Computing context: progress towards a short-term assessment of seismic hazard, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-15516, https://doi.org/10.5194/egusphere-egu21-15516, 2021.
The Tjörnes Fracture Zone (TFZ) in North Iceland is the largest and most complex zone of transform faulting in Iceland, formed due to a ridge jump between two spreading centers of the Mid-Atlantic Ridge, the Northern Volcanic Zone and the Kolbeinsey Ridge. Strong earthquakes (Ms>6) have repeatedly occurred in the TFZ and affected the North Icelandic population. In particular, the large historical earthquakes of 1755 (Ms 7.0) and 1872 (doublet, Ms 6.5) have been associated with the Húsavík-Flatey Fault (HFF), which is the largest linear strike-slip transform fault in the TFZ, and in Iceland. We simulate fault rupture on the HFF and the corresponding near-fault ground motion for several potential earthquake scenarios, including scenario events that replicate the large 1755 and 1872 events. Such simulations are relevant for the town of Húsavík in particular, as it is located on top of the HFF and is therefore subject to the highest seismic hazard in the country. Due to the mostly offshore location of the HFF, its precise geometry has only recently been studied in more detail. We compile updated seismological and geophysical information in the area, such as a recently derived three-dimensional velocity model for P and S waves. Seismicity relocations using this velocity model, together with bathymetric and geodetic data, provide detailed information to constrain the fault geometry. In addition, we use this 3D velocity model to simulate seismic wave propagation. For this purpose, we generate a variety of kinematic earthquake-rupture scenarios, and apply a 3D finite-difference method (SORD) to propagate the radiated seismic waves through Earth structure. Slip distributions for the different scenarios are computed using a von Karman autocorrelation function whose parameters are calibrated with slip distributions available for a few recent Icelandic earthquakes.
Simulated scenarios provide synthetic ground-motion time histories and estimates of peak ground motion parameters (PGA and PGV) at low frequencies (<2 Hz) for Húsavík and other main towns in North Iceland, along with maps of ground shaking for the entire region (130 km × 110 km). Ground motion estimates are compared with those provided by empirical ground motion models calibrated to Icelandic earthquakes and with dynamic fault-rupture simulations for the HFF. Directivity effects towards or away from the coastal areas are analyzed to estimate the expected range of shaking. Thick sedimentary deposits (up to ∼4 km thick) located offshore on top of the HFF (reported by seismic, gravity-anomaly and tomographic studies) may affect the effective depth of the fault's top boundary and the surface-rupture potential. The results of this study showcase the extent of expected ground motions from significant and likely earthquake scenarios on the HFF. Finite-fault earthquake simulations complement the currently available information on seismic hazard for North Iceland, and are a first step towards a systematic and large-scale earthquake scenario database for the HFF, and for the entire fault system of the TFZ, that will enable comprehensive and physics-based hazard assessment in the region.
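The von Karman slip generation mentioned above is commonly implemented by shaping white noise in the wavenumber domain with the von Karman power spectrum. A 1-D toy sketch under that assumption follows; the correlation length, Hurst exponent, and normalization choices here are illustrative, not the calibrated values used in the study.

```python
import numpy as np

def von_karman_slip(n, corr_len, hurst, dx=1.0, seed=0):
    """Random 1-D slip profile with a von Karman autocorrelation (toy sketch).

    White noise is shaped in the wavenumber domain by the 1-D von Karman
    power spectrum P(k) ~ (1 + (k*a)^2)^-(H + 1/2), then shifted and scaled
    so the profile is non-negative with unit mean slip.
    """
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n, d=dx) * 2.0 * np.pi           # angular wavenumbers
    psd = (1.0 + (k * corr_len) ** 2) ** (-(hurst + 0.5))
    noise = np.fft.fft(rng.standard_normal(n))
    slip = np.fft.ifft(noise * np.sqrt(psd)).real       # correlated random field
    slip -= slip.min()                                  # slip must be non-negative
    return slip / slip.mean()                           # normalize to unit mean slip
```

In practice the 2-D equivalent is generated over the fault plane and then scaled to the target seismic moment of each scenario.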
How to cite: Abril, C., Mai, M., Halldórsson, B., Li, B., Gabriel, A., and Ulrich, T.: Ground motion simulations for finite-fault earthquake scenarios on the Húsavík-Flatey Fault, North Iceland, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-15557, https://doi.org/10.5194/egusphere-egu21-15557, 2021.
Among other natural hazards, the occurrence and impact of extreme magnitude earthquakes are of great interest from both the scientific and societal points of view. The scarcity of observational instrumental data for these types of events, as well as the urgent need to take mitigation measures to minimize their effects on human life and critical infrastructure, have required the development of computational codes for modeling the propagation of these events.
Examples of the realistic modeling of the propagation of extreme magnitude earthquakes that can be achieved by the use of powerful HPC facilities and 3D finite difference Fortran codes have been presented by Cabrera et al. 2007 and Chavez et al. 2016. These large-scale scientific simulations generate vast amounts of data; writing such data out to storage step by step is very slow and requires expensive I/O post-processing procedures for analysis. However, the current and foreseen major advances in Exascale HPC systems offer a transformational approach to the research community, as well as the possibility of contributing to the solution of urgent and complex problems that society is facing or will face in the years to come.
Taking into account the future Exascale developments, and in order to speed up in situ analysis, i.e., analyzing data while the simulations are running, in this ongoing research we present the main computational characteristics of the hybrid system we are developing for the near real-time simulation and visualization of realistic 3D wave propagation of extreme-magnitude earthquakes. The system is based on an updated version of the staggered finite-difference Fortran code 3DWPFD, coupled with an efficient C++ visualization code. The system is being developed on the hybrid HPC system Miztli at UNAM, Mexico, made up of CPUs (8344 cores) and GPUs (16 NVIDIA M2090 and 8 V100). We expect to fully adapt the code for emerging hybrid Exascale architectures in the near future. Examples of the results obtained by using the hybrid system to model the propagation of the extreme-magnitude Mw 8.2 earthquake that occurred on 7 September 2017 in southern Mexico will be presented.
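The staggered finite-difference approach underlying codes like 3DWPFD can be illustrated with a 1-D velocity-stress toy analogue; all parameter values and the Gaussian source below are illustrative assumptions, not those of the actual Fortran code.

```python
import numpy as np

def propagate_1d(nx=200, nt=300, dx=10.0, dt=1e-3, c=2000.0, rho=2500.0):
    """1-D velocity-stress staggered-grid finite differences: a toy analogue of
    the 3-D staggered schemes used in earthquake codes (parameters illustrative).
    CFL number here is c*dt/dx = 0.2, safely below the stability limit.
    """
    mu = rho * c * c                      # shear modulus from wave speed and density
    v = np.zeros(nx)                      # particle velocity at integer grid points
    s = np.zeros(nx)                      # stress at half-integer (staggered) points
    for it in range(nt):
        # Leapfrog: update velocity from the stress gradient ...
        v[1:] += (dt / (rho * dx)) * (s[1:] - s[:-1])
        # ... inject a Gaussian source pulse at the domain center ...
        v[nx // 2] += np.exp(-(((it * dt) - 0.05) / 0.01) ** 2)
        # ... then update stress from the velocity gradient.
        s[:-1] += (dt * mu / dx) * (v[1:] - v[:-1])
    return v, s
```

The in situ approach described above amounts to handing arrays like `v` to the visualization code at selected time steps instead of writing every snapshot to disk.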
How to cite: Cabrera Flores, E. C., Chavez, M., and Salazar, A.: A hybrid system for the near real-time modeling and visualization of extreme magnitude earthquakes, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-16508, https://doi.org/10.5194/egusphere-egu21-16508, 2021.