EGU21-6786, updated on 28 Apr 2023
https://doi.org/10.5194/egusphere-egu21-6786
EGU General Assembly 2021
© Author(s) 2023. This work is distributed under
the Creative Commons Attribution 4.0 License.

Implementation of a numerical methodology for the stochastic characterization of the Valdivia 1960 9.5 Mw tsunami source.

Rodrigo Cifuentes-Lobos1, Ignacia Calisto1, Cristian Saavedra2, Franchesca Ormeño1, Javiera San Martín1, and Matías Fernandez1

  • 1Departamento de Geofísica, Universidad de Concepción, Chile
  • 2Departamento Ciencias de la Tierra, Universidad de Concepción, Chile


Probabilistic Tsunami Hazard Assessment (PTHA) provides a variety of mathematical and numerical tools for evaluating the long-term exposure of coastal communities to tsunami-related hazards. Among these, the logic tree method stands out for its usefulness in generating random slip models and in dealing with epistemic and aleatory uncertainties, both key elements for the stochastic study of tsunami scenarios. By combining parameters that define a source model (such as magnitude and rupture limits), this method makes it possible to create a vast number of random source models that can be used not only to assess future, long-term hazard, but also, in conjunction with data and observations from past tsunamis and earthquakes, to study those events.
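As an illustration of how such a logic tree can be sampled, the following minimal Python sketch draws random source scenarios from hypothetical branches. The branch values, weights, and parameter names are assumptions for illustration only and do not reproduce the authors' actual implementation.

import numpy as np

rng = np.random.default_rng(seed=1960)

# Hypothetical logic-tree nodes: each node lists candidate branch values
# and their weights (which must sum to 1). All numbers are illustrative.
logic_tree = {
    "magnitude_mw":    ([9.3, 9.4, 9.5, 9.6], [0.2, 0.3, 0.3, 0.2]),
    "north_limit_deg": ([-37.5, -38.0, -38.5], [0.3, 0.4, 0.3]),
    "south_limit_deg": ([-45.5, -46.0, -46.5], [0.3, 0.4, 0.3]),
    "updip_depth_km":  ([5.0, 10.0, 15.0], [0.3, 0.4, 0.3]),
}

def sample_scenario(tree, rng):
    """Draw one random source scenario by picking one branch per node."""
    return {name: float(rng.choice(values, p=weights))
            for name, (values, weights) in tree.items()}

# Generate an ensemble of random source scenarios to feed the later filters.
scenarios = [sample_scenario(logic_tree, rng) for _ in range(10_000)]
print(scenarios[0])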

This study proposes a numerical methodology, based on the logic tree method, for generating random tsunami source models for the study of paleotsunamis and historical tsunamis. Here, the methodology is tested with data from the great Valdivia 1960 9.5 Mw earthquake and tsunami. The random source models are first filtered using empirical relations between magnitude and rupture dimensions, or rupture aspect ratios. The models that pass this filter are used to compute seafloor deformation with the Okada (1985) method. These deformation fields are then filtered using geodetic data and observations associated with the event of interest, eliminating all models that do not satisfy these observations. The models that pass this filter are used as inputs for tsunami modelling in a staged scheme: first, low-resolution topobathymetry grids are used to check whether tsunami waves are registered at locations known to have been inundated, eliminating the models that do not show this behaviour; second, using the deformation models that pass this filter as input, high-resolution grids are used to model the tsunami, estimate run-up and inundation, and compare them with reliable historical accounts and sedimentological observations. Finally, the models that pass all of the filters above are subjected to statistical analysis and compared with existing models of the Valdivia 1960 earthquake. An illustrative sketch of the first two filtering steps is given below.
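The following Python sketch shows, under stated assumptions, how the first two filters might look: a scaling-law check on rupture dimensions and a misfit check of predicted vertical deformation against geodetic or coastal uplift/subsidence observations. The scaling coefficients, tolerance, and misfit threshold are illustrative placeholders, not values taken from the study.

import numpy as np

def expected_dimensions(mw, a_len=-2.5, b_len=0.59, a_wid=-0.9, b_wid=0.35):
    """Expected rupture length and width (km) from an empirical scaling law
    of the form log10(dimension) = a + b * Mw. The coefficients are
    illustrative placeholders, not a specific published regression."""
    length_km = 10.0 ** (a_len + b_len * mw)
    width_km = 10.0 ** (a_wid + b_wid * mw)
    return length_km, width_km

def passes_geometry_filter(mw, length_km, width_km, tol=0.5):
    """Keep a model only if its rupture dimensions lie within (1 +/- tol)
    of the values predicted by the scaling relation (tol is an assumption)."""
    exp_len, exp_wid = expected_dimensions(mw)
    ok_len = (1 - tol) * exp_len <= length_km <= (1 + tol) * exp_len
    ok_wid = (1 - tol) * exp_wid <= width_km <= (1 + tol) * exp_wid
    return ok_len and ok_wid

def passes_geodetic_filter(predicted_uz, observed_uz, sigma, max_rms=1.5):
    """Compare predicted vertical deformation (e.g. from an Okada model) at
    observation sites with geodetic or coastal uplift/subsidence data and
    reject models whose weighted RMS misfit exceeds max_rms (an assumed
    threshold)."""
    residuals = (np.asarray(predicted_uz) - np.asarray(observed_uz)) / np.asarray(sigma)
    return float(np.sqrt(np.mean(residuals ** 2))) <= max_rms

# Example usage with made-up numbers:
print(passes_geometry_filter(9.5, length_km=900.0, width_km=150.0))
print(passes_geodetic_filter([1.2, -0.8], [1.0, -1.0], sigma=[0.3, 0.3]))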

As stated above, and owing to the large number of published studies, data, historical accounts, and source models available, the Valdivia 1960 9.5 Mw earthquake is used as a benchmark to test this methodology and to assess whether the random models that pass every filter converge towards the existing source models. It is important to note that, although this methodology was designed to study historical tsunamis and paleotsunamis, it will only be tested here with modern, well-documented events such as Valdivia 1960.

How to cite: Cifuentes-Lobos, R., Calisto, I., Saavedra, C., Ormeño, F., San Martín, J., and Fernandez, M.: Implementation of a numerical methodology for the stochastic characterization of the Valdivia 1960 9.5 Mw tsunami source., EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-6786, https://doi.org/10.5194/egusphere-egu21-6786, 2021.