ITS4.3/NH1 | EDI
Data Science and Machine Learning for Geohazard
Co-organized by GM2/HS12/SM1
Convener: Hui Tang (ECS) | Co-conveners: Jonathan Bedford (ECS), Fabio Corbi, Michaela Wenner (ECS)
vPICO presentations | Thu, 29 Apr, 11:45–12:30 (CEST)

vPICO presentations: Thu, 29 Apr

Chairpersons: Hui Tang, Jonathan Bedford, Michaela Wenner
11:45–11:47 | EGU21-1304 | ECS
Vivien Zahs, Benjamin Herfort, Julia Kohns, Tahira Ullah, Katharina Anders, Lothar Stempniewski, Alexander Zipf, and Bernhard Höfle

Timely and reliable information on earthquake-induced building damage plays a critical role in the effective planning of rescue and remediation actions. Automatic damage assessment based on the analysis of 3D point clouds (e.g., from photogrammetry or LiDAR) or georeferenced image data can provide fast and objective information on the damage situation within a few hours. So far, studies have often been limited to the distinction of only two damage classes (e.g., damaged or not damaged) and to information provided by 2D image data. Beyond-binary assessment of multiple grades of damage is challenging, e.g., due to the variety of damage characteristics and the limited transferability of trained algorithms to unseen data and other geographic regions. Detailed damage assessment based on full 3D information is, however, required to enable efficient use and distribution of resources and to evaluate the structural stability of buildings. Further, the identification of slightly damaged buildings is essential to estimate their vulnerability to severe damage in potential aftershock events.

In our work, we propose an interdisciplinary approach for timely and reliable assessment of multiple building-specific damage grades (0-5) from post- (and pre-) event UAV point clouds and images of high resolution (centimeter point spacing or pixel size). We combine the expert knowledge of earthquake engineers with fully automatic damage classification and human visual interpretation from web-based crowdsourcing. While automatic approaches enable an objective and fast analysis of large 3D data, the ability of humans to visually interpret details in the data can be used as (1) a validation of the automatic classification and (2) an alternative method where the automatic approach shows high levels of uncertainty.

We develop a damage catalogue that categorizes typical geometric and radiometric damage patterns for each damage grade. Therein, we consider the influence of building material and region-specific building design on damage characteristics. Moreover, the damage patterns include observations from previous earthquakes to ensure practical applicability. On the one hand, the catalogue serves as the decision basis for the automatic classification of building-specific damage using machine learning. On the other hand, it is used to design quick and easy single damage mapping tasks that volunteers can solve within seconds (Micro-Mapping; Herfort et al. 2018). A further novelty of our approach is the combination of the strengths of machine learning for point cloud-based damage classification with visual interpretation by human contributors through Micro-Mapping tasks. The optimal combination and weighted fusion of both methods depends on event-specific conditions (e.g., data availability and quality, temporal constraints, spatial scale, extent of damage).
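To make the weighted fusion just described concrete, the following minimal sketch combines per-building damage-grade probabilities from an automatic classifier with Micro-Mapping vote counts; all function names, weights, and numbers are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of the weighted-fusion idea: per-building damage-grade
# probabilities from an automatic point-cloud classifier are combined with
# normalized crowdsourcing votes. Names and numbers are illustrative only.
import numpy as np

DAMAGE_GRADES = np.arange(6)  # grades 0-5

def fuse_damage_estimates(ml_probs, crowd_votes, ml_weight=0.6):
    """Weighted fusion of classifier probabilities and crowd votes.

    ml_probs    : array of shape (6,), softmax output of the automatic classifier
    crowd_votes : array of shape (6,), vote counts from Micro-Mapping tasks
    ml_weight   : trust placed in the automatic method (event-specific choice)
    """
    crowd_probs = crowd_votes / max(crowd_votes.sum(), 1)          # normalize votes
    fused = ml_weight * ml_probs + (1.0 - ml_weight) * crowd_probs
    return DAMAGE_GRADES[np.argmax(fused)], fused

# Example: classifier is uncertain between grades 2 and 3, crowd leans towards 3.
ml_probs = np.array([0.02, 0.08, 0.40, 0.38, 0.08, 0.04])
crowd_votes = np.array([0, 1, 2, 7, 1, 0])
grade, fused = fuse_damage_estimates(ml_probs, crowd_votes, ml_weight=0.5)
print(grade, fused.round(2))
```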

By considering observations from previous earthquakes and the influence of building design and structure on potential damage characteristics, our approach is intended to be applicable to events in different geographic regions. By combining automated and crowdsourcing methods, reliable and detailed damage information at the scale of large cities can be provided within a few days.

 

References

Herfort, B., Höfle, B. & Klonner, C. (2018): 3D micro-mapping: Towards assessing the quality of crowdsourcing to support 3D point cloud analysis. ISPRS Journal of Photogrammetry and Remote Sensing. Vol. 137, pp. 73-83.

How to cite: Zahs, V., Herfort, B., Kohns, J., Ullah, T., Anders, K., Stempniewski, L., Zipf, A., and Höfle, B.: 3D point cloud-based assessment of detailed building damage through a combination of machine learning, crowdsourcing and earthquake engineering, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-1304, https://doi.org/10.5194/egusphere-egu21-1304, 2021.

11:47–11:49 | EGU21-2496
Ciro Del Negro, Claudia Corradino, Eleonora Amato, Federica Torrisi, and Sonia Calvari

The persistent explosive activity of Stromboli is characterized by several hundred moderate-intensity events per day. These explosions eject pyroclastic fragments to heights of some tens of meters, and the fragments fall a short distance from the summit vents. Occasionally, major explosions eject pyroclastic material to heights of a few hundred meters or more, which can fall outside the crater terrace onto the area visited by tourists. The frequency of these phenomena is variable, with an average of 2 events per year. Paroxysms, violent explosions that produce eruptive columns more than 3 km high and are often associated with pyroclastic flows, can also occur at Stromboli. Ballistic blocks associated with these explosions can reach up to 4 m in diameter and fall on the inhabited areas. Paroxysms are rare (5 events in the last 20 years) and their occurrence frequency varies over time. Nevertheless, major explosions and paroxysms represent the main danger to visitors and inhabitants of Stromboli Island. Here, we propose a novel approach to detect and classify the type of explosive activity occurring at Stromboli volcano by combining radar and optical satellite imagery with machine learning algorithms. In particular, we consider the plume height, the summit-area temperature, and the area affected by large ballistic projectiles as the discriminant factors to distinguish between ordinary activity, major explosions and paroxysms. These factors are retrieved from both radar (Sentinel-1 GRD) and multi-spectral (Landsat MSI and TIR) satellite images and fed to a machine learning classifier. A retrospective analysis is conducted investigating the main explosive events that have occurred since 1983. The algorithm is implemented in Google Earth Engine (GEE), a cloud computing platform for environmental data analysis from local to planetary scales, with fast access to and processing of satellite data from different missions.
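As a hedged illustration of the classification step, the sketch below feeds the three discriminant factors named above to a random forest; the choice of classifier, the feature values, and the labels are placeholders, since the abstract does not specify the authors' exact setup.

```python
# Illustrative sketch only: plume height, summit-area temperature, and
# ballistic-affected area are fed to a generic classifier. A random forest is
# used as a stand-in; the training values below are made-up placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# columns: plume height (km), summit temperature anomaly (K), ballistic area (km^2)
X_train = np.array([
    [0.1,  5.0, 0.00],   # ordinary Strombolian activity
    [0.4, 20.0, 0.02],   # major explosion
    [3.5, 80.0, 0.50],   # paroxysm
    [0.2,  8.0, 0.00],
    [0.6, 25.0, 0.05],
    [4.0, 90.0, 0.80],
])
y_train = ["ordinary", "major", "paroxysm", "ordinary", "major", "paroxysm"]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(clf.predict([[0.5, 22.0, 0.03]]))  # -> likely "major"
```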

How to cite: Del Negro, C., Corradino, C., Amato, E., Torrisi, F., and Calvari, S.: Machine learning classifiers for detecting and classifying major explosions and paroxysms at Stromboli volcano (Italy) using radar and optical satellite imagery, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-2496, https://doi.org/10.5194/egusphere-egu21-2496, 2021.

11:49–11:51 | EGU21-2788 | ECS
René Steinmann, Leonard Seydoux, and Michel Campillo

Seismic datasets contain an enormous amount of information and a large variety of signals with different origins. We usually observe signatures of earthquakes, volcanic and non-volcanic tremors, rockfalls, road and air traffic, atmospheric perturbations and many other acoustic emissions. More and more seismic sensors are deployed worldwide and record the seismic wavefield continuously, generating massive volumes of data that cannot be analyzed manually within a reasonable time. Therefore, identifying classes of signals in seismic data with automatic strategies is a crucial step towards understanding the underlying physics of geological objects. For that reason, seismologists have developed various tools to detect and classify certain types of signals. Recently, machine learning has gained much attention due to its ability to recognize patterns. While supervised learning is a great tool for detecting and classifying signals within already-known classes, it cannot be used to infer new classes of signals and can be strongly biased by the labels we impose. We propose to overcome this limitation with unsupervised learning. In this study, we present a new way to explore single-station continuous seismic data with a dendrogram produced by agglomerative clustering. Our method is motivated by the idea that labels in a seismic data set follow a hierarchical order with different levels of detail. For example, earthquakes belong to the larger class of stationary signals and can also be divided into subclasses with different focal mechanisms or magnitudes. We first use a scattering network (a convolutional neural network that makes use of wavelet filters) to extract a multi-scale representation of the continuous seismic waveforms. We then select the most meaningful features by means of independent component analysis and apply agglomerative clustering on this representation. We finally explore the dendrogram in a systematic way in order to identify the different signal classes revealed by the strategy. We illustrate our method on seismic data continuously recorded in the vicinity of the North Anatolian fault, in Turkey. During this time period, a seismic crisis with more than 200 micro-earthquakes occurred, together with many other anthropogenic and meteorological events. By exploring the classes revealed by the dendrogram with a posteriori signal features (occurrence, within-class correlations, etc.), we show that the strategy is capable of retrieving the seismic crisis as well as signals related to anthropogenic and meteorological activities.
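A minimal sketch of the clustering stage is given below, assuming the scattering-network coefficients have already been computed for each time window (replaced here by random placeholders); the feature dimensions, number of components, and cluster count are illustrative assumptions, not the authors' settings.

```python
# Sketch of the pipeline: (precomputed) scattering coefficients -> ICA feature
# selection -> agglomerative clustering -> dendrogram exploration.
import numpy as np
from sklearn.decomposition import FastICA
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

rng = np.random.default_rng(0)
scattering_coeffs = rng.standard_normal((500, 128))   # 500 windows x 128 features (placeholder)

# select the most meaningful features with independent component analysis
ica = FastICA(n_components=10, random_state=0)
features = ica.fit_transform(scattering_coeffs)

# agglomerative (hierarchical) clustering on the ICA features
Z = linkage(features, method="ward")

# explore the hierarchy: e.g., cut the dendrogram into 5 candidate signal classes
labels = fcluster(Z, t=5, criterion="maxclust")
print(np.bincount(labels)[1:])

# dendrogram(Z) can then be plotted with matplotlib for visual exploration
```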

How to cite: Steinmann, R., Seydoux, L., and Campillo, M.: Hierarchical exploration of single station seismic data with unsupervised learning, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-2788, https://doi.org/10.5194/egusphere-egu21-2788, 2021.

11:51–11:53 | EGU21-3524 | ECS
Claudia Hulbert, Romain Jolivet, Blandine Gardonio, Paul Johnson, Christopher Ren, and Bertrand Rouet-Leduc

Active faults release tectonic stress imposed by plate motion through a spectrum of slip modes, from slow, aseismic slip, to dynamic, seismic events. Slow earthquakes are often associated with tectonic tremor, non-impulsive signals that can easily be buried in seismic noise and go undetected. 

We present a new methodology aimed at improving the detection and location of tremors hidden within seismic noise. After detecting tremors with a classic convolutional neural network, we rely on neural network attribution to extract core tremor signatures. By identifying and extracting tremor characteristics, in particular in the frequency domain, the attribution analysis allows us to uncover structure in the data and denoise the input waveforms. We show that these cleaned signals correspond to waveforms traveling in the Earth's crust and mantle at wavespeeds consistent with local estimates. We then use the cleaned waveforms to locate tremors with standard array-based techniques.
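The following sketch illustrates the general idea of gradient-based attribution on a waveform classifier; the toy network, input dimensions, and the specific attribution scheme are assumptions for illustration and are not taken from the authors' model.

```python
# Generic saliency-style attribution for a 1D waveform classifier (PyTorch).
import torch
import torch.nn as nn

class TremorCNN(nn.Module):
    """Toy 1D CNN standing in for the tremor detection network."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=15, padding=7), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),
        )
        self.head = nn.Linear(8 * 32, 2)   # tremor vs. noise

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TremorCNN().eval()
waveform = torch.randn(1, 1, 3000, requires_grad=True)   # 30 s at 100 Hz (placeholder)

score = model(waveform)[0, 1]                 # logit of the "tremor" class
score.backward()
attribution = waveform.grad.abs().squeeze()   # per-sample attribution

# Samples (or frequency bands) with high attribution can be kept while the
# rest is suppressed, yielding "cleaned" waveforms for array-based location.
print(attribution.shape)
```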

We apply this method to the Cascadia subduction zone. We analyze a slow slip event that occurred in 2018 below the southern end of Vancouver Island, Canada, where we identify tremor patches consistent with existing catalogs. Having validated our new methodology in a well-studied area, we further apply it to various tectonic contexts and discuss the implications of tremor occurrence for the interplay between seismic and aseismic slip.

How to cite: Hulbert, C., Jolivet, R., Gardonio, B., Johnson, P., Ren, C., and Rouet-Leduc, B.: Tremor Waveform Denoising and Automatic Location with Neural Network Interpretation, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3524, https://doi.org/10.5194/egusphere-egu21-3524, 2021.

11:53–11:55 | EGU21-3755 | ECS
Muhammad Fulki Fadhillah, SeulKi Lee, and Chang-Wook Lee

Time-series InSAR techniques, such as the Stanford Method for Persistent Scatterers (StaMPS), are commonly used to measure time-series surface deformation. This study presents a novel approach to optimized time-series deformation analysis based on a support vector regression (SVR) algorithm and Optimized Hot-Spot Analysis (OHSA) on persistent scatterers (PS). To examine the performance of the optimized time-series processing, we generated synthetic interferograms using a Mogi model to construct a simulated surface deformation phase. Topographic error, simulated orbital error and atmospheric error phases were added to the synthetic interferograms. All synthetic interferograms are based on Sentinel-1 SAR image acquisition dates over Seoul, Korea. The SVR algorithm was used to find optimum measurement points and reduce error points in the time-series analysis. The OHSA approach was then applied to the optimum measurement points through the analysis of Getis-Ord Gi* statistics. As a result, the optimized measurement points yield refined results in the mean velocity deformation map and the time-series graphs. In addition, the detection accuracy can be improved by more than 10% with the synthetic data, the correlation coefficient between the optimization result and the deformation model shows good agreement (> 0.8), and the standard deviation of the time-series results can be reduced by more than 7% after the optimization. The proposed method is useful for detecting low deformation rates and can be applied to a range of deformation cases.
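As a rough sketch of how SVR can be used to screen persistent-scatterer time series, the example below fits an SVR to one point's displacement history and uses the residual scatter as a quality measure; the data, kernel settings, and threshold are hypothetical and not taken from this study.

```python
# Illustrative use of support vector regression on a PS displacement time series:
# points whose SVR residuals are large would be treated as noisy and rejected.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
t = np.arange(0, 2.0, 1 / 12.0)[:, None]                            # ~2 years of 12-day acquisitions
displacement = -8.0 * t.ravel() + rng.normal(0, 1.5, t.shape[0])    # mm, linear trend + noise

svr = SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(t, displacement)
residual_std = np.std(displacement - svr.predict(t))

keep_point = residual_std < 3.0   # hypothetical quality threshold in mm
print(f"residual std = {residual_std:.2f} mm, keep = {keep_point}")
```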

How to cite: Fadhillah, M. F., Lee, S., and Lee, C.-W.: Optimization of the time series surface deformation analysis using machine learning algorithms on the interferogram simulation data, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3755, https://doi.org/10.5194/egusphere-egu21-3755, 2021.

11:55–11:57 | EGU21-4718 | ECS
Jannes Münchmeyer, Dino Bindi, Ulf Leser, and Frederik Tilmann

The estimation of earthquake source parameters, in particular magnitude and location, in real time is one of the key tasks for earthquake early warning and rapid response. In recent years, several publications introduced deep learning approaches for these fast assessment tasks. Deep learning is well suited for these tasks, as it can work directly on waveforms and can learn features and their relation from data.

A drawback of deep learning models is their lack of interpretability, i.e., it is usually unknown what reasoning the network uses. Due to this issue, it is also hard to estimate how the model will handle new data whose properties differ in some aspects from the training set, for example earthquakes in previously seismically quiet regions. The discussions in previous studies usually focused on the average performance of models and did not consider this point in any detail.

Here we analyze a deep learning model for real time magnitude and location estimation through targeted experiments and a qualitative error analysis. We conduct our analysis on three large scale regional data sets from regions with diverse seismotectonic settings and network properties: Italy and Japan with dense networks (station spacing down to 10 km) of strong motion sensors, and North Chile with a sparser network (station spacing around 40 km) of broadband stations.

We obtained several key insights. First, the deep learning model does not seem to follow the classical approaches for magnitude and location estimation. For magnitude, one would classically expect the model to estimate attenuation, but the network rather seems to focus its attention on the spectral composition of the waveforms. For location, one would expect a triangulation approach, but our experiments instead show indications of a fingerprinting approach. Second, we can pinpoint the effect of training data size on model performance. For example, a four times larger training set reduces average errors for both magnitude and location prediction by more than half, and reduces the required time for real time assessment by a factor of four. Third, the model fails for events with few similar training examples. For magnitude, this means that the largest events are systematically underestimated. For location, events in regions with few events in the training set tend to get mislocated to regions with more training events. These characteristics can have severe consequences in downstream tasks like early warning and need to be taken into account for future model development and evaluation.
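For readers unfamiliar with this class of models, the sketch below shows a generic waveform-to-magnitude network of the kind analyzed here; the architecture, layer sizes, and station-pooling scheme are arbitrary placeholders and do not reproduce the authors' model.

```python
# Toy waveform-based magnitude regressor (PyTorch): raw multi-station waveforms
# are mapped directly to a magnitude estimate.
import torch
import torch.nn as nn

class MagnitudeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=15, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=15, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.regressor = nn.Linear(32, 1)

    def forward(self, waveforms):
        # waveforms: (batch, stations, 3 components, samples)
        b, s, c, n = waveforms.shape
        station_features = self.encoder(waveforms.view(b * s, c, n)).squeeze(-1)
        # pool over stations so the network handles variable station geometry
        pooled = station_features.view(b, s, -1).mean(dim=1)
        return self.regressor(pooled).squeeze(-1)

model = MagnitudeNet()
batch = torch.randn(4, 10, 3, 3000)      # 4 events, 10 stations, 3 components
print(model(batch).shape)                # -> torch.Size([4])
```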

How to cite: Münchmeyer, J., Bindi, D., Leser, U., and Tilmann, F.: Insights into deep learning for earthquake magnitude and location estimation, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-4718, https://doi.org/10.5194/egusphere-egu21-4718, 2021.

11:57–11:59 | EGU21-7603 | ECS
Felix Eckel, Horst Langer, and Mariangela Sciotto

Mount Etna, Europe's largest and most active volcano, is situated close to the metropolitan area of Catania, home to about one million inhabitants. Continuous monitoring has therefore been carried out for decades. Among the various monitoring disciplines, infrasound recordings play an important role in this context. Explosive activity near or above the ground, as well as shallow tremor processes, is easier to identify with airborne sound waves than with seismic waves, which are significantly scattered and refracted in the volcanic edifice. However, infrasound signals are often affected by noise, especially wind noise in the summit area.

At Mount Etna, five summit craters with fluctuating levels of activity are currently known. This leads to a wide variety of infrasound signal patterns superimposed on changing noise levels. Manually distinguishing noisy data from genuine volcanogenic signals requires considerable effort and expert knowledge. We therefore apply unsupervised pattern recognition techniques to this task. Extracting features from the amplitude spectrum, we are able to distinguish different infrasound regimes with Self-Organizing Maps (SOMs). SOMs allow us to color-code the results for an intuitive interpretation and reveal the presence of transitional activity regimes. To train the SOM, we define a reference data set from multiple months of infrasound waveforms that includes as many activity regimes as possible. This enables a straightforward interpretation of new data.
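A minimal sketch of the SOM step is given below, assuming amplitude-spectrum features have already been extracted per infrasound window; the third-party minisom package, the map size, and the feature binning are illustrative choices, not the authors' implementation.

```python
# Sketch: log amplitude-spectrum features per window -> Self-Organizing Map.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
windows = rng.standard_normal((2000, 1024))           # placeholder infrasound windows

# features: log amplitude spectrum of each window, coarsely binned
spectra = np.abs(np.fft.rfft(windows, axis=1))
features = np.log10(spectra[:, :512].reshape(2000, 64, 8).mean(axis=2) + 1e-12)

som = MiniSom(8, 8, features.shape[1], sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(features, 5000)

# each node of the 8x8 map represents a candidate infrasound regime;
# new windows are assigned to the map node with the most similar spectrum
node = som.winner(features[0])
print(node)
```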

How to cite: Eckel, F., Langer, H., and Sciotto, M.: Identification of Infrasound Regimes at Mount Etna using Pattern Recognition Techniques, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-7603, https://doi.org/10.5194/egusphere-egu21-7603, 2021.

11:59–12:01 | EGU21-9670 | ECS
Marko Njirjak, Erik Otović, Dario Jozinović, Jonatan Lerga, Goran Mauša, Alberto Michelini, and Ivan Štajduhar

The analysis of non-stationary signals is often performed on raw waveform data or on Fourier transformations of those data, i.e., spectrograms. However, the possibility that alternative time-frequency representations are more informative than spectrograms or the original data remains largely unstudied. In this study, we tested whether alternative time-frequency representations could be more informative for machine learning classification of seismic signals. This hypothesis was assessed by training three well-established convolutional neural networks, using nine different time-frequency representations, to classify seismic waveforms as earthquake or noise. The results were compared to the base model, which was trained on the raw waveform data. The signals used in the experiment were seismogram instances from the LEN-DB seismological dataset (Magrini et al. 2020). The results demonstrate that the Pseudo Wigner-Ville and Wigner-Ville time-frequency representations yield significantly better results than the base model, while Margenau-Hill performs significantly worse (P < .01). Interestingly, the spectrogram, which is often used in non-stationary signal analysis, did not yield statistically significant improvements. This research could have a notable impact in the field of seismology because signals that were previously hidden in the seismic noise can now be classified more accurately. Moreover, the results might suggest that alternative time-frequency representations could be used in other fields that work with non-stationary time series to extract more valuable information from the original data. Potential applications include other areas of geophysics, speech recognition, EEG and ECG signals, gravitational waves, and so on. This, however, requires further research.
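To make the compared representations concrete, the sketch below implements a textbook discrete pseudo Wigner-Ville distribution with NumPy; it is an illustrative formulation with arbitrary window length, not the code used in the study.

```python
# Discrete pseudo Wigner-Ville distribution of a toy non-stationary signal.
import numpy as np
from scipy.signal import hilbert

def pseudo_wigner_ville(x, half_window=64):
    """Pseudo Wigner-Ville distribution of a real 1D signal."""
    z = hilbert(x)                       # analytic signal avoids interference from negative frequencies
    n_samples = len(z)
    tfr = np.zeros((n_samples, 2 * half_window), dtype=complex)
    for n in range(n_samples):
        for m in range(-half_window, half_window):
            if 0 <= n + m < n_samples and 0 <= n - m < n_samples:
                tfr[n, m + half_window] = z[n + m] * np.conj(z[n - m])
    # Fourier transform over the lag axis gives the time-frequency plane
    return np.abs(np.fft.fftshift(np.fft.fft(tfr, axis=1), axes=1))

t = np.linspace(0, 1, 512)
chirp = np.sin(2 * np.pi * (20 + 60 * t) * t)      # toy non-stationary signal
tfr_chirp = pseudo_wigner_ville(chirp)
print(tfr_chirp.shape)                             # (512, 128): time x frequency
```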

How to cite: Njirjak, M., Otović, E., Jozinović, D., Lerga, J., Mauša, G., Michelini, A., and Štajduhar, I.: Machine Learning Classification of Cohen's Class Time-Frequency Representations of Non-Stationary Signals: Effects on Earthquake Detection, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9670, https://doi.org/10.5194/egusphere-egu21-9670, 2021.

12:01–12:03 | EGU21-12142 | ECS
Erik Otović, Marko Njirjak, Dario Jozinović, Goran Mauša, Alberto Michelini, and Ivan Štajduhar

In this study, we compared the performance, on time series data, of machine learning models trained using transfer learning with that of models trained from scratch. Four machine learning models were used in the experiment: two taken from the field of seismology and two general-purpose models for time series data. The accuracy of the selected models was systematically observed and analyzed when transferring within the same application domain (seismology) as well as between different application domains (seismology, speech, medicine, finance). In seismology, we used two databases of local earthquakes (one in counts, and the other with the instrument response removed) and a database of global earthquakes for predicting earthquake magnitude; the other datasets targeted classifying spoken words (speech), predicting stock prices (finance) and classifying muscle movement from EMG signals (medicine).
In practice, it is very demanding and sometimes impossible to collect labeled datasets large enough to successfully train a machine learning model. Therefore, in our experiment, we used reduced datasets of 1,500 and 9,000 data instances to mimic such conditions. Using the same scaled-down datasets, we trained two sets of machine learning models: those trained with transfer learning and those trained from scratch. We compared the performance of each pair of models in order to draw conclusions about the utility of transfer learning. To confirm the validity of the obtained results, we repeated the experiments several times and applied statistical tests to confirm the significance of the results. The study shows when, within this experimental framework, the transfer of knowledge improved model accuracy and convergence rate.

Our results show that it is possible to achieve better performance and faster convergence by transferring knowledge from the domain of global earthquakes to the domain of local earthquakes, and sometimes also vice versa. Improvements in seismology can, however, sometimes also be achieved by transferring knowledge from the medical and audio domains. The results further show that knowledge transfer between the other domains brought even more significant improvements than transfer within the field of seismology. For example, models in the sound recognition domain achieved much better performance compared to classical models, and the sound recognition domain proved very compatible with knowledge from other domains. We came to similar conclusions for the domains of medicine and finance. Ultimately, the paper offers suggestions on when transfer learning is useful, and the explanations offered can provide a good starting point for knowledge transfer using time series data.
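A hedged sketch of the transfer-learning procedure discussed above: a network pretrained on a source domain is reused on a small target dataset by freezing its feature extractor and retraining only the head. The architecture, names, and data below are placeholders, not the authors' models.

```python
# Transfer learning for time series (PyTorch): reuse a pretrained feature
# extractor, freeze it, and fine-tune only a new head on the small target set.
import torch
import torch.nn as nn

class TimeSeriesNet(nn.Module):
    def __init__(self, n_outputs=1):
        super().__init__()
        self.feature_extractor = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_outputs)

    def forward(self, x):
        return self.head(self.feature_extractor(x))

# "pretrained" source-domain model (weights would normally be loaded from disk)
source_model = TimeSeriesNet(n_outputs=1)

# transfer: copy the feature extractor, freeze it, train a fresh head
target_model = TimeSeriesNet(n_outputs=1)
target_model.feature_extractor.load_state_dict(source_model.feature_extractor.state_dict())
for p in target_model.feature_extractor.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(target_model.head.parameters(), lr=1e-3)
x, y = torch.randn(32, 3, 1000), torch.randn(32, 1)     # small target batch (placeholder)
loss = nn.functional.mse_loss(target_model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```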

How to cite: Otović, E., Njirjak, M., Jozinović, D., Mauša, G., Michelini, A., and Štajduhar, I.: Intra-domain and cross-domain transfer learning for time series, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12142, https://doi.org/10.5194/egusphere-egu21-12142, 2021.

12:03–12:05 | EGU21-12791
Monique Kuglitsch

The International Telecommunication Union (ITU), World Meteorological Organization (WMO), and United Nations Environment Programme (UNEP) have recently partnered to establish the Focus Group on Artificial Intelligence for Natural Disaster Management (FG-AI4NDM). FG-AI4NDM is exploring the potential of AI-based algorithms to support data collection and handling, to improve modeling (i.e., reconstructions, forecasts, and projections) across spatiotemporal scales by extracting complex patterns (and gaining insights) from a growing volume of geospatial data, and to provide effective communication. To achieve these objectives, FG-AI4NDM is building an interdisciplinary, multi-stakeholder, and international community to explore specific natural disaster use cases. Special effort is made to support participation from low- and middle-income countries and from countries shown to be particularly impacted by these types of events. Here, we will explore what an ITU focus group is, what the objectives and planned deliverables of FG-AI4NDM are, what progress has been made since its inception, and how members of the geoscience community can become involved.

How to cite: Kuglitsch, M.: ITU/WMO/UNEP Focus Group on AI for Natural Disaster Management: Introduction and call for participation, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12791, https://doi.org/10.5194/egusphere-egu21-12791, 2021.

12:05–12:07 | EGU21-13873 | ECS
Thomas Chen

Natural disasters ravage the world's cities, valleys, and shores on a monthly basis. Having precise and efficient mechanisms for assessing infrastructure damage is essential to channel resources and minimize the loss of life. Using a dataset that includes labeled pre- and post-disaster satellite imagery, the xBD dataset, we train multiple convolutional neural networks to assess building damage on a per-building basis. In order to investigate how best to classify building damage, we present a highly interpretable deep-learning methodology that seeks to explicitly convey the most useful information required to train an accurate classification model. We also examine which loss functions best optimize these models. Our findings include that ordinal cross-entropy loss is the best-performing loss function and that including the type of disaster that caused the damage, in combination with pre- and post-disaster images, best predicts the level of damage. We also make progress on qualitative representations, through gradient class-activation maps, of which parts of the images the model uses to predict damage levels. Our research seeks to contribute computationally to addressing this ongoing and growing humanitarian crisis, heightened by climate change. Specifically, it advances the study of more interpretable machine learning models, which were lacking in previous literature and are important for the understanding not only of research scientists but also of operators of such technologies in underserved regions.
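One common way to write an ordinal cross-entropy-style loss is sketched below (the exact formulation used in this work may differ): a damage grade is encoded as cumulative binary targets so that confusing adjacent grades is penalized less than confusing distant ones. The number of classes and the toy batch are placeholders.

```python
# Cumulative-target ordinal cross-entropy sketch (PyTorch).
import torch
import torch.nn.functional as F

def ordinal_targets(labels, num_classes):
    """Encode grade k as [1]*k + [0]*(num_classes-1-k)."""
    thresholds = torch.arange(num_classes - 1).unsqueeze(0)   # (1, K-1)
    return (labels.unsqueeze(1) > thresholds).float()         # (batch, K-1)

def ordinal_cross_entropy(logits, labels, num_classes=4):
    """logits: (batch, K-1) raw outputs; labels: (batch,) integer grades."""
    return F.binary_cross_entropy_with_logits(logits, ordinal_targets(labels, num_classes))

# toy usage: 4 damage levels (e.g., no damage ... destroyed), batch of 3 buildings
logits = torch.randn(3, 3)
labels = torch.tensor([0, 2, 3])
print(ordinal_cross_entropy(logits, labels).item())
```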

How to cite: Chen, T.: Interpretability in Convolutional Neural Networks for Building Damage Classification in Satellite Imagery, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13873, https://doi.org/10.5194/egusphere-egu21-13873, 2021.

12:07–12:09 | EGU21-14091 | ECS
Individual Sick Fir Tree (Abies mariesii) Identification in Insect Infested Forests by Means of UAV Images and Deep Learning
(withdrawn)
Ha Trang Nguyen, Maximo Larry Lopez Caceres, Koma Moritake, Sarah Kentsch, Hase Shu, and Yago Diez
12:09–12:30