NH9.3
Next generation technologies and applications for disaster risk modelling and management
Co-organized by ESSI2
Convener: Rui Figueiredo (ECS) | Co-conveners: Kai Schröter, Carmine Galasso, Mario Lloyd Virgilio Martina, Xavier Romão, Markus Enenkel (ECS), Clement Atzberger, Rahel Diro
vPICO presentations | Mon, 26 Apr, 13:30–15:00 (CEST)

vPICO presentations: Mon, 26 Apr

Chairpersons: Rui Figueiredo, Xavier Romão, Kai Schröter
13:30–13:35
Understanding and modelling disaster risk
13:35–13:37 | EGU21-8574
Christian Geiß, Patrick Aravena Pelizari, Peter Priesmeier, Angélica Rocio Soto Calderon, Elisabeth Schoepfer, Michael Langbein, Torsten Riedlinger, Hernán Santa María, Juan Camilo Gómez Zapata, Massimiliano Pittore, and Hannes Taubenböck

Exposure describes elements which are imperiled by natural hazards and susceptible to damage. The associated vulnerability characterizes the likelihood of experiencing damage for a given level of hazard intensity. Frequently, the compilation of exposure information is the costliest component (in terms of time and labor) in risk assessment. Existing data sets and models often describe exposure in an aggregated manner, e.g., by relying on statistical/census data for given administrative entities. Nowadays, earth observation techniques allow spatially continuous information to be collected for large geographic areas at high geometric and temporal resolution. In parallel, modern data interpretation tools based on Artificial Intelligence concepts enable the extraction of thematic information from such data with high accuracy and detail. Consequently, we exploit measurements from the earth observation missions TanDEM-X and Sentinel-2, which collect data on a global scale, to characterize the built environment in terms of fundamental morphologic properties, namely built-up density and height. Subsequently, we use this information to constrain existing exposure data in a spatial disaggregation approach. We compare different methods for disaggregation and evaluate how different resolution properties of the earth observation data affect the risk assessment result. Results are presented for the city of Santiago de Chile, Chile, which is prone to natural hazards such as earthquakes. We present loss estimations and the corresponding sensitivity with respect to the resolution properties of the exposure data used in the model. Loss estimations vary substantially, and aggregated exposure information underestimates losses in our scenarios. As such, this study underlines the benefits of deploying modern earth observation technologies for refined exposure estimation and related loss estimation.
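To illustrate the disaggregation step described above, the following minimal Python sketch allocates an aggregated exposure total over the pixels of an administrative unit using built-up density and height as dasymetric weights. The weighting scheme, function name and toy numbers are illustrative assumptions, not the authors' exact method.

    import numpy as np

    def disaggregate_exposure(total_value, built_density, built_height):
        """Distribute an aggregated exposure total over the pixels of one admin unit,
        proportionally to built-up volume (density x height) per pixel."""
        weights = built_density * built_height            # proxy for built-up volume
        if weights.sum() == 0:
            return np.zeros_like(weights)                 # nothing built up: nothing to allocate
        return total_value * weights / weights.sum()

    # toy example: 1000 exposed units spread over a 3x3-pixel admin unit
    density = np.array([[0.1, 0.5, 0.0],
                        [0.4, 0.9, 0.2],
                        [0.0, 0.3, 0.1]])
    height = np.array([[3.0, 12.0, 0.0],
                       [9.0, 21.0, 6.0],
                       [0.0, 6.0, 3.0]])
    print(disaggregate_exposure(1000.0, density, height).round(1))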

How to cite: Geiß, C., Aravena Pelizari, P., Priesmeier, P., Rocio Soto Calderon, A., Schoepfer, E., Langbein, M., Riedlinger, T., Santa María, H., Gómez Zapata, J. C., Pittore, M., and Taubenböck, H.: Earth Observation Techniques for Spatial Disaggregation of Exposure Data, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8574, https://doi.org/10.5194/egusphere-egu21-8574, 2021.

13:37–13:39 | EGU21-9903 | Highlight
Patrick Aravena Pelizari, Christian Geiß, Elisabeth Schoepfer, Torsten Riedlinger, Paula Aguirre, Hernán Santa María, Yvonne Merino Peña, Juan Camilo Gómez Zapata, Massimiliano Pittore, and Hannes Taubenböck

Knowledge of the key structural characteristics of exposed buildings is crucial for accurate risk modeling with regard to natural hazards. In risk assessment, this information is used to interlink exposed buildings with specific representative vulnerability models and is thus a prerequisite for implementing sound risk models. The acquisition of such data by conventional building surveys is usually highly expensive in terms of labor, time, and money. Institutional databases such as census or tax assessor data provide alternative sources of information. Such data, however, are often inappropriate, out-of-date, or not available. Today, the large-area availability of systematically collected street-level data due to global initiatives such as Google Street View, among others, offers new possibilities for the collection of in-situ data. At the same time, developments in machine learning and computer vision – in deep learning in particular – show high accuracy in solving perceptual tasks in the image domain. Building on this, we explore the potential of an automated and thus efficient collection of vulnerability-related building characteristics. To this end, we developed a workflow in which the inference of building characteristics (e.g., the seismic building structural type, the material of the lateral load-resisting system, or the building height) from geotagged street-level imagery is tasked to a custom-trained Deep Convolutional Neural Network. The approach is applied and evaluated for the earthquake-prone Chilean capital Santiago de Chile. Experimental results are presented and show high accuracy in the derivation of the addressed target variables. This emphasizes the potential of the proposed methodology to contribute to the large-area collection of in-situ information on exposed buildings.
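As a hedged illustration of the kind of workflow described, the sketch below fine-tunes a pretrained image classifier on labelled street-level photos; the ResNet-18 backbone, the assumed folder layout and the training settings are placeholders, not the authors' custom-trained network.

    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # assumed layout: street_level/train/<structural_type>/*.jpg, one folder per class
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    train_set = datasets.ImageFolder("street_level/train", transform=preprocess)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    # pretrained backbone with a new head for the building-type classes
    model = models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):                      # short fine-tuning run
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()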

How to cite: Aravena Pelizari, P., Geiß, C., Schoepfer, E., Riedlinger, T., Aguirre, P., Santa María, H., Merino Peña, Y., Gómez Zapata, J. C., Pittore, M., and Taubenböck, H.: Street-Level Imagery and Deep Learning for Characterization of Exposed Buildings, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9903, https://doi.org/10.5194/egusphere-egu21-9903, 2021.

13:39–13:41 | EGU21-8396 | ECS
Tobias Sieg and Annegret Thieken

The management of risks arising from natural hazards requires a reliable estimation of the hazards' impact on exposed objects. The data sets used for this estimation have improved in recent years, reflecting an increasing amount of detail with regard to spatial, temporal or process information. Yet, the influence of the choice of data and the degree of detail on the estimated risk is rarely assessed.

We estimated flood damage to private households and companies for a flood event in 2013 in Germany using two different approaches to describe the hazard, the exposed objects and their vulnerability towards the hazard, with varying levels of detail. One flood map is based on flood maps computed by the European Joint Research Centre that do not include embankments, while the other flood map was derived specifically for this particular flood event. Exposed elements are mapped using the land-use-based data set BEAM (Basic European Asset Map) and with an object-based approach using OpenStreetMap data. The vulnerability is described by ordinary stage-damage functions and by tree-based models including additional damage-driving variables. The estimations are validated with reported damage numbers per federal state and compared to each other to quantify the influence of the different data sets at various spatial scales.
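The contrast between a univariate stage-damage function and a tree-based model with additional damage-driving variables can be sketched as follows; the variable names, the damage curve and the synthetic data are illustrative assumptions, not the actual models or data of this study.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def stage_damage_function(depth_m, asset_value):
        """Simple stage-damage curve: damage ratio grows with the square root of depth, capped at 1."""
        return np.clip(np.sqrt(depth_m / 5.0), 0.0, 1.0) * asset_value

    # synthetic training data with extra damage drivers (purely illustrative)
    rng = np.random.default_rng(0)
    n = 500
    depth = rng.uniform(0, 5, n)            # water depth [m]
    duration = rng.uniform(0, 240, n)       # inundation duration [h]
    precaution = rng.integers(0, 2, n)      # precautionary measures taken (0/1)
    value = rng.uniform(5e4, 5e5, n)        # asset value [EUR]
    damage = stage_damage_function(depth, value) * (1 + 0.001 * duration) * (1 - 0.3 * precaution)

    X = np.column_stack([depth, duration, precaution, value])
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, damage)
    print(rf.predict([[2.0, 48.0, 1, 2.0e5]]))   # predicted damage for one building [EUR]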

The results suggest that a stronger focus on exposed elements could improve the reliability of impact estimations considerably. The individual assessment of the influence of the different components on the overall risk points out promising next steps for further investigations.

How to cite: Sieg, T. and Thieken, A.: The Influence of Input Data on Flood Risk Estimates, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8396, https://doi.org/10.5194/egusphere-egu21-8396, 2021.

13:41–13:43 | EGU21-12856 | ECS
Femke Mulder, Mariantonietta Morga, and Keith Jones

A large part of Europe is at risk of earthquake disaster events. In the last decade, earthquakes in Europe led to direct economic losses of approximately €29 billion as well as close to 19,000 fatalities. As a result of public awareness campaigns, business organisations are increasingly cognisant of earthquake risks. However, to date there is limited support available to help them systematically manage these risks. Our research supports decision makers based at European business organisations in their efforts to prepare for and respond to earthquakes. We look at how earthquake early warning (EEW) systems and earthquake operational forecasting (OEF) systems can best support business disaster risk management (DRM). We focus on EEW- and OEF-based decision support systems for preparedness and rapid response, as well as their integration with business continuity planning. Our article is based on an extensive literature review of the state of the art in OEF and EEW systems. We have validated and built on these insights through participatory action research (PAR) with potential business users of EEW and OEF systems in Europe. There is great variability in the ways in which different European businesses currently manage earthquake risk. Our research has given us insights into business users' needs and expectations of EEW and OEF systems. We have harmonised and integrated these insights towards the development of a common earthquake decision support protocol for business organisations. This protocol covers business considerations that extend beyond the prevention of fatalities and direct economic loss to long-term organisational resilience. Combining insights from facilities management, organisation science and engineering, we present various considerations for the development of business-centric EEW and OEF decision support systems. We outline barriers to the development and uptake of such systems and describe what opportunities they present for different stakeholders.

How to cite: Mulder, F., Morga, M., and Jones, K.: Insights for Business Centric Earthquake Early Warning and Operational Forecasting Systems, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12856, https://doi.org/10.5194/egusphere-egu21-12856, 2021.

13:43–13:45 | EGU21-8435 | Highlight
Stefano Bagli, Paolo Mazzoli, Francesca Renzi, Valerio Luzzi, Simone Persiano, Attilio Castellarin, Jaroslav Mysiak, Arthur Essenfelder, Francesca Larosa, Stefania Pasetti, Marco Folegani, Kai Schröter, Sophie Ullrich, and Luis Mediero

Floods are a global hazard that may have adverse impacts on a wide range of social, economic, and environmental processes. Nowadays our cities flood more frequently, due to more severe weather events but also due to anthropogenic pressures like soil sealing, urban growth and, in some areas, land subsidence. The frequency and intensity of extreme floods are expected to increase further in many places due to climate change.

The characterisation of flood events and of their multi-hazard nature is a fundamental step in order to maximise the resilience of cities to potential flood losses and damages. 

SaferPLACES employs innovative climate, hydrological and raster-based flood hazard and economic modelling techniques to assess pluvial, fluvial and coastal flood hazards and risks in urban environments under current and future climate scenarios.

The SaferPLACES platform provides a cost-effective and user-friendly cloud-based solution for flood hazard and risk mapping. Moreover, SaferPLACES supports multiple stakeholders in designing and assessing mitigation measures such as flood barriers, water tanks, green-blue solutions and building-specific damage mitigation actions.

The intelligence behind the SaferPLACES platform integrates innovative fast DEM-based flood hazard assessment methods and Bayesian damage models, which are able to provide results in short computation times by exploiting the power of cloud computing.
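As a rough, hedged illustration of what a raster-based flood hazard method does at its simplest, the sketch below compares a water level against a small digital elevation model ("bathtub" screening); the actual SaferPLACES algorithms are considerably more sophisticated, so this only conveys the raster idea.

    import numpy as np

    def bathtub_flood_depth(dem, water_level):
        """Raster flood screening: depth = water level minus terrain elevation, clipped at zero."""
        return np.clip(water_level - dem, 0.0, None)

    dem = np.array([[2.0, 2.5, 3.0],
                    [1.5, 1.8, 2.2],
                    [1.0, 1.2, 1.6]])     # terrain elevation [m a.s.l.]
    depth = bathtub_flood_depth(dem, water_level=2.0)
    print(depth)                          # inundation depth [m] per cell
    print(int((depth > 0).sum()), "cells flooded")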

A beta version of the platform is available at platform.saferplaces.co and active for four pilot cities: Rimini and Milan in Italy, Pamplona in Spain and Cologne in Germany.

SaferPLACES (saferplaces.co) is a research project funded by EIT Climate-KIC (www.climate-kic.org).

How to cite: Bagli, S., Mazzoli, P., Renzi, F., Luzzi, V., Persiano, S., Castellarin, A., Mysiak, J., Essenfelder, A., Larosa, F., Pasetti, S., Folegani, M., Schröter, K., Ullrich, S., and Mediero, L.: SaferPLACES platform: a cloud-based climate service addressing urban flooding hazard and risk., EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8435, https://doi.org/10.5194/egusphere-egu21-8435, 2021.

13:45–13:47 | EGU21-3042
Nadja Veigel, Heidi Kreibich, and Andrea Cominola

Human behavior has been shown to have a significant impact on future flood risk. The state-of-the-art research regarding human behavior before, during and after flood events is predominantly based on site- and event-specific survey data or psychological theories. In recent years, the availability of large-scale databases has provided an empirical basis for dynamical approaches to model the impacts of heterogeneous individual and societal behavioral patterns on flood risk. The US Federal Emergency Management Agency has recently released household-scale data on national flood insurance policies in force since 2009, covering the whole US. Providing access to flood insurance is an effective strategy to increase resilience by enabling inhabitants of flood-prone areas and their property to recover quickly from flood events. In this work, we analyze flood insurance purchase, considered as a proxy of flood awareness and preparedness, using data mining techniques, spatially correlating and modeling insurance ratios and socioeconomic data in official floodplains. Recent or regular exposure to flood events has been shown to be another important factor influencing flood risk perception, in addition to socio-economic variables. Therefore, the effect of flood experience on flood insurance uptake is analyzed. This study ultimately contributes a data-driven approach to identify the main determinants and dynamics of flood insurance purchase across different states and social backgrounds. Understanding the factors driving people's choices regarding flood insurance purchase is the first step towards improving the National Flood Insurance Program's strategies and addressing societal inequalities in disaster risk management.
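A minimal sketch of relating insurance uptake to socioeconomic predictors is shown below; the predictor names, the toy values and the choice of a plain linear regression are assumptions for illustration only, not the study's data-mining pipeline.

    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # assumed table: one row per census tract inside the official floodplain (invented values)
    df = pd.DataFrame({
        "policies_per_household": [0.12, 0.45, 0.30, 0.05, 0.60],
        "median_income_usd":      [42e3, 88e3, 61e3, 35e3, 95e3],
        "share_owner_occupied":   [0.55, 0.80, 0.70, 0.40, 0.85],
        "floods_last_10_years":   [1, 3, 2, 0, 4],
    })
    X = df[["median_income_usd", "share_owner_occupied", "floods_last_10_years"]]
    y = df["policies_per_household"]

    model = LinearRegression().fit(X, y)
    # coefficients indicate which factors align with higher insurance uptake
    print(dict(zip(X.columns, model.coef_.round(6))))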

How to cite: Veigel, N., Kreibich, H., and Cominola, A.: Mining Flood Insurance Big Data to Reveal the Determinants of Humans' Flood Resilience, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3042, https://doi.org/10.5194/egusphere-egu21-3042, 2021.

Social media analysis, web data and crowdsourcing
13:47–13:49 | EGU21-8621 | Highlight
Jens Kersten, Malin Kopitzsch, Jan Bongard, and Friederike Klan

Gathering, analyzing and disseminating up-to-date information related to incidents and disasters is key to disaster management and relief. Satellite imagery, geo-information, and in-situ data are the main information sources used to support decision making. However, limitations in data timeliness as well as in spatial and temporal resolution lead to systematic information gaps in current well-established satellite-based workflows. Citizen observations spread through social media channels, like Twitter, as well as freely available webdata, like WikiData or the GDELT database, are promising complementary sources of relevant information that might be utilized to fill these information gaps and to support in-situ data acquisition. Practical examples of this are impact assessments based on social media eyewitness reports, and the utilization of this information for the early tasking of satellite or drone-based image acquisitions.

The great potential of, for instance, social media data analysis in crisis response has been investigated and demonstrated in various related research works. However, the barriers to utilizing webdata and appropriate information extraction methods for decision support in real-world scenarios are still high, for instance due to information overload, varying surrounding conditions, or issues related to limited field work infrastructures, trustworthiness, and legal aspects.

Within the current DLR research project "Data4Human", demand driven data services for humanitarian aid are developed. Among others, one project goal is to investigate the practical benefit of augmenting existing workflows of the involved partners (German Red Cross, World Food Programme, and Humanitarian Open Street Map) with social media (Twitter) and real-time global event database (GDELT) data. In this contribution, the general concepts, ideas and corresponding methods for webdata analysis are presented. State-of-the-art deep learning models are utilized to filter, classify and cluster the data to automatically identify potentially crisis-related data, to assess impacts, and to summarize and characterize the course of events, respectively. We present first practical findings and analysis results for the 2019 cyclones Idai and Kenneth.
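A minimal stand-in for the tweet-filtering step is sketched below, using a bag-of-words classifier rather than the deep learning models described in the project; the example texts and labels are invented for illustration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # tiny invented training set: 1 = crisis-related, 0 = not related
    texts = [
        "Bridge collapsed after the cyclone, people trapped near the river",
        "Flooding in the city centre, urgent need for boats and shelter",
        "Great match last night, what a goal",
        "New phone released today, battery life looks amazing",
    ]
    labels = [1, 1, 0, 0]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)
    # relevance prediction for a new incoming tweet
    print(clf.predict(["Roads blocked and houses damaged by the storm"]))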

How to cite: Kersten, J., Kopitzsch, M., Bongard, J., and Klan, F.: Combining Remote Sensing with Webdata and Machine Learning to Support Humanitarian Relief Work, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8621, https://doi.org/10.5194/egusphere-egu21-8621, 2021.

13:49–13:51 | EGU21-3606
Young-Hee Ryu and Seung-Ki Min

Severe air pollution is hazardous to human health, and long-term exposure to air pollution degrades not only human health but also quality of life. In recent years, public concern and awareness of air quality have been greatly raised in South Korea, and this is somewhat at odds with the observed levels of particulate matter with diameters less than 10 μm (PM10). The observed PM10 levels cannot explain the elevated levels of public concern specifically after 2013–2014, because the average PM10 was much higher in the past (prior to 2013) and shows a decreasing tendency over the recent decades in South Korea. This study utilizes big data from internet search engines (internet search volume data from Google and NAVER) to understand how people perceive air quality differently from the level of observed PM10 and what influences public perception of air quality. A new index, the air quality perception index (AQPI), is proposed in this study, and it is assumed that internet search volume data with the keyword "air quality" are representative of this index. An empirical model that simulates the AQPI is developed by employing the decay theory of forgetting and is trained with PM10, visibility, and internet search volume data. The results show that the memory decay exponent and the accumulation of past memory traces, which represent the weighted sum of past perceived air quality, play key roles in explaining the public's perception of air quality. A severe haze event with an extremely long duration that occurred in 2013–2014 is found to have triggered the increase in public awareness of air quality, acting as a turning point. Before the turning point, the AQPI is more influenced by sensory information (visibility) due to the low awareness level, but after the turning point it is more influenced by PM10, and people slowly forget about air quality. The retrospective AQPI analysis assuming a low level of awareness confirms that perceived air quality is indeed worst in 2013–2014. In other words, the high level of awareness after experiencing the record-long severe haze event in 2013–2014 makes people remember longer and become more sensitive to the level of pollutants, thus explaining the increased public concern in recent years. Our results suggest the promising potential of social data for a better understanding of public perception and awareness of other natural and/or man-made hazards.
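The core of the decay-theory idea can be written as an exponentially weighted sum of past perceived air quality. The following sketch illustrates that mechanism only; the decay rate, the toy perception series and the way PM10 and visibility would feed into it are illustrative assumptions, not the paper's fitted model.

    import numpy as np

    def perception_index(perceived, decay=0.2):
        """Accumulated memory trace: weighted sum of past perceived air quality,
        with weights decaying exponentially with the age of the memory."""
        aqpi = np.zeros(len(perceived))
        for t in range(len(perceived)):
            ages = t - np.arange(t + 1)              # 0 = today, 1 = yesterday, ...
            weights = np.exp(-decay * ages)          # older memories count less
            aqpi[t] = np.sum(weights * perceived[: t + 1])
        return aqpi

    # toy series with one severe, long haze episode in the middle
    perceived = np.array([1, 1, 1, 5, 6, 6, 5, 1, 1, 1], dtype=float)
    print(perception_index(perceived).round(2))      # the elevated index persists after the episode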

How to cite: Ryu, Y.-H. and Min, S.-K.: Innovative utilization of internet search volume data to understand public awareness and perception of air quality, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3606, https://doi.org/10.5194/egusphere-egu21-3606, 2021.

13:51–13:53 | EGU21-8637 | ECS
Jan Bongard and Jens Kersten

The Twitter Stream API offers the possibility to develop (near) real-time methods and applications to detect and monitor impacts of crisis events and their changes over time. As demonstrated by various related research, the content of individual tweets or even entire thematic trends can be utilized to support disaster management, fill information gaps and augment results of satellite-based workflows as well as to extend and improve disaster management databases. Considering the sheer volume of incoming tweets, it is necessary to automatically identify the small number of crisis-relevant tweets and present them in a manageable way.

Current approaches for identifying crisis-related content focus on the use of supervised models that decide on the relevance of each tweet individually. Although supervised models can efficiently process the high number of incoming tweets, they have to be extensively pre-trained. Furthermore, the models do not capture the history of already processed messages. During a crisis, various and unique sub-events can occur that are likely not covered by the respective supervised model and its training data. Unsupervised learning offers both the ability to take tweets from the past into account and a higher adaptive capability, which in turn allows customization to the specific needs of different disasters. From a practical point of view, drawbacks of unsupervised methods are the higher computational costs and the potential need for user interaction to interpret the results.

In order to enhance the limited generalization capabilities of pre-trained models as well as to speed up and guide unsupervised learning, we propose a combination of both concepts. A successive clustering of incoming tweets allows the stream data to be semantically aggregated, whereas pre-trained models allow potentially crisis-relevant clusters to be identified. Besides the identification of potentially crisis-related content based on semantically aggregated clusters, this approach offers a sound foundation for visualizations and further related tasks, such as event detection and the extraction of detailed information about the temporal or spatial development of events.

Our work focuses on analyzing the entire freely available Twitter stream by combining an interval-based semantic clustering with a supervised machine learning model for identifying crisis-related messages. The stream is divided into intervals, e.g. of one hour, and each tweet is projected into a numerical vector using state-of-the-art sentence embeddings. The embeddings are then grouped by a parametric Chinese Restaurant Process clustering. At the end of each interval, a pre-trained feed-forward neural network decides whether a cluster contains crisis-related tweets. With a further developed concept of cluster chains and central centroids, crisis-related clusters of different intervals can be linked in a topic- and even subtopic-related manner.
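A simplified sketch of this interval pipeline is given below: tweets are embedded, grouped by a simple online clustering (a threshold-based stand-in for the parametric Chinese Restaurant Process step), and the resulting clusters would then be scored by a pre-trained relevance classifier. The TF-IDF "embeddings", the similarity threshold and the example tweets are placeholders, not the actual components of the described system.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    def online_cluster(vectors, threshold=0.6):
        """Assign each tweet to the most similar existing cluster, or open a new one
        (a simple threshold-based stand-in for CRP-style clustering)."""
        centroids, assignments = [], []
        for v in vectors:
            sims = [float(v @ c) for c in centroids]
            if sims and max(sims) >= threshold:
                assignments.append(int(np.argmax(sims)))
            else:
                centroids.append(v)                   # open a new cluster
                assignments.append(len(centroids) - 1)
        return assignments

    # one stream interval of incoming tweets (invented examples)
    tweets = [
        "flood water rising in the old town",
        "flood water in the old town streets",
        "concert tickets on sale tomorrow",
    ]
    # placeholder sentence representations: TF-IDF vectors instead of pre-trained embeddings
    vectors = TfidfVectorizer().fit_transform(tweets).toarray()
    print(online_cluster(vectors))    # the two flood tweets end up in the same cluster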

Initial results show that the hybrid approach can significantly improve the results of pre-trained supervised methods. This is especially true for categories in which the supervised model could not be sufficiently pre-trained due to missing labels. In addition, the semantic clustering of tweets offers a flexible and customizable procedure, resulting in a practical summary of topic-specific stream content.

How to cite: Bongard, J. and Kersten, J.: Combining Supervised and Unsupervised Learning to Detect and Semantically Aggregate Crisis-Related Twitter Content, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8637, https://doi.org/10.5194/egusphere-egu21-8637, 2021.

13:53–13:55 | EGU21-15618
Christian Arnhardt, Ramesh Guntha, Gargi Singh, Vish K.R Viswanathan, Praful Rao, Gokul Halan, Maneesha Vinodini Ramesh, and Bruce Malamud

This research describes the development and pro-forma of a landslide tracker methodology (structure of questions and photos) to support local reporting of landslides in India, thus enhancing modelling of susceptibility to future landslides and India Landslide Early Warning Systems. This methodology aids in the collection of timely and representative information about landslide events using local people before such information is lost due to human clearance works or natural processes (further erosion, vegetation cover). In the framework of the UK NERC/FCDO funded LANDSLIP project ‘Landslide multi-hazard risk assessment, preparedness and early warning in South Asia', a collaboration of government, academic and NGO scientists/practitioners from India and the UK co-designed a questionnaire, in both paper and mobile app proforma, called the ‘Landslide Tracker'. The Landslide Tracker was developed as a tool for gathering landslide information from different levels of local users (e.g., local officials, NGOs, students) to enhance landslide inventories in the test sites of Darjeeling and Nilgiris, India. Different users supporting data capture within the project have different levels of understanding and knowledge about landslides. The Tracker was developed with three user levels to reflect this variation in landslide expertise. Level 1 is available in paper format, and Levels 1 to 3 in a freely available Google Play app developed by Amrita University, “Landslide Tracker”. Level 1 of the landslide tracker represents all users whose expertise level is not known or is assumed to be limited; it comprises the most basic landslide information. This group of non-specialists represents the majority of people capturing data within each study area. Information submitted by this user group, due to limited knowledge and understanding of landslides in a geological context, might be assumed to have the highest degree of uncertainty and potentially the greatest amount of false information. The questions for this group use a simplified lexicon, covering (i) location, date and time, (ii) pictures of landslide material, (iii) landslide type, and finally (iv) generalised impact information. Level 2 represents more specialist users with a more advanced understanding of landslides, either from their background training/proficiency or because they have undergone training. In general, these users are asked the same questions as in Level 1, but a more technical vocabulary is used and more detailed information is requested, such as the size of landslides. Level 3 is for trained landslide experts. They are asked a wide range of landslide questions, reflecting internationally recognised landslide glossaries and definitions, and based on the current methodology used by the Geological Survey of India. With the help of two NGOs (Keystone and Save the Hills) and the Geological Survey of India, the developed proforma (paper and mobile app) have undergone field testing. Feedback from this phase of development was essential for improving and updating the pro-forma. Efforts by the partners during the most recent monsoon have resulted in over 500 landslide records being collected in the two test sites, either via the app or in paper format.

How to cite: Arnhardt, C., Guntha, R., Singh, G., K.R Viswanathan, V., Rao, P., Halan, G., Ramesh, M. V., and Malamud, B.: A Landslide Tracker Methodology to Support Local Reporting of Landslides in India, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-15618, https://doi.org/10.5194/egusphere-egu21-15618, 2021.

Risk financing and index insurance
13:55–13:57 | EGU21-12549 | ECS
Mariette Vreugdenhil, Isabella Pfeil, Luca Brocca, Stefania Camici, Markus Enenkel, and Wolfgang Wagner

Accurate and reliable early warning systems can support anticipatory disaster risk financing, which can be more cost-effective than post-disaster emergency response. One of the challenges in anticipatory disaster risk financing is basis risk, as a result of data and model uncertainty. The increasing availability of Earth Observation (EO) data provides the opportunity to develop shadow models or include different variables in early warning systems and weather index insurance. Of particular interest is the early indication of climate impacts on agricultural production. Traditionally, crop and yield prediction models use meteorological data such as precipitation and temperature, or optical indicators such as the Normalized Difference Vegetation Index (NDVI). In recent years, soil moisture has gained popularity for yield prediction, as it controls the water availability for plants.

Here, we present the use of different satellite-based rainfall and soil moisture products, in combination with NDVI, to develop a yield deficiency indicator over two water-limited regions. An analysis for Senegal and Morocco is performed at the national level using yield data of four major crops from the Food and Agriculture Organization of the United Nations. Freely available EO datasets for rainfall, soil moisture, root-zone soil moisture and NDVI were used. All datasets were spatially resampled to a 0.1° grid, temporally aggregated to monthly anomalies, and finally detrended and standardized. First, regression analysis with yearly yield was performed per EO dataset for single months. For this, EO datasets were aggregated over areas where the specific crop was grown. Secondly, based on these results, multiple linear regression was performed using the months and variables with the highest explanatory power. The multiple linear regression was used to provide spatially varying yield predictions by trading time for space. The spatial predictions were validated using sub-national yield data from Senegal.
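The second step can be pictured as a small multiple linear regression on standardized monthly anomalies; the chosen predictors (early-season soil moisture and late-season NDVI), the values and the yield anomalies below are invented for illustration and do not reproduce the study's regressions.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # one row per year: [early-season soil moisture anomaly, late-season NDVI anomaly]
    X = np.array([
        [-1.2, -0.8],
        [ 0.3,  0.5],
        [ 1.0,  1.1],
        [-0.5, -0.2],
        [ 0.6,  0.9],
        [-1.5, -1.3],
    ])
    yield_anom = np.array([-1.0, 0.4, 1.2, -0.3, 0.7, -1.4])   # standardized yield anomalies

    model = LinearRegression().fit(X, yield_anom)
    print("R^2:", round(model.score(X, yield_anom), 2))
    # early-season prediction: soil moisture anomaly known, NDVI not yet available (set to 0)
    print("predicted yield anomaly:", model.predict([[-1.0, 0.0]]).round(2))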

The analysis demonstrates the added value of satellite soil moisture for early yield prediction. In both Senegal and Morocco, rainfall and soil moisture showed a high predictive skill early in the growing season: negative early-season soil moisture anomalies often lead to low yield. NDVI showed more predictive power later in the growing season. For example, in Morocco soil moisture at the start of the season can already explain 56% of the variability in yield. NDVI can explain 80% of the yield variability, but only at the end of the growing season. Combining anomalies of the optimal months based on the different variables in multiple linear regression improved yield prediction. Again, including NDVI led to higher predictive power, at the cost of early warning. This analysis shows very clearly that soil moisture can be a valuable tool for anticipatory drought risk financing and early warning systems.

How to cite: Vreugdenhil, M., Pfeil, I., Brocca, L., Camici, S., Enenkel, M., and Wagner, W.: Satellite soil moisture for yield prediction in water limited regions, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12549, https://doi.org/10.5194/egusphere-egu21-12549, 2021.

13:57–13:59 | EGU21-9534 | ECS
Mehdi H. Afshar, Timothy Foster, Thomas P. Higginbottom, Ben Parkes, Koen Hufkens, Sanjay Mansabdar, Francisco Ceballos, and Berber Kramer

Extreme weather events cause substantial damage to the livelihoods of smallholder farmers globally and are projected to become more frequent in the coming decades as a result of climate change. Index insurance can theoretically help farmers to adapt to and mitigate the risks posed by extreme weather events, providing a financial safety net in the event of crop damage or harvest failure. However, uptake of index insurance in practice has lagged far behind expectations. A key reason is that many existing index insurance products suffer from high levels of basis risk, where insurance payouts correlate poorly with actual crop losses due to deficiencies in the underlying index relationship, contract structure or data used to trigger insurance payouts to farmers.

In this study, we analyse to what extent the use of crop simulation models and crop phenology monitoring from satellite remote sensing can reduce basis risk in index insurance. Our approach uses a calibrated biophysical process-based crop model (APSIM) to generate a large synthetic crop yield training dataset in order to overcome the lack of detailed in-situ observational yield datasets – a common limitation and source of uncertainty in traditional index insurance product design. We use this synthetic yield dataset to train a simple statistical model of crop yields as a function of meteorological and crop growth conditions that can be quantified using open-access earth observation imagery, radiative transfer models, and gridded weather products. Our approach thus provides a scalable tool for yield estimation in smallholder environments, which leverages multiple complementary sources of data that to date have largely been used in isolation in the design and implementation of index insurance.
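The two-stage idea can be sketched as follows: a crop model (replaced here by a toy response function) produces synthetic yields across many weather scenarios, and a simple statistical model is then trained on those synthetic yields so it can later be driven by observable predictors. All numbers and the response function are invented; APSIM itself is not emulated here.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)

    # stage 1: synthetic training data from a stand-in for the process-based crop model
    n = 2000
    rain_anom = rng.normal(0, 1, n)           # seasonal rainfall anomaly
    heat_days = rng.poisson(5, n)             # days above a heat-stress threshold
    synthetic_yield = 3.0 + 0.6 * rain_anom - 0.08 * heat_days + rng.normal(0, 0.1, n)

    # stage 2: statistical model trained on the synthetic yields
    X = np.column_stack([rain_anom, heat_days])
    emulator = LinearRegression().fit(X, synthetic_yield)

    # apply the trained model to predictors observable from weather/EO products for a new season
    print(emulator.predict([[-1.5, 9]]).round(2))   # dry, hot season -> low predicted yield [t/ha]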

We apply our yield estimation framework to a case study of rice production in Odisha state in eastern India, an area where agriculture is exposed to significant production risks from monsoonal rainfall variability. Our results demonstrate that yield estimation accuracy improves when using meteorological and crop growth data in combination as predictors, and when accounting for the timing of critical crop development stages using satellite phenological monitoring. Validating against observed yield data from crop cutting experiments, our framework is able to explain around 54% of the variance in rice yields at the village cluster (Gram Panchayat) level, which is the key spatial unit for area-yield index insurance products covering millions of smallholder farmers in India. Crucially, our modelling approach significantly outperforms vegetation index-based models that were trained directly on the observed yield data, highlighting the added value obtained from the use of crop simulation models in combination with other data sources commonly used in index design.

How to cite: H. Afshar, M., Foster, T., Higginbottom, T. P., Parkes, B., Hufkens, K., Mansabdar, S., Ceballos, F., and Kramer, B.: Overcoming basis risk in agricultural index insurance using crop simulation modeling and satellite crop phenology, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9534, https://doi.org/10.5194/egusphere-egu21-9534, 2021.

13:59–14:01 | EGU21-12930 | ECS
Luigi Cesarini, Rui Figueiredo, Beatrice Monteleone, and Mario Martina

A steady increase in the frequency and severity of extreme climate events has been observed in recent years, causing losses amounting to billions of dollars. Floods and droughts are responsible for almost half of those losses, severely affecting people's livelihoods in the form of damaged property and goods and even loss of life. Weather index insurance is an innovative risk transfer tool for disasters induced by natural hazards. In this type of insurance, payouts are triggered when an index calculated from one or multiple environmental variables exceeds a predefined threshold. Thus, contrary to traditional insurance, it does not require costly and time-consuming post-event loss assessments. Its ease of application makes it an ideal solution for developing countries, where fast payouts in the wake of a catastrophic event would guarantee the survival of an economic sector, for example by providing the monetary resources necessary for farmers to withstand a prolonged period of extreme temperatures. The main obstacle to a wider application of this type of insurance mechanism stems from the so-called basis risk, which arises when a loss event takes place but a payout is not issued, or vice versa.

This study proposes and tests the application of machine learning algorithms for the identification of extreme flood and drought events in the context of weather index insurance, with the aim of reducing basis risk. Neural networks and support vector machines, widely adopted for classification problems, are employed, exploring thousands of possible configurations based on combinations of different model parameters. The models were developed and tested in the Dominican Republic context, leveraging datasets from multiple sources with low latency covering the period between 2000 and 2019. Using rainfall (GSMaP, CMORPH, CHIRPS, CCS, PERSIANN and IMERG) and soil moisture (ERA5) data, the machine learning algorithms provided a strong improvement over logistic regression models, which were used as a baseline for both hazards. Furthermore, increasing the amount of information provided during model training proved beneficial, improving classification accuracy and confirming the ability of these algorithms to exploit big data. The results highlight the potential of machine learning for application within index insurance products.
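A minimal sketch of one such classifier is given below: a support vector machine trained to flag months with an extreme event from satellite-derived anomalies. The features, the synthetic labels and the model settings are invented for illustration and are not the study's configurations.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)

    # one row per month: [rainfall anomaly, soil moisture anomaly]; label 1 = reported event
    n = 300
    X = rng.normal(0, 1, (n, 2))
    y = ((X[:, 0] < -1.0) & (X[:, 1] < -0.5)).astype(int)   # toy "truth": dry and depleted soils

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
    clf.fit(X, y)

    # would these two months trigger a payout?
    print(clf.predict([[-1.8, -1.2], [0.5, 0.3]]))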

How to cite: Cesarini, L., Figueiredo, R., Monteleone, B., and Martina, M.: Near real-time identification of extreme events for weather index insurance using machine learning algorithms, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12930, https://doi.org/10.5194/egusphere-egu21-12930, 2021.

14:01–14:03 | EGU21-10953 | ECS
Michaela Seewald, Ralf Ryter, Roel Van Hoolst, Laurent Tits, Ian Shynkarenko, and Roman Shynkarenko

Agriculture provides essential social benefits: the supply of food and commodities, economic development and employment. However, agriculture is under growing pressure arising from soil degradation, water scarcity, natural hazards and weather extremes due to changes in climate patterns. Agricultural insurance is gaining an increasing role as a risk management tool. Given this, the insurance sector places significant emphasis on identifying, gathering and aggregating historical and current regional and localised data, which could be sourced from remote sensing and earth observation (EO) datasets.

To find out more about the needs and challenges of the agro-insurance sector and how these might be addressed with current and future EO capabilities, the ESA Earth Observation Best Practice for Agro-Insurance (EO4I) project brings together the EO and agro-insurance sectors. The latter is represented by a champion user group comprising primary insurers as well as reinsurers. The very close and regular contact with those champion users is an outstanding characteristic of this project.

An analysis of potential customers' requirements revealed a list of more than 60 challenges and needs of the sector, such as assistance in damage assessments, identification of potential risk effects, estimation of the affected area and the extent of damage, or monitoring of crop development throughout the season. These challenges were translated into geo-information requirements for a better analysis of currently available EO capabilities. As seen so far, business processes of the insurance industry can be supported by numerous remote sensing products and services.

Nevertheless, there is a major gap between the perceived potential and the actual application of available EO capabilities by the agro-insurance sector. The bottleneck is the lack of awareness, understanding and trust in EO products and services for the agro-insurance sector on both sides: the insurers and their customers. The remote sensing community also often focuses on the possibilities and appropriateness of certain techniques, without considering the impact on customer value or on the productivity and profitability of the industry.

Therefore, EO methods, products and services need to be adaptable to the agro-insurance sector's business needs and fit into its daily workflows. The project now builds on the results of this initial requirements analysis to connect EO products with insurance solutions and move from best practice to practice. To demonstrate the potential of EO and cutting-edge technology for the agricultural insurance sector, customised use cases to support loss assessment and monitoring based on artificial intelligence will be developed for selected areas and tested with the available in-situ data.

How to cite: Seewald, M., Ryter, R., Van Hoolst, R., Tits, L., Shynkarenko, I., and Shynkarenko, R.: Demonstrating the Potential of EO for the Agro-Insurance Sector, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10953, https://doi.org/10.5194/egusphere-egu21-10953, 2021.

14:03–14:05 | EGU21-7613 | ECS
Markus Enenkel, Daniel Osgood, and Rahel Diro

Several drought risk financing projects have been developed to strengthen the disaster resilience of the world's vulnerable communities, countries and regions. Satellite-derived information plays a vital role in characterizing historical and current drought impacts. Various independent earth observation datasets can be used to cross-validate each other, strengthening the disaster narrative and reducing basis risk. However, satellite data require additional socioeconomic information, which often shows critical gaps, to bridge the gap between hazards, vulnerabilities and impacts. While satellite-derived information is considered to be objective, there are various projects with payout trigger mechanisms that rely on subjective assessments, for instance expressed as a declaration of emergency. The next generation of risk financing solutions for extreme weather and climate events will have to merge these two perspectives. The World Bank's Next Generation Drought Index (NGDI) project might be the first attempt to link a convergence-of-evidence approach applied to satellite-derived insurance triggers with a guided integration of local expertise. The project aims to 1) avoid the perception of more complex technical methods as analytical black boxes, 2) benchmark different datasets, model outputs and index parameters, and 3) lower the entry barrier for novel risk financing solutions by establishing local risk ownership. This study focuses on the first results of the NGDI project for Senegal.
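A convergence-of-evidence trigger can be sketched, in heavily simplified form, as a vote among several independent satellite-derived indices; the indices, percentile threshold and majority rule below are illustrative assumptions and do not describe the NGDI design.

    import numpy as np

    def convergence_trigger(indices, percentile=20, min_agreement=2):
        """Flag a drought payout when at least `min_agreement` independent indices fall
        below their own historical percentile threshold in the current season."""
        votes = 0
        for history, current in indices:
            threshold = np.percentile(history, percentile)
            votes += current < threshold
        return votes >= min_agreement

    # invented 20-year histories and current-season values for three indices
    rng = np.random.default_rng(3)
    rainfall = (rng.normal(300, 60, 20), 180.0)          # seasonal rainfall [mm]
    soil_moisture = (rng.normal(0.25, 0.05, 20), 0.15)   # surface soil moisture [m3/m3]
    ndvi = (rng.normal(0.45, 0.08, 20), 0.40)            # peak-season NDVI

    print(convergence_trigger([rainfall, soil_moisture, ndvi]))   # True when the evidence converges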

How to cite: Enenkel, M., Osgood, D., and Diro, R.: The importance of co-design in satellite-derived drought risk financing, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-7613, https://doi.org/10.5194/egusphere-egu21-7613, 2021.

14:05–14:07 | EGU21-3996
Clément Michoud, Jean-Philippe Malet, Dalia Kirschbaum, Thierry Oppikofer, Robert Emberson, Fabrizio Pacini, Pascal Horton, Anne Puissant, Paolo Mazzanti, Mélanie Pateau, Abder Oulidi, Abderrahim Chaffai, and Lahsen Ait Brahim

The frequency and impact of disasters are rising at the global scale, calling for effective disaster risk management and innovative risk financing solutions. Disaster Risk Financing (DRF) can increase the ability of national and local governments, homeowners, businesses, agricultural producers, and low-income populations to respond more quickly and resiliently to disasters by strengthening public financial management and promoting market-based disaster risk financing. For landslide events, the usage of DRF products is not yet extensive, mainly due to challenges in capturing the appropriate destabilization factors and triggers, as well as in forecasting the physical properties of a landslide event (such as its type, location, size, number of people affected, and/or exposed infrastructure). The availability and quality of satellite EO-derived data on the rainfall that triggers landslides (Global Precipitation Measurement mission / GPM) and of observations of the landslides themselves (Copernicus Sentinel radar and multispectral sensors, very high resolution -VHR- optical sensors) have greatly improved in recent years. At the same time, effective models are being refined and support near-real-time landslide hazard assessment (e.g. Landslide Hazard Assessment for Situational Awareness / LHASA; Flow path assessment of gravitational hazards at a Regional scale / FLOW-R).

The objective of this work is to present the prototype platform LANDSLIDE HAZARD INFORMATION SYSTEM (LHIS), which aims to support landslide DRF priorities using Earth Observation data and models. The platform is designed to anticipate, forecast and respond to incipient landslide events in near-real time (NRT) by providing estimates of parameters suitable for parametric insurance calculations, including landslide inventories, susceptibility and hazard maps, and potential damage and cost analyses. The LHIS prototype is accessible on the GEP / Geohazards Exploitation Platform, allowing easy access, processing and visualization of EO-derived products. The prototype consists of three modular components: 1) a Landslide Detection component to create Landslide Inventories, 2) a Landslide Hazard Assessment component using global and national geospatial datasets, leading to Landslide Susceptibility Maps, Scenario-based Hazard Maps and NRT Rainfall-based Hazard Maps, and 3) a Landslide Impact Assessment component combining landslide hazard maps with population and infrastructure datasets to derive Landslide Exposure Maps and a Landslide Impact Index. The landslide detection module is based on the analysis of time series of optical and SAR data; the landslide hazard and impact assessment modules are based on the LHASA, FLOW-R and PDI models.
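In the spirit of an NRT rainfall-based hazard product, the sketch below combines an antecedent rainfall index with a static susceptibility value to produce a hazard class; the decay factor, percentile threshold and class rules are illustrative assumptions, not the LHIS or LHASA implementation.

    import numpy as np

    def antecedent_rainfall_index(daily_rain, decay=0.8):
        """Exponentially weighted sum of recent daily rainfall (most recent day weighted highest)."""
        weights = decay ** np.arange(len(daily_rain))[::-1]
        return float(np.sum(weights * np.asarray(daily_rain)))

    def nowcast(daily_rain, ari_history, susceptibility, percentile=95):
        """Return a hazard class for one pixel: 0 = none, 1 = moderate, 2 = high."""
        ari = antecedent_rainfall_index(daily_rain)
        if ari <= np.percentile(ari_history, percentile):   # antecedent rainfall not unusual
            return 0
        return 2 if susceptibility >= 0.7 else 1            # escalate only on susceptible terrain

    # invented example: last 7 days of rain [mm], a historical ARI sample, one susceptible pixel
    ari_history = np.random.default_rng(4).gamma(2.0, 10.0, 1000)
    print(nowcast([5, 0, 20, 35, 60, 80, 45], ari_history, susceptibility=0.8))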

The information system is being developed and tested in Morocco, in collaboration with the Solidarity Fund against Catastrophic Events (FSEC) and the World Bank, for two contrasting use cases in the Rif area (North Morocco) and the Safi area (Central Morocco), which are exposed to various landslide situations occurring in different environmental and climatic contexts.

How to cite: Michoud, C., Malet, J.-P., Kirschbaum, D., Oppikofer, T., Emberson, R., Pacini, F., Horton, P., Puissant, A., Mazzanti, P., Pateau, M., Oulidi, A., Chaffai, A., and Ait Brahim, L.: Landslide Hazard Information System for Landslide Disaster Risk Financing: Earth Observation and Modelling Products for Near-Real-Time Assessment, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3996, https://doi.org/10.5194/egusphere-egu21-3996, 2021.

14:07–15:00