EGU2020-10037
https://doi.org/10.5194/egusphere-egu2020-10037
EGU General Assembly 2020
© Author(s) 2020. This work is distributed under
the Creative Commons Attribution 4.0 License.

Improving the robustness of flood catastrophe models in insurance through academia-industry collaboration

Valentina Noacco1,2, Francesca Pianosi1,2, Thorsten Wagener1,2, Kirsty Styles3, and Stephen Hutchings3
  • 1Bristol University, Water and Environment Research group, Civil Engineering, Bristol, United Kingdom of Great Britain and Northern Ireland (valentina.noacco@bristol.ac.uk)
  • 2Cabot Institute, University of Bristol, Bristol, BS8 1UJ, UK
  • 3JBA Risk Management, 1 Broughton Park Old Lane North, Broughton, Skipton, BD23 3FD, UK

To quantify risk from natural hazards and ensure a robust decision-making process in the insurance industry, uncertainties in the mathematical models that underpin decisions need to be captured efficiently and robustly. The complexity and sheer scale of the mathematical modelling often make a comprehensive, transparent and easily communicable understanding of the uncertainties very difficult. Models predicting flood hazard and risk show high levels of uncertainty in their predictions, due both to data limitations and to model structural uncertainty. Moreover, these uncertainties are expected to increase with climate change, especially at higher warming levels.

Global Sensitivity Analysis (GSA) provides a structured approach to quantify and compare the relative importance of parameter, data and structural uncertainty. GSA has been implemented successfully in tools such as the Sensitivity Analysis For Everybody (SAFE) toolbox, which is currently used by more than 2000 researchers worldwide. However, tailored tools, workflows and case studies are needed to demonstrate the benefits of GSA to practitioners and to accelerate its uptake by the insurance industry.
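
To make the workflow concrete, here is a minimal variance-based GSA sketch in Python. It uses the open-source SALib library as an illustrative stand-in for the SAFE toolbox named above, and the parameter names, ranges and toy loss model are hypothetical placeholders, not inputs from the study described here:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical uncertain inputs of a loss model (names and ranges invented
# for illustration; they are not the parameters analysed in this study).
problem = {
    "num_vars": 3,
    "names": ["buffer_size", "damage_ratio_scale", "n_curve_steps"],
    "bounds": [[10.0, 100.0], [0.8, 1.2], [5.0, 50.0]],
}

# Saltelli sampling generates N * (2*D + 2) input combinations.
X = saltelli.sample(problem, 1024)

def toy_loss_model(x):
    """Stand-in for one run of a catastrophe model returning a loss."""
    buffer_size, scale, steps = x
    return scale * np.log1p(buffer_size) + 0.01 * steps

Y = np.apply_along_axis(toy_loss_model, 1, X)

# First-order (S1) and total-order (ST) Sobol' indices rank the inputs
# by their contribution to the variance of the output loss.
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: S1 = {s1:.2f}, ST = {st:.2f}")
```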

One such case study has been the collaboration between the University of Bristol and JBA Risk Management on JBA’s new Global Flood Model, whose technology and flexibility have allowed a catastrophe model to be tested in ways not possible in the past. JBA has gained great insight into the sensitivity of modelled losses to uncertainties in the model datasets and analysis options. This has helped to explore the key sensitivities of the results to the assumptions made: for example, visualising how the distribution of modelled losses varies by return period, and exploring which parameters have the biggest impact on loss over the part of the exceedance-probability (EP) curve of interest. This information is essential for insurance companies to form their view of risk and to empower model users to communicate uncertainties adequately to decision-makers.
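
As an illustration of the quantities involved, the sketch below derives an empirical exceedance-probability curve and the average annual loss (AAL) from simulated annual losses; the synthetic lognormal losses are purely illustrative, not output from the model discussed here:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical annual losses, one per simulated year; in a catastrophe
# model these would be aggregated from event-level losses.
annual_losses = rng.lognormal(mean=15.0, sigma=1.0, size=10_000)

# Empirical EP curve: sort losses in descending order and assign
# rank-based exceedance probabilities (Weibull plotting positions).
losses = np.sort(annual_losses)[::-1]
n = losses.size
exceedance_prob = np.arange(1, n + 1) / (n + 1)
return_period = 1.0 / exceedance_prob

# Loss closest to the 1-in-200-year return period, a level often used
# for solvency capital in insurance.
idx = np.argmin(np.abs(return_period - 200.0))
print(f"Approx. 1-in-200-year loss: {losses[idx]:,.0f}")

# The average annual loss (AAL) is the mean over simulated years.
print(f"AAL: {annual_losses.mean():,.0f}")
```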

How to cite: Noacco, V., Pianosi, F., Wagener, T., Styles, K., and Hutchings, S.: Improving the robustness of flood catastrophe models in insurance through academia-industry collaboration, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10037, https://doi.org/10.5194/egusphere-egu2020-10037, 2020.

Displays

Comments on the display

AC: Author Comment | CC: Community Comment

displays version 1 – uploaded on 13 Apr 2020
  • CC1: Comment on EGU2020-10037, Oliver Wing, 07 May 2020

    Hi Valentina,

    Got only a little chance to chat in the session. I think this is a really important piece of work to try and get under the skin of often-black-box CAT models. I've got just a couple of further questions, but perhaps when the current craziness blows over we could chat in person up at the uni :)

    1) Could you expand on what buffer size is? I'm not familiar with this parameter.

    2) Specifically, how do you quantify suitable uncertainty bounds for vulnerability? We did some work on this earlier this year (doi:10.1038/s41467-020-15264-2) and, I gotta say, I'm surprised at how narrow the distribution of AALs is in light of this.

    3) Following on from questions in the chat: What plans do you have to extend this to other parameters and inputs? I know that quantifying the sensitivity bounds for the different parameters is subjective & tricky, but I can't see much about the sensitivity of the physical flood model. Especially for a global-scale model, I'd expect those loss uncertainty bounds to widen considerably when we think about DEM errors, discharge estimation inaccuracies, river channel schematization, etc. V hard to analyse of course, but v important!

    Thanks again,

    Ollie

    • AC1: Reply to CC1, Valentina Noacco, 07 May 2020

      Hi Ollie,

      Thanks! Yes, this work in collaboration with JBA has been very interesting and has allowed us to explore cat models in ways not usually possible with traditional approaches. Happy to chat in person when back at uni, but in the meantime a few answers below:

      1) The buffer size is the radius of a circular buffer around each exposure location, within which a flood depth is extracted from the flood map for damage calculations.
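
      To illustrate the mechanics, here is a toy sketch (hypothetical names and a cell-based radius, not JBA's implementation):

      ```python
      import numpy as np

      def depth_in_buffer(depth_grid, row, col, radius_cells, stat=np.nanmax):
          """Extract a flood depth for a location by applying a statistic
          (here the maximum) over a circular buffer of grid cells."""
          rows, cols = np.ogrid[:depth_grid.shape[0], :depth_grid.shape[1]]
          mask = (rows - row) ** 2 + (cols - col) ** 2 <= radius_cells ** 2
          return stat(depth_grid[mask])

      # Toy 5x5 flood map (depths in metres); location at cell (2, 2).
      grid = np.zeros((5, 5))
      grid[1, 3] = 0.9
      print(depth_in_buffer(grid, 2, 2, radius_cells=1))  # 0.0: wet cell outside buffer
      print(depth_in_buffer(grid, 2, 2, radius_cells=2))  # 0.9: wet cell captured
      ```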

      2) For the vulnerability curves, given that they are discrete inputs, we had to define a set of realistic vulnerability curves as our set of viable alternatives. We therefore varied both the damage ratio (higher damage ratios --> higher losses) and the number of steps into which the vulnerability curves are discretised (higher number of steps --> more refined curves). Our variation of the vulnerability curves increased the AAL by up to 33%.
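
      As a toy illustration of varying these two quantities (invented numbers, not our actual curves):

      ```python
      import numpy as np

      def step_vulnerability(depth, max_depth=3.0, n_steps=10, ratio_scale=1.0):
          """Hypothetical step-wise vulnerability curve: the damage ratio rises
          with depth in n_steps discrete steps; ratio_scale shifts the whole
          curve up or down, and the ratio is capped at 1."""
          step = np.floor(np.clip(depth, 0.0, max_depth) / max_depth * n_steps)
          return np.clip(ratio_scale * step / n_steps, 0.0, 1.0)

      depths = np.array([0.2, 0.8, 1.5, 2.7])  # sampled flood depths (m)
      value = 250_000.0                         # exposure value per property

      # Vary the damage-ratio scaling and the discretisation of the curve.
      for scale, steps in [(1.0, 10), (1.2, 10), (1.0, 40)]:
          losses = value * step_vulnerability(depths, n_steps=steps, ratio_scale=scale)
          print(f"scale={scale}, steps={steps}: mean loss = {losses.mean():,.0f}")
      ```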

      3) Yes, as you spotted, here we did not consider the full range of uncertainties that contribute to the uncertainty of the losses; for example, we did not consider any input from the hazard component. As I mentioned in the frantic live chat this morning, this was a first case study; I suspect they may want to explore further inputs in the future, but that’s one for JBA.

      Cheers,

      Valentina

      • CC2: Reply to AC1, Oliver Wing, 07 May 2020

        Thanks for the response Valentina – very interesting. Let's talk again soon.

        Cheers,

        Ollie