EGU General Assembly 2020
© Author(s) 2020. This work is distributed under
the Creative Commons Attribution 4.0 License.

Assessing the global risk of climate change to re/insurers using catastrophe models and hazard maps

Sarah Jones, Emma Raven, and Jane Toothill
  • JBA Risk Management, Skipton, United Kingdom

In 2018, worldwide natural catastrophe losses were estimated at around USD 155 billion, resulting in the fourth-highest insurance payout on sigma records, and in 2020 JBA Risk Management (JBA) estimates that 2 billion people will be at risk of inland flooding. By 2100, under a 1.5°C warming scenario, the cost of coastal flooding alone as a result of sea level rise could reach USD 10.2 trillion per year, assuming no further adaptation. It is therefore imperative to understand the impact climate change may have on global flood risk and insured losses in the future.

The re/insurance industry has an important role to play in providing financial resilience in a changing climate. Although integrating climate science into financial business remains in its infancy, modelling companies like JBA are increasingly developing new data and services to help assess the potential impact of climate change on insurance exposure.

We will discuss several approaches to incorporating climate change projections with flood risk data using examples from research collaborations and commercial projects. Our case studies will include: (1) building a national-scale climate change flood model through the application of projected changes in river flow, rainfall and sea level to the stochastic event set in the model, and (2) using Global Climate Model data to adjust hydrological inputs driving 2D hydraulic models to develop climate change flood hazard maps.
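The first approach can be illustrated with a minimal sketch: applying a projected change factor to the peak flows of a stochastic event set. This is purely illustrative, not JBA's actual implementation; the event data, field names, and the +10% uplift are hypothetical.

```python
# Illustrative sketch (not JBA's implementation): apply a projected
# climate change factor to the peak flows of a stochastic event set.

def apply_flow_uplift(event_set, uplift_factor):
    """Scale each event's peak river flow by a climate change factor."""
    return [
        {**event, "peak_flow_m3s": event["peak_flow_m3s"] * uplift_factor}
        for event in event_set
    ]

# Hypothetical baseline events
baseline_events = [
    {"event_id": 1, "peak_flow_m3s": 850.0},
    {"event_id": 2, "peak_flow_m3s": 1200.0},
]

# e.g. a hypothetical +10% projected change in peak river flows
adjusted = apply_flow_uplift(baseline_events, 1.10)
print(adjusted[0]["peak_flow_m3s"])  # ≈ 935.0
```

In practice the change factors would vary by region, return period, and climate scenario, and the adjusted event set would be re-run through the vulnerability and financial modules of the catastrophe model.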

These tools provide outputs to meet different needs, and the results may sometimes raise further questions. For example: how can an extreme climate scenario produce lower flood risk than a conservative one? Why might flood risk differ between adjacent postcodes? We will explore the challenges associated with interpreting these results and the potential implications for the re/insurance industry.

How to cite: Jones, S., Raven, E., and Toothill, J.: Assessing the global risk of climate change to re/insurers using catastrophe models and hazard maps, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5323, 2020.



Comments on the display

AC: Author Comment | CC: Community Comment

displays version 1 – uploaded on 01 May 2020
  • CC1: Comment on EGU2020-5323, Hamish Steptoe, 07 May 2020

    Hi Sarah - (1) Given that uncertainty is inherent in climate predictions, how do you intend to communicate uncertainty? And (2) how do you think the magnitude of the uncertainty compares to the size of the signal? Is there a risk that the uncertainty range becomes so big that it makes the signal useless? What implications does this have for communicating the risk (as per (1))?

    • AC1: Reply to Hamish: uncertainty questions, Sarah Jones, 07 May 2020

      Hi Hamish, good questions!

      (1) We can investigate the uncertainty, but it isn’t inherently communicated in the results, aside from providing, as standard, the standard deviation associated with the model's loss output, for example. I think uncertainty is a huge challenge in cat modelling to begin with, never mind adding in climate data! But Valentina et al.’s work on SAFE is helping to quantify uncertainty in the models.

      (2) Without investigating this in more detail it’s difficult to say! If the uncertainty range were to exceed the benefits associated with providing losses/hazard metrics that consider climate change, then it could make communication of the risk exceptionally tricky. However, within climate modelling the uncertainty is vast, and we could take a leaf out of the IPCC book and perhaps communicate the likelihood of occurrence in similar language (‘likely’, ‘almost certain’, etc.). This would also help align the re/insurance industry and the climate science community.
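The IPCC-style calibrated language mentioned above can be sketched as a simple threshold mapping. The thresholds follow the IPCC's published uncertainty guidance; the function itself is an illustration, not any operational JBA tooling.

```python
# Sketch: map a model-derived probability to the IPCC's calibrated
# likelihood language (thresholds per the IPCC uncertainty guidance).

def ipcc_likelihood(probability):
    """Translate a probability in [0, 1] into IPCC calibrated language."""
    if probability > 0.99:
        return "virtually certain"
    if probability > 0.90:
        return "very likely"
    if probability > 0.66:
        return "likely"
    if probability >= 0.33:
        return "about as likely as not"
    if probability >= 0.10:
        return "unlikely"
    if probability >= 0.01:
        return "very unlikely"
    return "exceptionally unlikely"

print(ipcc_likelihood(0.75))  # likely
```

Note that the IPCC ranges deliberately overlap (e.g. "very likely" is 90–100%), so a hard-threshold mapping like this is a simplification suited to reporting a single headline term.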

      Hope this helps, but let me know if you've any further questions, either here or via email.

  • CC2: Comment on EGU2020-5323, Oliver Wing, 07 May 2020

    Hi Sarah,

    Session was a bit of a rush but a lot of fun! Wondering if you have any thoughts on my query about how much of the modelled climate-changed losses up to 2040 we might already be seeing now (depending on whether or not the model boundary conditions were updated to 2020 climatic changes).

    Ollie Wing, Bristol (11:09) @Sarah: nice work, as expected! I'm curious about how you quantify the changes with respect to the present day. As you point out, we are already experiencing climate-affected flood losses. Does your baseline model reflect this: i.e., is it based solely on historical climate or is it 'climate-corrected' to 2020? If the former, how much of the change you see up to 2040 do you think we are already seeing in 2020?

    Thanks a lot,


    • AC2: Reply to Ollie: historical data & climate change modelling, Sarah Jones, 07 May 2020

      Hi Ollie,

      Yes, it's a shame it was a bit rushed and that we ran out of time! 

      Your question is a good one, and relevant to discussions in the industry at the moment. Our baseline models (hazard maps and catastrophe models) are developed using input data from observed gauges. We haven’t removed the anthropogenic climate signal from this data (but there is some research around this, see Schaller et al., 2016 and Kay et al., 2019 for some examples), and it’s certainly something we’re interested in (and would go towards answering your final question, which is a hot topic in the industry!).

      Hope this helps, but let me know if you've any further questions, either here or via email.

      • CC3: Reply to AC2, Oliver Wing, 07 May 2020

        Certainly interesting thinking about how (& how we deal with) history being a poor guide to current conditions – touching on stuff Richard Dixon often talks about. Thanks for the response, Sarah!



  • AC3: Response to Hattermann's question, Sarah Jones, 07 May 2020

    Hattermann@PIK (convener) (11:08) @Sarah: very nice work indeed. How much do you trust regional differences in trends? Did you apply climate model ensemble data? As I understand it, you varied the input to the hydraulic model based on GCM input, but without a catchment model in between?

    Thanks! I think we need to remember the uncertainty around climate data and cat modelling. There are ways we can look to see how trustworthy regional trends are (do they follow typical historical patterns?), but we haven’t assigned a confidence to it, per se.

    For the catastrophe model, we applied derived climate data from UKCP18 and the UK Climate Change Risk Assessment 2017, whereas for our hazard mapping approach, risQ used GCM data from CMIP5. We used the temperature and precipitation projections as input to our calibrated rainfall-runoff models, giving us estimated river flows. These river flow hydrographs are then run through our 2D hydraulic model.
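The chain described above (climate-adjusted rainfall → rainfall-runoff model → flow hydrograph → 2D hydraulic model) can be sketched very crudely with a single linear reservoir standing in for the calibrated rainfall-runoff model. All parameter values, the +15% precipitation uplift, and the runoff scheme itself are hypothetical, not JBA's or risQ's calibrated models.

```python
# Toy sketch of the modelling chain: climate-adjusted rainfall drives a
# simple rainfall-runoff model (one linear reservoir), producing a flow
# hydrograph that would feed a 2D hydraulic model's boundary conditions.
# Parameters and the runoff scheme are illustrative only.

def linear_reservoir_runoff(rainfall_mm, k=0.3, runoff_coeff=0.5):
    """Convert a daily rainfall series (mm) into a flow hydrograph.

    k            -- reservoir outflow constant per day (hypothetical)
    runoff_coeff -- fraction of rainfall becoming runoff (hypothetical)
    """
    storage = 0.0
    flows = []
    for rain in rainfall_mm:
        storage += runoff_coeff * rain   # effective rainfall enters storage
        outflow = k * storage            # linear reservoir release
        storage -= outflow
        flows.append(outflow)
    return flows

baseline_rain = [0, 10, 40, 5, 0, 0]               # mm/day, hypothetical event
adjusted_rain = [r * 1.15 for r in baseline_rain]  # e.g. +15% projected precip

hydrograph = linear_reservoir_runoff(adjusted_rain)
# `hydrograph` would then drive the 2D hydraulic model.
```

In the real workflow the rainfall-runoff models are calibrated against observed gauge data, and the GCM projections perturb their inputs rather than a fixed uplift factor.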

  • AC4: Response to Tracy Irvine's question, Sarah Jones, 07 May 2020

    Tracy Irvine, Oasis Hub (11:07) Hi Sarah, as an externally facing company, how are your customers responding to the climate change model as opposed to the historical model?

    We’ve experienced a shift from only considering the annual renewal cycle to climate risk being considered a big challenge in the coming years for the industry. Historical models are, of course, still important, not least because they can help us unlock the impact climate change may have already had on events. So far, we’ve received a positive response to our work!