AutoVal: A framework for scientific validation of flood catastrophe models
A. Massam, D. Burns, O. Jordan, B. Nix, N. O'Malley, P. Oldham, B. Sahu, and K. Vasiljeva
JBA Risk Management Limited, Skipton, United Kingdom (ashleigh.massam@jbarisk.com)
Catastrophe models are complex numerical models that simulate extreme events to estimate the economic cost of natural disasters. They are usually developed by model providers and adopted by clients in the (re)insurance, finance, and other sectors. Before adopting a model, the model user typically undertakes an evaluation process that is resource-intensive and challenging, particularly for non-experts. This process is not standardised across the industry, so model users must establish their own approach from scratch. Much of effective evaluation is repetitive and consumes time and resources, and this effort is duplicated across organisations and teams; it would be better spent on exploratory testing that advances knowledge and understanding of the models. By automating the repetitive part of model evaluation, data visualisations and results can be reproduced quickly. This allows more frequent assessment, leading to an improved understanding of the limitations of catastrophe models and increased confidence in the insights gained from using them.
We present AutoVal, a catastrophe model evaluation tool that automates standard loss validation tests. The premise of AutoVal is simple: third-party data are transformed at the point of use into benchmark expectations, which are then assessed against catastrophe model outputs. In our work to date, third-party insurance claims data and published estimates of event losses have been used to calculate average annual losses and loss exceedance curves, and to map spatial and temporal distributions of loss for comparison with modelled estimates. Further work is ongoing to expand AutoVal's capability to evaluate components within the natural catastrophe model itself, including vulnerability functions and hazard maps.
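To illustrate the kind of benchmark comparison described above, the following minimal sketch computes an average annual loss and an empirical loss exceedance curve from a modelled year-loss table and compares the result against a benchmark value. The function names, column names, and numbers are hypothetical and for illustration only; they do not represent AutoVal's actual interface or any real model output.

```python
import numpy as np
import pandas as pd


def average_annual_loss(year_loss: pd.DataFrame, n_years: int) -> float:
    """Average annual loss: total simulated loss divided by number of simulated years."""
    return year_loss["loss"].sum() / n_years


def exceedance_curve(year_loss: pd.DataFrame, n_years: int) -> pd.DataFrame:
    """Empirical aggregate loss exceedance curve from simulated annual losses."""
    annual = (
        year_loss.groupby("year")["loss"]
        .sum()
        .reindex(range(1, n_years + 1), fill_value=0.0)  # years with no events contribute zero loss
    )
    losses = np.sort(annual.to_numpy())[::-1]              # losses in descending order
    exceedance_prob = np.arange(1, n_years + 1) / n_years  # empirical annual exceedance probability
    return pd.DataFrame({"loss": losses, "exceedance_probability": exceedance_prob})


# Hypothetical modelled year-loss table: one row per simulated event loss (illustrative values only)
modelled = pd.DataFrame({
    "year": [1, 1, 3, 7, 9],
    "loss": [1.2e6, 0.4e6, 2.5e6, 0.9e6, 5.1e6],
})
n_years = 10

aal = average_annual_loss(modelled, n_years)
curve = exceedance_curve(modelled, n_years)

# Benchmark expectation derived from third-party claims data (illustrative value only)
benchmark_aal = 1.0e6
print(f"Modelled AAL: {aal:,.0f} | Benchmark AAL: {benchmark_aal:,.0f} | Ratio: {aal / benchmark_aal:.2f}")
```

In practice the benchmark side of such a comparison would be derived from claims data or published event-loss estimates rather than a single fixed number, and the comparison would be repeated automatically whenever the model or its configuration changes.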
AutoVal can aid model users from both industry and academic backgrounds through the application of standard, repeatable tests. It assists with the effective review of model configurations across a range of perils or scenarios, allowing decision makers to better understand model sensitivity and behaviour. AutoVal also has the potential to remove repetition from model evaluation, allowing more frequent assessment and faster feedback loops between developers and users. In this presentation, we will share our progress so far in designing and automating the evaluation of our catastrophe models, including – but not limited to – standardised schemas for benchmark data, validation exercises to assess modelled estimates of loss, and new approaches to interpreting model sensitivity.
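As a rough sketch of what a standardised benchmark schema could look like, the record below captures the minimum information needed to compare a third-party loss observation with modelled output. The field names, types, and values are assumptions for illustration and do not represent AutoVal's published schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class BenchmarkLossRecord:
    """One benchmark observation against which modelled losses can be compared.

    Field names and units are illustrative only.
    """
    source: str        # e.g. insurer claims extract or published event-loss report
    peril: str         # e.g. "inland flood"
    region: str        # spatial unit the loss refers to (country, postcode sector, ...)
    event_start: date
    event_end: date
    loss: float        # reported loss in the stated currency
    currency: str
    exposure_basis: Optional[str] = None  # e.g. "residential buildings only", if known


# Illustrative record with placeholder values
record = BenchmarkLossRecord(
    source="published industry estimate",
    peril="inland flood",
    region="XX",
    event_start=date(2020, 1, 1),
    event_end=date(2020, 1, 3),
    loss=1.0e8,
    currency="GBP",
)
```

A common record structure of this kind is what allows the same validation tests to be rerun against different benchmark datasets without bespoke data handling each time.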
To realise the potential of AutoVal, we invite colleagues from the risk management sector to discuss our ongoing work towards establishing a set of benchmark tests that can complement industry-wide progress towards consistent and common standards.
How to cite: Massam, A., Burns, D., Jordan, O., Nix, B., O'Malley, N., Oldham, P., Sahu, B., and Vasiljeva, K.: AutoVal: A framework for scientific validation of flood catastrophe models, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15783, https://doi.org/10.5194/egusphere-egu24-15783, 2024.