Case studies in bias adjustment: addressing potential pitfalls through model comparison and evaluation using a new open-source Python package
- 1University of Reading, Meteorology, Reading, United Kingdom of Great Britain and Northern Ireland (f.r.spuler@pgr.reading.ac.uk)
- 2University of Exeter, Mathematics and Statistics, Exeter, United Kingdom of Great Britain and Northern Ireland
- 3European Centre for Medium-Range Weather Forecasts
Statistical bias adjustment is now common practice when using climate models for impact studies, applied prior to or in conjunction with downscaling methods. Widely used methodologies include CDFt (Vrac et al. 2016), ISIMIP3BASD (Lange 2019) and equidistant CDF matching (Li et al. 2010). Although common practice, recent work (Maraun et al. 2017) has identified fundamental issues with statistical bias adjustment. If multivariate aspects are not evaluated, improper use of bias adjustment can go undetected. Fundamental misspecifications of the climate model, such as a displacement of the large-scale circulation, cannot be corrected. Furthermore, results are sensitive to internal climate variability over the reference period (Bonnet et al. 2022). If applied, bias adjustment methods should therefore be evaluated carefully in their multivariate aspects and targeted to the use-case at hand.
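For orientation, these CDF-matching methods share the quantile-mapping idea. In a standard formulation (notation ours, not taken from any of the cited papers), a simulated value \(x\) is mapped through the model CDF and the inverse observational CDF, both estimated over the reference period:

\[
\hat{x} = F_{\mathrm{obs}}^{-1}\big(F_{\mathrm{mod}}(x)\big)
\]

CDFt and equidistant CDF matching extend this basic transfer function to account for changes in the model CDF between the reference period and the application period.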
However, good practice in the evaluation and application of bias adjustment methods is inhibited by what we frame as practical issues. Where they are released as software at all, published bias adjustment methods tend to ship as individual packages across different programming languages (mostly R and Python) that do not allow users to adapt aspects of the method, such as the fitted distribution, to their use-case. Existing open-source packages, such as ISIMIP3BASD or CDFt, often do not offer an evaluation framework that covers the multivariate (spatial, temporal, multi-variable) aspects necessary to detect misuse of methods, or user-specific impact metrics. Several of these issues apply similarly to downscaling.
To address some of these practical issues, we developed the open-source software package ibicus in collaboration with ECMWF (available on PyPI and published under the Apache 2.0 licence, with extensive documentation at https://ibicus.readthedocs.io/en/latest/index.html). The package implements eight peer-reviewed bias adjustment methods in a common framework. It also includes an extensive evaluation framework covering multivariate aspects as well as the ETCCDI climate indices. The package thereby makes better evaluation practice in bias adjustment more flexible and easier to adopt.
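As an illustration of the common framework, the following is a minimal usage sketch based on the pattern documented in the ibicus quickstart; the synthetic gamma-distributed arrays stand in for real data and are our own assumption:

```python
import numpy as np
from ibicus.debias import QuantileMapping

# Synthetic stand-ins for daily precipitation fields of shape
# (time, lat, lon): observations (obs), the climate model over the
# reference period (cm_hist), and the period to be adjusted (cm_future).
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 1.5, size=(1000, 4, 4))
cm_hist = rng.gamma(2.0, 2.0, size=(1000, 4, 4))
cm_future = rng.gamma(2.5, 2.0, size=(1000, 4, 4))

# Initialise a debiaser with documented defaults for precipitation
# ("pr") and apply it gridpoint by gridpoint; the other implemented
# methods expose the same interface.
debiaser = QuantileMapping.from_variable("pr")
cm_future_debiased = debiaser.apply(obs, cm_hist, cm_future)
```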
Our contribution presents three case studies using ibicus, highlighting a number of pitfalls in the use of bias adjustment for climate impact modelling, and shows possible ways to address them. Over northern Spain and Turkey, we investigate extreme precipitation indices and compound temperature-precipitation extremes, the modification of the climate change trend, and dry spell length as an example of a temporal index.
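To make the temporal index concrete, the sketch below shows one way a mean dry spell length could be computed from a daily precipitation series; the function and its 1 mm/day threshold are our own illustration, not the metric implementation used in the case studies:

```python
import numpy as np

def mean_dry_spell_length(pr, threshold=1.0):
    """Mean length in days of consecutive runs of daily precipitation
    below `threshold` (mm/day). Illustrative helper, not ibicus API."""
    dry = pr < threshold
    # Pad with False so that every dry spell has a defined start and end.
    padded = np.concatenate(([False], dry, [False]))
    starts = np.flatnonzero(~padded[:-1] & padded[1:])
    ends = np.flatnonzero(padded[:-1] & ~padded[1:])
    lengths = ends - starts
    return float(lengths.mean()) if lengths.size else 0.0
```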
We evaluate how bias adjustment adds to the ‘cascade of uncertainty’ and how this can be made transparent in the different use-cases. We also demonstrate how some of the fundamental issues that can arise when applying bias adjustment can be detected, and how the evaluation of spatial and temporal aspects such as dry spell length can be tailored to the use-case at hand to detect improper use of bias adjustment. Lastly, we demonstrate that the ‘best’ bias adjustment method may depend on the metric of interest, and that a user-centric design of comparison and evaluation methods is therefore necessary.
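Such a user-centric comparison can reuse the common interface sketched above, for example as follows (a sketch assuming, as above, that each debiaser class offers from_variable and apply; the scoring metric is the illustrative helper defined earlier):

```python
from ibicus.debias import ECDFM, LinearScaling, QuantileMapping

debiasers = {
    "Linear scaling": LinearScaling.from_variable("pr"),
    "Quantile mapping": QuantileMapping.from_variable("pr"),
    "ECDFM": ECDFM.from_variable("pr"),
}

# Score each method on the user-relevant metric at a single grid point;
# the ranking may change with the metric and location chosen.
for name, debiaser in debiasers.items():
    adjusted = debiaser.apply(obs, cm_hist, cm_future)
    print(name, mean_dry_spell_length(adjusted[:, 0, 0]))
```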
How to cite: Spuler, F., Wessel, J., Cagnazzo, C., and Comyn-Platt, E.: Case studies in bias adjustment: addressing potential pitfalls through model comparison and evaluation using a new open-source python package, EGU General Assembly 2023, Vienna, Austria, 24–28 Apr 2023, EGU23-14254, https://doi.org/10.5194/egusphere-egu23-14254, 2023.