EGU23-11696
https://doi.org/10.5194/egusphere-egu23-11696
EGU General Assembly 2023
© Author(s) 2023. This work is distributed under
the Creative Commons Attribution 4.0 License.

A New Strategy for Training Deep Learning Ensembles

Tobias Schanz¹,² and David Greenberg¹
  • ¹ Helmholtz-Zentrum Hereon, Model-driven machine learning, Germany (tobias.machnitzki@hereon.de)
  • ² International Max-Planck-Research School on Earth System Modeling

Quantifying the error of predictions in Earth system models is just as important as the quality of the predictions themselves. While machine learning methods improve by the day at emulating weather and climate forecasting systems, they are rarely used operationally. Two reasons for this are poor handling of extreme events and a lack of uncertainty quantification. The poor handling of extreme events can mainly be attributed to loss functions that emphasize accurate prediction of mean outcomes. Since extreme events are infrequent in climate and weather applications, capturing them accurately is not a natural consequence of minimizing such a loss. Uncertainty quantification for numerical weather prediction usually proceeds by creating an ensemble of predictions. The machine learning domain has adopted this to some extent, creating machine learning ensembles with multiple architectures trained on the same data, or the same architecture trained on altered datasets. Nevertheless, few approaches currently exist for tuning a deep learning ensemble.
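As a concrete illustration of the "same architecture trained on altered datasets" strategy mentioned above, a bootstrap ensemble can be sketched as follows. This is a minimal sketch, not the authors' method: a linear least-squares model stands in for a neural network, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(x, y):
    # least-squares fit of y ~ a*x + b, a stand-in for training a network
    A = np.stack([x, np.ones_like(x)], axis=1)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def bootstrap_ensemble(x, y, n_members=10):
    # each ensemble member is trained on a bootstrap resample of the data,
    # i.e. "the same architecture trained on altered datasets"
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(x), size=len(x))
        members.append(fit_linear(x[idx], y[idx]))
    return members

def ensemble_predict(members, x):
    A = np.stack([x, np.ones_like(x)], axis=1)
    preds = np.stack([A @ m for m in members])    # (n_members, n_points)
    # ensemble mean as the forecast, member spread as an uncertainty estimate
    return preds.mean(axis=0), preds.std(axis=0)
```

The spread across members gives a (crude) uncertainty estimate, but nothing in this construction is explicitly tuned, which is the gap the abstract's approach addresses.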

We introduce a new approach using a generative neural network, similar to those employed in adversarial learning, but we replace the discriminator with a new loss function. This gives us control over the statistical properties the generator should learn and greatly increases the stability of the training process. By generating a prediction ensemble during training, we can tune ensemble properties such as variance or skewness in addition to the mean. We demonstrate early results of this approach on simple 1D experiments, showing the advantage over classically trained neural networks. In particular, we highlight the task of predicting extremes and the added value of ensemble predictions. Additionally, we demonstrate predictions of a Lorenz-96 system to show skill in forecasting chaotic systems.
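The abstract does not specify the loss function that replaces the discriminator. Purely as an illustration of tuning ensemble properties "such as variance or skewness in addition to the mean", one could imagine a moment-matching penalty that compares empirical ensemble statistics to reference values. The function names, the choice of moments, and the weighting scheme below are all assumptions, not the authors' loss.

```python
import numpy as np

def ensemble_moments(ens, axis=0):
    # empirical mean, variance and skewness along the ensemble axis
    mean = ens.mean(axis=axis)
    var = ens.var(axis=axis)
    std = np.sqrt(var) + 1e-8  # guard against division by zero
    skew = ((ens - mean) ** 3).mean(axis=axis) / std**3
    return mean, var, skew

def moment_loss(ens, ref_mean, ref_var, ref_skew, w=(1.0, 1.0, 1.0)):
    # hypothetical penalty: squared mismatch of the first three
    # ensemble moments against reference statistics
    m, v, s = ensemble_moments(ens)
    return (w[0] * np.mean((m - ref_mean) ** 2)
            + w[1] * np.mean((v - ref_var) ** 2)
            + w[2] * np.mean((s - ref_skew) ** 2))
```

Unlike an adversarial discriminator, such a loss is a fixed, differentiable target, which is consistent with the claimed gain in training stability.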

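The Lorenz-96 system used as a benchmark above is a standard chaotic toy model with cyclic variables x_i obeying dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F. A minimal sketch of its dynamics, assuming the usual forcing F = 8 and a fourth-order Runge-Kutta integrator (function names are hypothetical):

```python
import numpy as np

F = 8.0  # standard forcing value that yields chaotic dynamics

def lorenz96_rhs(x):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, indices cyclic
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt):
    # one classical fourth-order Runge-Kutta step
    k1 = lorenz96_rhs(x)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2)
    k4 = lorenz96_rhs(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def trajectory(x0, n_steps, dt=0.01):
    # integrate forward, returning all intermediate states
    xs = [x0]
    for _ in range(n_steps):
        xs.append(rk4_step(xs[-1], dt))
    return np.stack(xs)
```

Because nearby initial conditions diverge quickly in this system, it is a common testbed for probabilistic and ensemble forecasting skill.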
How to cite: Schanz, T. and Greenberg, D.: A New Strategy for Training Deep Learning Ensembles, EGU General Assembly 2023, Vienna, Austria, 24–28 Apr 2023, EGU23-11696, https://doi.org/10.5194/egusphere-egu23-11696, 2023.