EGU2020-11263
https://doi.org/10.5194/egusphere-egu2020-11263
EGU General Assembly 2020
© Author(s) 2020. This work is distributed under
the Creative Commons Attribution 4.0 License.

Deep reinforcement learning in World-Earth system models to discover sustainable management strategies

Felix Strnad1, Wolfram Barfuss2,3, Jonathan Donges3, and Jobst Heitzig1
  • 1Potsdam Institute for Climate Impact Research, FutureLab on Game Theory and Networks of Interacting Agents, Research Department 4: Complexity Science, Potsdam, Germany (strnad@pik-potsdam.de)
  • 2Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany
  • 3Potsdam Institute for Climate Impact Research, FutureLab on Earth Resilience in the Anthropocene, Research Department 1: Earth System Analysis, Potsdam, Germany

Identifying pathways that lead to robust mitigation of dangerous anthropogenic climate change is of particular interest not only to the scientific community but also to policy makers and the wider public.

Increasingly complex, non-linear World-Earth system models are used to describe the dynamics of the biophysical Earth system, the socio-economic and socio-cultural World of human societies, and their interactions. Identifying pathways towards a sustainable future in these models is a challenging and widely investigated task in climate research and broader Earth system science. The problem is especially difficult when both environmental limits and social foundations need to be taken into account.

In this work, we propose to combine recently developed machine learning techniques, namely deep reinforcement learning (DRL), with classical analysis of trajectories in the World-Earth system, thereby extending Earth system analysis by a new method. Based on the concept of the agent-environment interface, we develop a method in which a DRL agent acts and learns in a variety of manageable environment models of the Earth system in order to discover management strategies for sustainable development.
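As a minimal sketch of such an agent-environment interface, the snippet below implements a Gym-style environment with `reset` and `step` methods around a toy two-variable model. The state variables, dynamics, and safe-region thresholds are illustrative placeholders, not the models used in the underlying publication.

```python
import numpy as np

class StylizedEarthEnv:
    """Illustrative agent-environment interface for a stylized
    World-Earth model (names and dynamics are hypothetical)."""

    N_ACTIONS = 2  # 0 = default dynamics, 1 = apply a management option

    def __init__(self, dt=1.0):
        self.dt = dt
        self.reset()

    def reset(self):
        # state: (atmospheric carbon excess, economic output), normalized
        self.state = np.array([0.5, 0.5])
        return self.state.copy()

    def step(self, action):
        a, y = self.state
        # toy dynamics: management (action 1) damps carbon growth at an economic cost
        da = 0.05 * y - (0.1 if action == 1 else 0.02) * a
        dy = 0.03 * y * (1.0 - a) - (0.01 if action == 1 else 0.0)
        self.state = np.clip(self.state + self.dt * np.array([da, dy]), 0.0, 2.0)
        # reward: +1 per step inside the "safe and just" region; episode ends outside it
        inside = (self.state[0] < 1.0) and (self.state[1] > 0.2)
        reward = 1.0 if inside else 0.0
        done = not inside
        return self.state.copy(), reward, done
```

A rollout then alternates `env.step(action)` calls with the agent's action choices, accumulating reward while the trajectory stays inside the safe region.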

We demonstrate the potential of our framework by applying DRL algorithms to stylized World-Earth system models. The agent applies management options to an environment, an Earth system model, and learns from the rewards the environment provides. We train our agent with a deep Q-network extended by current state-of-the-art algorithms. Conceptually, we thereby explore the feasibility of finding novel global governance policies that lead into a safe and just operating space constrained by planetary and socio-economic boundaries.
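The training scheme can be sketched as follows. For brevity the deep network is replaced by a linear Q-function, but the epsilon-greedy exploration, experience replay, TD-error gradient step, and periodic target-network sync have the same structure as in deep Q-learning. All transitions and hyperparameters here are synthetic illustrations, not values from the publication.

```python
import random
from collections import deque
import numpy as np

rng = np.random.default_rng(0)

# Linear Q-function Q(s, a) = w[a] @ s as a stand-in for the deep network;
# the structure of the update is the same as in DQN.
N_ACTIONS, DIM = 2, 2
w = rng.normal(scale=0.1, size=(N_ACTIONS, DIM))   # online parameters
w_target = w.copy()                                # target network
GAMMA, LR, EPS = 0.99, 0.05, 0.1
replay = deque(maxlen=10_000)                      # experience replay buffer

def act(s):
    """Epsilon-greedy action selection."""
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    return int(np.argmax(w @ s))

def td_step(batch):
    """One stochastic gradient step on the squared TD error per transition."""
    for s, a, r, s_next, done in batch:
        target = r + (0.0 if done else GAMMA * np.max(w_target @ s_next))
        td_error = target - (w[a] @ s)
        w[a] += LR * td_error * s  # gradient of 0.5 * td_error**2 w.r.t. w[a]

# Fill the buffer with synthetic transitions (placeholder for environment rollouts).
for _ in range(500):
    s = rng.random(DIM)
    a = act(s)
    s_next = np.clip(s + 0.1 * rng.normal(size=DIM), 0.0, 1.0)
    r = float(s_next.sum() > 1.0)  # toy reward signal
    replay.append((s, a, r, s_next, False))

for step in range(200):
    td_step(random.sample(replay, 32))
    if step % 50 == 0:
        w_target = w.copy()  # periodic target-network synchronization
```

In a full DQN the linear weights `w` become a neural network trained by backpropagation, and extensions such as a duelling architecture or prioritized replay modify the network and the sampling, while this basic loop stays the same.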

We find that the agent is able to learn novel, previously undiscovered policies that navigate the system into sustainable regions of the underlying conceptual models of the World-Earth system. In particular, the artificially intelligent agent learns that the timing of a specific mix of taxing carbon emissions and subsidies on renewables is of crucial relevance for finding World-Earth system trajectories that are sustainable in the long term. Overall, we show in this work how concepts and tools from artificial intelligence can help to address the current challenges on the way towards sustainable development.

Underlying publication

[1] Strnad, F. M., Barfuss, W., Donges, J. F., and Heitzig, J.: Deep reinforcement learning in World-Earth system models to discover sustainable management strategies, Chaos: An Interdisciplinary Journal of Nonlinear Science, AIP Publishing LLC, 29, 123122, 2019.

How to cite: Strnad, F., Barfuss, W., Donges, J., and Heitzig, J.: Deep reinforcement learning in World-Earth system models to discover sustainable management strategies, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11263, https://doi.org/10.5194/egusphere-egu2020-11263, 2020

Comments on the presentation


Presentation version 4 – uploaded on 01 May 2020, no comments
Updated and corrected reference list.
Presentation version 3 – uploaded on 29 Apr 2020, no comments
Corrected some typos and graphic issues.
Presentation version 2 – uploaded on 29 Apr 2020, no comments
Updated layout of the poster.
Presentation version 1 – uploaded on 28 Apr 2020, no comments