HS1.3.2 | Bridging physical, analytical, information-theoretic and machine learning approaches to system dynamics and predictability in Hydrology and Earth System Sciences
EDI | PICO
Co-organized by NP2
Convener: Rui A. P. Perdigão | Co-conveners: Julia Hall, Cristina Prieto (ECS), Maria Kireeva (ECS), Shaun Harrigan (ECS)
PICO | Tue, 25 Apr, 16:15–18:00 (CEST) | PICO spot 4
This session focuses on advances in theoretical, methodological and applied studies in hydrologic and broader earth system dynamics, regimes, transitions and extremes, along with their physical understanding, predictability and uncertainty, across multiple spatiotemporal scales.

The session further encourages discussion on interdisciplinary physical and data-based approaches to system dynamics in hydrology and broader geosciences, ranging from novel advances in stochastic, computational, information-theoretic and dynamical system analysis, to cross-cutting emerging pathways in information physics.

Contributions are gathered from a diverse community in hydrology and the broader geosciences, working with approaches ranging from dynamical modelling to data mining, machine learning and analysis, all with physical process understanding in mind.

The session further encompasses practical aspects of working with system analytics and information theoretic approaches for model evaluation and uncertainty analysis, causal inference and process networks, hydrological and geophysical automated learning and prediction.

The operational scope ranges from the discussion of mathematical foundations to the development and deployment of practical applications to real-world, spatially distributed problems.

The methodological scope spans inverse (data-based) information-theoretic and machine learning discovery tools, first-principles (process-based) forward modelling perspectives, and their interconnections across the interdisciplinary mathematics and physics of information in the geosciences.

Take part in a thrilling session exploring and discussing promising avenues in system dynamics and information discovery, quantification, modelling and interpretation, where methodological ingenuity and natural process understanding come together to shed light on fundamental theoretical aspects and to build innovative methodologies for tackling real-world challenges facing our planet.

PICO: Tue, 25 Apr | PICO spot 4

Chairpersons: Rui A. P. Perdigão, Julia Hall, Cristina Prieto
16:15–16:20
16:20–16:22 | PICO4.1 | EGU23-4039 | HS1.3.2 | ECS | On-site presentation
Manuel Álvarez Chaves, Anneli Guthke, Uwe Ehret, and Hoshin Gupta

The use of “hybrid” models that combine elements from physics-based and data-driven modelling approaches has grown in popularity and acceptance in recent years, but these models also present a number of challenges that must be addressed to ensure their effectiveness and reliability. In this project, we propose a toolbox of methods based on information theory as a step towards a unified framework for the diagnostic evaluation of “hybrid” models. Information theory provides a set of mathematical tools that can be used to study input data, model architecture and predictions, which can be helpful in understanding the performance and limitations of “hybrid” models.

Through a comprehensive case study of rainfall-runoff hydrological modelling, we show how a very simple physics-based model can be coupled in different ways with neural networks to develop “hybrid” models. The proposed toolbox is then applied to these “hybrid” models to extract insights that guide model improvement and refinement. Diagnostic scores based on the entropy (H) of individual predictions and the Kullback-Leibler divergence (KLD) between predictions and observations are introduced. Mutual information (I) is also used as a more all-encompassing metric which informs about the aleatory and epistemic uncertainties of a particular model. To address the challenge of calculating information-theoretic quantities for continuous variables (such as streamflow), the toolbox takes advantage of different estimators of differential entropy, namely binning, kernel density estimation (KDE) and k-nearest neighbors (k-NN).
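As a pointer to how the k-NN route to differential entropy works, a minimal Kozachenko-Leonenko estimator might be sketched as follows (Python; Euclidean distances, entropy in nats; the function and test data are illustrative and not the UNITE toolbox API):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(x, k=3):
    """Kozachenko-Leonenko k-NN estimate of differential entropy (nats)."""
    x = np.asarray(x, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    n, d = x.shape
    # Distance from each sample to its k-th nearest neighbour (self excluded).
    dist, _ = cKDTree(x).query(x, k=k + 1)
    eps = dist[:, -1]
    # Log-volume of the d-dimensional unit ball (Euclidean norm).
    log_c_d = 0.5 * d * np.log(np.pi) - gammaln(0.5 * d + 1.0)
    return digamma(n) - digamma(k) + log_c_d + d * np.mean(np.log(2.0 * eps))

# Sanity check against the analytical entropy of a standard Gaussian,
# 0.5 * log(2 * pi * e) ≈ 1.42 nats.
rng = np.random.default_rng(0)
print(knn_entropy(rng.standard_normal(10_000), k=3))
```

Applied jointly to predictions and observations, the same kind of estimator also yields the KLD and mutual information terms; the KDE and binning routes are analogous.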

How to cite: Álvarez Chaves, M., Guthke, A., Ehret, U., and Gupta, H.: UNITE: A toolbox for unified diagnostic evaluation of physics-based, data-driven and hybrid models based on information theory, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-4039, https://doi.org/10.5194/egusphere-egu23-4039, 2023.

16:22–16:24 | PICO4.2 | EGU23-487 | HS1.3.2 | ECS | On-site presentation
Abhilash Singh and Kumar Gaurav

We propose Physics-Informed Machine Learning (PIML) algorithms, based on Artificial Neural Networks (ANN), to estimate surface soil moisture from Sentinel-1/2 satellite images. We use the Improved Integral Equation Model (I2EM) to simulate the radar backscatter in VV polarisation. In addition, we select a set of polarisation-derived features (VH, VH/VV, VH-VV), the incidence angle, the Normalised Difference Vegetation Index (NDVI), and topography as input features to map surface soil moisture. We follow two different PIML approaches. The first introduces an observation bias, in which the per-pixel difference between the VV backscatter observed by the satellite and that simulated by the theoretical model is used as an additional input feature. The second introduces a learning bias, in which the loss function is modified with the help of the I2EM model. Our results show that the learning-bias PIML outperforms the observation-bias PIML, with R = 0.94, RMSE = 0.019 m3/m3, and bias = -0.03. We also compared the performance against standalone benchmark algorithms, and the learning-bias PIML emerged as the most accurate model for estimating surface soil moisture. The proposed approach is a step forward in estimating accurate surface soil moisture at high spatial resolution from remote sensing images.
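The abstract does not detail the implementation, but the learning-bias idea of modifying the loss with I2EM can be sketched as follows, assuming a differentiable I2EM surrogate is available; all names (e.g. i2em_surrogate) are hypothetical placeholders rather than the authors' code:

```python
# Sketch of a "learning bias" physics-informed loss (PyTorch). The network maps
# input features to soil moisture; a penalty ties the I2EM-simulated VV
# backscatter for the predicted moisture to the observed VV backscatter.
import torch
import torch.nn as nn

class SoilMoistureNet(nn.Module):
    """Small ANN: features (VH, VH/VV, VH-VV, incidence angle, NDVI, ...) -> soil moisture."""
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),   # volumetric soil moisture in [0, 1]
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def physics_informed_loss(model, x, sm_obs, vv_obs, incidence, roughness,
                          i2em_surrogate, lam=0.1):
    """Data misfit plus physics penalty weighted by lam."""
    sm_pred = model(x)
    data_term = torch.mean((sm_pred - sm_obs) ** 2)
    vv_sim = i2em_surrogate(sm_pred, incidence, roughness)   # hypothetical differentiable I2EM
    physics_term = torch.mean((vv_sim - vv_obs) ** 2)
    return data_term + lam * physics_term
```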

How to cite: Singh, A. and Gaurav, K.: A physics-informed machine learning approach to estimate surface soil moisture, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-487, https://doi.org/10.5194/egusphere-egu23-487, 2023.

16:24–16:26 | PICO4.3 | EGU23-10707 | HS1.3.2 | On-site presentation
Milan Paluš

Inference of causality and understanding of extreme events are two intensively developing multidisciplinary areas highly relevant for the Earth sciences. Surprisingly, there is only limited interaction between the two research areas.

Quantification of causality in terms of improved predictability was proposed by the father of cybernetics N. Wiener [1] and formulated for time series by C.W.J. Granger [2]. Granger causality evaluates predictability in bivariate autoregressive models. This concept has been generalized for nonlinear systems using methods rooted in information theory [3]. The information theory of Shannon, however, usually ignores two important properties of Earth system dynamics: the evolution on multiple time scales and heavy-tailed probability distributions. While the multiscale character of complex dynamics, such as the air temperature variability, can be studied within the Shannonian framework in combination with the wavelet transform [4], the entropy concepts of Rényi and Tsallis have been proposed to cope with variables with heavy-tailed probability distributions. We will discuss how such non-Shannonian entropy concepts can be applied to the inference of causality in systems with heavy-tailed probability distributions and extreme events. Using examples from the climate system, we will focus on causal effects of the North Atlantic Oscillation, blocking events and the Siberian high on winter and spring cold waves in Europe, including the April 2021 frosts endangering French vineyards. Using these non-Shannonian information-theoretic concepts, we bridge the inference of causality and the understanding of the occurrence of extreme events.

Supported by the Czech Academy of Sciences, Praemium Academiae awarded to M. Paluš.

[1] N. Wiener, in: E. F. Beckenbach (Editor), Modern Mathematics for Engineers (McGraw-Hill, New York, 1956)

[2] C.W.J. Granger, Econometrica 37 (1969) 424

[3] K. Hlaváčková-Schindler  et al., Phys. Rep. 441 (2007)  1; M. Paluš, M. Vejmelka, Phys. Rev. E 75 (2007) 056211; J. Runge et al., Nature Communications 6 (2015) 8502

[4] M. Paluš, Phys. Rev. Lett. 112 (2014) 078702; N. Jajcay, J. Hlinka, S. Kravtsov, A. A. Tsonis, M. Paluš, Geophys. Res. Lett. 43(2) (2016) 902–909
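As a minimal pointer to the non-Shannonian quantities mentioned in the abstract, a plug-in Rényi entropy of order alpha on binned data can be computed as below; this is a deliberately simple sketch for a heavy-tailed sample, not the estimators or causality measures used in the cited work:

```python
# Plug-in Rényi entropy H_alpha = log(sum p^alpha) / (1 - alpha) in nats,
# contrasted with its Shannon limit (alpha -> 1) for a heavy-tailed sample.
import numpy as np

def renyi_entropy(samples, alpha, bins=64):
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))          # Shannon entropy
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

rng = np.random.default_rng(0)
heavy = rng.pareto(a=1.5, size=10_000)         # heavy-tailed (Pareto) variable
print(renyi_entropy(heavy, alpha=1.0))         # Shannon limit
print(renyi_entropy(heavy, alpha=2.0))         # collision (Rényi-2) entropy
```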

How to cite: Paluš, M.: Non-Shannonian information theory connects inference of causality and understanding of extreme events, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10707, https://doi.org/10.5194/egusphere-egu23-10707, 2023.

16:26–16:28 | PICO4.4 | EGU23-5 | HS1.3.2 | ECS | On-site presentation
Marco Albert Öttl, Jens Bender, and Jürgen Stamm

In analyses of river dike stability, the interaction between the magnitude of the flood-level load and the resulting seepage is a highly relevant process. The seepage line separates the cross-sectional area into a water-saturated and an unsaturated part. For homogeneous levees, the position of the seepage line in the stationary case is imprinted on the system by the outer geometry (cubature) and is well on the safe side for real flood events. In the non-stationary case, the position of the seepage line depends primarily on the changing water level of a flood hydrograph, the resulting water content and suction stresses in the dike, as well as the saturated permeability of the dike construction materials. In current design practice according to DIN 19712 and the German guideline DWA-M 507, the characteristics of the hydrograph are not applied directly; the resulting impoundment duration of a flood hydrograph, for example, has so far only been considered indirectly.
This paper presents a methodology that quantifies natural dependency structures for a selected dike section using synthetically generated design hydrographs in a probabilistic framework. These results are then integrated directly into the geohydraulic simulation of water penetration. Based on selected water level and discharge time series at a dike section, flood waves can be described by five parameters using the extended flood characteristic simulation according to MUNLV¹. After fitting suitable distribution functions, dependencies in the load structure are quantified using copula functions. Subsequently, any number of synthetic flood hydrographs can be generated by combining these parameters. In keeping with the Monte Carlo principle, a sufficiently large number of synthetic events ensures that extreme conditions with a low probability of occurrence are reliably represented.
Using a dedicated routine, the moisture-penetration process for the individual flood hydrographs can be simulated and visualised in a transient geohydraulic numerical model at different points in time. Finally, the behaviour of the resulting seepage lines can be derived and predicted as a function of the loading situation. Based on these results, a reliability analysis then demonstrates the stability of the dike section under the given extreme conditions.

¹ Ministerium für Umwelt, Landwirtschaft, Natur und Verbraucherschutz des Landes Nordrhein-Westfalen
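To make the copula-based Monte Carlo step concrete, the sketch below fits parametric marginals to a set of hydrograph parameters, captures their dependence with a Gaussian copula, and draws synthetic parameter sets; the copula family, marginals and data are placeholder assumptions, not the authors' specific choices:

```python
# Gaussian-copula sampling of synthetic hydrograph parameter sets.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Placeholder "observed" hydrograph parameters, shape (n_events, 5), e.g.
# peak discharge, peak level, duration, volume, time to peak (correlated here).
base = rng.gamma(shape=2.0, scale=1.0, size=(200, 1))
obs = base + rng.gamma(shape=2.0, scale=1.0, size=(200, 5))

# 1) Fit parametric marginals (gamma chosen purely as an example).
marginals = [stats.gamma(*stats.gamma.fit(obs[:, j], floc=0)) for j in range(5)]

# 2) Transform to Gaussian scores and estimate the copula correlation matrix.
u = np.column_stack([m.cdf(obs[:, j]) for j, m in enumerate(marginals)])
z = stats.norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))
corr = np.corrcoef(z, rowvar=False)

# 3) Monte Carlo: draw correlated normals, map back through the marginals.
n_synth = 100_000
z_s = rng.multivariate_normal(np.zeros(5), corr, size=n_synth)
u_s = stats.norm.cdf(z_s)
synthetic = np.column_stack([m.ppf(u_s[:, j]) for j, m in enumerate(marginals)])
# 'synthetic' now holds parameter sets for synthetic design hydrographs.
```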

How to cite: Öttl, M. A., Bender, J., and Stamm, J.: Probabilistic analysis of river levees under consideration of time-dependent loads, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5, https://doi.org/10.5194/egusphere-egu23-5, 2023.

16:28–16:30 | PICO4.5 | EGU23-13068 | HS1.3.2 | ECS | On-site presentation
Márk Somogyvári, Ute Fehrenbach, Dieter Scherer, and Tobias Krueger

The standard approach to modeling lake level dynamics today is process-based modeling. Developing such models requires extensive knowledge of the investigated system, especially of the different hydrological flow processes. When some of this information is missing, these models can produce distorted results and miss important system characteristics.

In this study, we show how data-driven modeling can help identify the key drivers of lake level changes. We use the example of the Groß Glienicker Lake, a glacial, groundwater-fed lake near Berlin. This lake has been experiencing a drastic loss of water in recent decades, a trend that has accelerated in the last few years. There is local controversy over whether these changes are mainly weather-driven or caused by water use, and over what mitigation measures could counteract them. Due to the strong anthropogenic influence from multiple water-related facilities near the lake and the lack of geological information from the catchment, there are many unknowns about the hydrological processes, which makes developing a process-based model in the area challenging. To understand the system better, we combine data-driven models with water balance approaches and use this methodology as an alternative to classic hydrological modeling.

The climatic model input (catchment-average precipitation and actual evapotranspiration) is taken from the Central Europe Refined analysis (CER), a meteorological dataset generated by dynamically downscaling the Weather Research and Forecasting model (Jänicke et al., 2017). First, a data-driven model is constructed to predict the change in lake level one day ahead from the precipitation and evapotranspiration values of the last two months, a time interval selected after an extensive parameter analysis. This model is then further extended with additional inputs, such as water abstraction rates and river and groundwater levels. The fits of the different simulated lake levels are evaluated to identify the effects of the relevant drivers of the lake level dynamics. For a more mechanistic interpretation, a monthly water balance model was created using the same dataset. By calculating the different fluxes within the system, we were able to estimate the magnitudes of unobserved hydrological components.

With the help of our modeling approach, we could rule out the influence of one of the nearby waterworks and of a river. We also found that the lake level dynamics over the last two decades were mainly weather-driven, and that the lake level fluctuations can be explained by changes in precipitation and evapotranspiration. With the water balance modeling, we showed that the long-term net outflux from the lake catchment has increased in the last few years. These findings are being used to support the development of a local high-resolution hydrogeological model, which could be used to further analyze these processes.

References

Jänicke, B., Meier, F., Fenner, D., Fehrenbach, U., Holtmann, A., Scherer, D. (2017): Urban-rural differences in near-surface air temperature as resolved by the Central Europe Refined analysis (CER): sensitivity to planetary boundary layer schemes and urban canopy models. Int. J. Climatol. 37 (4), 2063-2079. DOI: 10.1002/joc.4835
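A minimal sketch of the lagged-feature, data-driven idea, predicting the next-day lake-level change from the previous two months of precipitation and evapotranspiration, could look like the following; synthetic data and a ridge regression stand in for the actual model and the CER forcing:

```python
# Lagged-feature regression for next-day lake-level change (illustrative only).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

def lagged_features(precip, et, lake_level, lag=60):
    """Stack the last `lag` days of P and ET as predictors of the next-day change."""
    X, y = [], []
    for t in range(lag, len(lake_level) - 1):
        X.append(np.concatenate([precip[t - lag:t], et[t - lag:t]]))
        y.append(lake_level[t + 1] - lake_level[t])
    return np.asarray(X), np.asarray(y)

rng = np.random.default_rng(1)
n = 2000
precip = rng.gamma(0.8, 2.0, n)                 # placeholder daily precipitation
et = 2.0 + np.sin(np.arange(n) / 58.0)          # placeholder daily evapotranspiration
level = 30 + np.cumsum(0.001 * (precip - et)) + rng.normal(0, 0.002, n)

X, y = lagged_features(precip, et, level, lag=60)
scores = cross_val_score(Ridge(alpha=1.0), X, y,
                         cv=TimeSeriesSplit(n_splits=5), scoring="r2")
print("time-series CV R^2:", scores.round(2))
```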

How to cite: Somogyvári, M., Fehrenbach, U., Scherer, D., and Krueger, T.: Identifying the drivers of lake level dynamics using a data-driven modeling approach, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-13068, https://doi.org/10.5194/egusphere-egu23-13068, 2023.

16:30–16:32 | PICO4.6 | EGU23-14704 | HS1.3.2 | Highlight | On-site presentation
Steven Weijs, Alexander Werenka, and Daniel Kovacek

Streamflow monitoring is a key input to water resource management, as it is an important source of information for understanding hydrological processes and predicting catchment behaviour and the resulting flows. Both the monitored and the predicted flows support important decisions in areas such as infrastructure design, flood forecasting and resource allocation. It is therefore essential that the predictive information we have about our water resources serves these various needs.

Since observations are from the past and our decisions affect the future, models are needed to extrapolate measurements in time. Similarly, streamflow is not always measured at the places where the information is needed, so interpolation or extrapolation is needed in space or across catchment properties and climates. Recent advances in publicly available large datasets of streamflow records and corresponding catchment characteristics have enabled successful applications of machine learning to this prediction problem, leading to increased predictability in ungauged basins.

Since information content is related to surprise, we could see the objective of monitoring networks as manufacturing surprising data. This is formalized in approaches for monitoring network design based on information theory, where often the information content of the sources, i.e. the existing monitoring stations, has been investigated, including the effects of redundancy due to shared information between stations.

In this research, we argue that information content is related to unpredictability, but is inevitably filtered through several layers, which should be considered in monitoring network design. Examples of such filters are the models used for extrapolation to ungauged sites of interest, the target statistics of interest to be predicted, and the decision-making purpose of those predictions. This means that the optimal monitoring strategy (where to measure, with how much precision and resolution, and for how long) depends on evolving modeling capabilities and the representation of societal needs. Biases in the current networks may also exist as a function of how they are funded.

In this presentation, these theoretical aspects are explored with examples from an ongoing project investigating the streamflow monitoring network in British Columbia, Canada, which recently experienced record-breaking floods.
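As a toy illustration of the information-theoretic framing above, the sketch below computes the marginal entropy ("potential for surprise") of one gauge record and the mutual information (redundancy) it shares with another, from binned data; it is illustrative only and not the project's methodology or data:

```python
# Entropy and redundancy of gauge records from binned (discretized) flows.
import numpy as np

def entropy(counts):
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))             # bits

def mutual_info(x, y, bins=16):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    hx = entropy(joint.sum(axis=1))
    hy = entropy(joint.sum(axis=0))
    hxy = entropy(joint.ravel())
    return hx + hy - hxy                        # I(X;Y) = H(X) + H(Y) - H(X,Y)

rng = np.random.default_rng(7)
upstream = rng.lognormal(3.0, 0.8, 5000)                      # gauge A
downstream = 1.4 * upstream + rng.lognormal(2.0, 0.5, 5000)   # gauge B, partly redundant
print("H(A)   [bits]:", entropy(np.histogram(upstream, bins=16)[0]).round(2))
print("I(A;B) [bits]:", mutual_info(upstream, downstream).round(2))
```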

How to cite: Weijs, S., Werenka, A., and Kovacek, D.: Manufacturing surprise: How information content, modeling capabilities and decision making purpose influence optimal streamflow monitoring, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-14704, https://doi.org/10.5194/egusphere-egu23-14704, 2023.

16:32–16:34 | PICO4.7 | EGU23-16510 | HS1.3.2 | Highlight | On-site presentation
Praveen Kumar and Leila Hernandez Rodriguez

Turbulence at the biosphere-atmosphere interface refers to the presence of chaotic and chaotic-like fluctuations or patterns in the exchange of energy, matter, or information between the biosphere and atmosphere. These fluctuations can occur at various scales and can affect the transfer of heat, moisture, and gases. In this study, we use causal discovery to explore how high-frequency (i.e., 10 Hz) data of different variables at a flux tower, such as wind speed, air temperature, and water vapor, exhibit interdependencies. We use Directed Acyclic Graphs (DAGs) to identify how these variables influence each other at high frequency. We tested the hypothesis that different types of DAGs are present during the daytime at the land-atmosphere interface, and we developed an approach to identify patterns of DAGs that have similar behavior. To do this, we use distance-based classification to characterize the differences between DAGs and a k-means clustering approach to identify the number of clusters. We look at sequences of DAGs from 3-minute periods of high-frequency data to study how the causal relationships between the variables change over time. We compare our results from a clear-sky day with those from a solar eclipse to see how changes in the environment affect the relationships between the variables. We found that during periods of high primary productivity, the causal relationship between water vapor and carbon dioxide shows a strong coupling between photosynthesis and transpiration. At high frequencies, we found that thermodynamics influences the dynamics of water vapor and carbon dioxide. Our framework makes it possible to study how dependence in turbulence is manifested at high frequencies at the land-atmosphere interface.
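One concrete way to realise the distance-based clustering of causal graphs is sketched below: each window's DAG is represented as a flattened adjacency matrix and the windows are clustered with k-means; the random placeholder graphs and cluster counts are illustrative, not the authors' code or data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_windows, n_vars = 120, 5            # e.g. wind speed, T, water vapor, CO2, ...
# Placeholder: one binary adjacency matrix per 3-minute window
# (random here, acyclicity not enforced).
dags = (rng.random((n_windows, n_vars, n_vars)) < 0.25).astype(int)

# Flatten each DAG to a vector of edges. For binary vectors, squared Euclidean
# distance is proportional to Hamming distance, so k-means on these vectors is
# consistent with a Hamming-distance view of DAG dissimilarity.
X = dags.reshape(n_windows, -1)

for k in range(2, 6):                 # crude scan to choose the cluster count
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, "inertia:", round(km.inertia_, 1))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
```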

How to cite: Kumar, P. and Rodriguez, L. H.: Evolution of Causal Structure of Interactions in Turbulence at the Biosphere-Atmosphere interface, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-16510, https://doi.org/10.5194/egusphere-egu23-16510, 2023.

16:34–16:36 | PICO4.8 | EGU23-11222 | HS1.3.2 | ECS | Virtual presentation
Sadegh Sadeghi Tabas and Vidya Samadi

This research investigated the applicability of a probabilistic physics-informed Deep Learning (DL) algorithm, the deep autoregressive network (DeepAR), for rainfall-runoff modeling across the continental United States (CONUS). Catchment physical parameters with varying spatiotemporal variability were incorporated into the probabilistic DeepAR algorithm to simulate rainfall-runoff processes across Hydrologic Unit Code 8 (HUC8) watersheds. We benchmarked our proposed model against several physics-based hydrologic approaches, including the Sacramento Soil Moisture Accounting model (SAC-SMA), the Variable Infiltration Capacity model (VIC), the Framework for Understanding Structural Errors (FUSE), the Hydrologiska Byråns Vattenbalansavdelning model (HBV), and the mesoscale hydrologic model (mHM). These approaches were implemented using the Catchment Attributes and Meteorology for Large-sample Studies (CAMELS) and Maurer datasets. The analysis suggests that catchment physical attributes such as drainage area have significant impacts on rainfall-runoff generation mechanisms, while the contribution of the catchment fraction of carbonate sedimentary rocks was insignificant. The results of the proposed physics-informed DeepAR simulations were comparable to, and in some respects superior to, those of the well-known conceptual hydrologic models across the CONUS.
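As a generic stand-in for a DeepAR-style probabilistic sequence model (not the study's implementation and not an existing DeepAR library), an LSTM that conditions on meteorological forcing, static catchment attributes and lagged runoff and outputs a Gaussian per time step could be sketched as:

```python
# Probabilistic autoregressive runoff model: mean and variance per time step.
import torch
import torch.nn as nn

class ProbRunoffLSTM(nn.Module):
    def __init__(self, n_dynamic, n_static, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_dynamic + n_static + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)           # mean and log-variance

    def forward(self, forcing, static, lagged_q):
        # forcing: (B, T, n_dynamic); static: (B, n_static); lagged_q: (B, T, 1)
        static_rep = static.unsqueeze(1).expand(-1, forcing.shape[1], -1)
        x = torch.cat([forcing, static_rep, lagged_q], dim=-1)
        h, _ = self.lstm(x)
        mu, log_var = self.head(h).chunk(2, dim=-1)
        return mu.squeeze(-1), log_var.squeeze(-1)

def gaussian_nll(mu, log_var, q_obs):
    """Negative log-likelihood of observed runoff under the predicted Gaussian."""
    return 0.5 * (log_var + (q_obs - mu) ** 2 / log_var.exp()).mean()
```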

How to cite: Sadeghi Tabas, S. and Samadi, V.: A Probabilistic Physics-informed Deep Learning Model for Rainfall-runoff Prediction across Continental United States, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-11222, https://doi.org/10.5194/egusphere-egu23-11222, 2023.

16:36–16:38 | PICO4.9 | EGU23-15968 | HS1.3.2 | Virtual presentation
Chaopeng Shen, Alison Appling, Pierre Gentine, Toshiyuki Bandai, Hoshin Gupta, Alexandre Tartakovsky, Marco Baity-Jesi, Fabrizio Fenicia, Daniel Kifer, Xiaofeng Liu, Li Li, Dapeng Feng, Wei Ren, Yi Zheng, Ciaran Harman, Martyn Clark, Matthew Farthing, and Praveen Kumar

Process-Based Modeling (PBM) and Machine Learning (ML) are often perceived as distinct paradigms in the geosciences. Here we present differentiable geoscientific modeling as a powerful pathway toward dissolving the perceived barrier between them and ushering in a paradigm shift. For decades, PBM offered benefits in interpretability and physical consistency but struggled to efficiently leverage large datasets. ML methods, especially deep networks, presented strong predictive skill yet lacked the ability to answer specific scientific questions. While various methods have been proposed for ML-physics integration, an important underlying theme, differentiable modeling, is not sufficiently recognized. Here we outline the concepts, applicability, and significance of differentiable geoscientific modeling (DG). “Differentiable” refers to accurately and efficiently calculating gradients with respect to model variables, which critically enables the learning of high-dimensional unknown relationships. DG refers to a range of methods connecting varying amounts of prior knowledge to neural networks and training them together; it captures a different scope than physics-guided machine learning and places greater emphasis on first principles. In this talk we provide examples of DG in global hydrology, ecosystem modeling, water quality simulations, and other domains. Preliminary evidence suggests that DG offers better interpretability and causality than ML, improved generalizability and extrapolation capability, and strong potential for knowledge discovery, while approaching the performance of purely data-driven ML. DG models require less training data while scaling favorably in performance and efficiency with increasing amounts of data. With DG, geoscientists may be better able to frame and investigate questions, test hypotheses, and discover unrecognized linkages.
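A minimal sketch of what differentiable modeling can mean in practice is shown below: a one-bucket water balance written in an autodiff framework, with one process relationship replaced by a small neural network and the whole model trained end-to-end against discharge; the data and structure are hypothetical and not any of the models discussed in the talk:

```python
import torch
import torch.nn as nn

class HybridBucket(nn.Module):
    def __init__(self):
        super().__init__()
        # NN learns how the runoff fraction depends on storage and precipitation.
        self.runoff_frac = nn.Sequential(nn.Linear(2, 16), nn.Tanh(),
                                         nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, precip, pet, s0=10.0):
        s, q_out = torch.tensor(s0), []
        for p, e in zip(precip, pet):
            frac = self.runoff_frac(torch.stack([s, p]).unsqueeze(0)).squeeze()
            q = frac * p                        # learned runoff generation
            et = torch.minimum(e, s)            # process-based ET constraint
            s = s + p - q - et                  # differentiable mass balance
            q_out.append(q)
        return torch.stack(q_out)

model = HybridBucket()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
precip = torch.rand(365) * 5                    # placeholder forcing
pet = torch.rand(365) * 3
q_obs = torch.rand(365)                         # placeholder observations
for _ in range(200):
    opt.zero_grad()
    loss = ((model(precip, pet) - q_obs) ** 2).mean()
    loss.backward()                             # gradients flow through the process model
    opt.step()
```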

How to cite: Shen, C., Appling, A., Gentine, P., Bandai, T., Gupta, H., Tartakovsky, A., Baity-Jesi, M., Fenicia, F., Kifer, D., Liu, X., Li, L., Feng, D., Ren, W., Zheng, Y., Harman, C., Clark, M., Farthing, M., and Kumar, P.: Differentiable modeling to unify machine learning and physical models and advance Geosciences, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15968, https://doi.org/10.5194/egusphere-egu23-15968, 2023.

16:38–16:40 | PICO4.10 | EGU23-16678 | HS1.3.2 | Virtual presentation
Elnaz Heidari and Vidya Samadi

Recent years have seen an uptick in the frequency of record flooding in the United States, with South Carolina (SC) being particularly hard hit. This study developed various deep recurrent neural networks (DRNNs), namely the vanilla RNN, Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) architectures, for flood event simulation. Precipitation and USGS gaging data were preprocessed and fed into the DRNNs to predict flood events across several catchments in SC. The DRNNs were trained and evaluated using hourly datasets, and the outcomes were then compared with observed data and National Water Model (NWM) simulations. The analysis suggests that the LSTM and GRU networks skillfully predicted the shape of flood hydrographs, including the rising/falling limbs, peak rates, flood volume, and time to peak, while the NWM vastly overestimated the flood hydrographs. Among the climatic variables fed into the DRNNs, rainfall amount and its spatial distribution were the dominant input variables for flood prediction in SC.
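The hydrograph characteristics used for the comparison above can be computed directly from simulated and observed series; a simple sketch with hourly data and illustrative numbers:

```python
import numpy as np

def hydrograph_metrics(q, dt_hours=1.0):
    """Peak discharge, time to peak (hours) and event volume for a single-event
    discharge series q (e.g. in m3/s)."""
    q = np.asarray(q, dtype=float)
    peak = q.max()
    time_to_peak = q.argmax() * dt_hours
    volume = q.sum() * dt_hours * 3600.0        # m3 if q is in m3/s
    return {"peak": peak, "time_to_peak_h": time_to_peak, "volume_m3": volume}

obs = np.array([5, 8, 20, 55, 90, 70, 40, 22, 12, 7], dtype=float)
sim = np.array([5, 9, 25, 60, 80, 65, 45, 25, 14, 8], dtype=float)
for name, q in [("observed", obs), ("simulated", sim)]:
    print(name, hydrograph_metrics(q))
```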

How to cite: Heidari, E. and Samadi, V.: Application of Deep Recurrent Neural Networks for Flood Prediction and Assessment, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-16678, https://doi.org/10.5194/egusphere-egu23-16678, 2023.

16:40–16:42 | PICO4.11 | EGU23-8388 | HS1.3.2 | ECS | Highlight | Virtual presentation
Shivendra Srivastava, Nishant Kumar, Arindam Malakar, Sruti Das Choudhury, Chittaranjan Ray, and Tirthankar Roy

Globally, agricultural irrigation accounts for 70% of water use and faces extensive and increasing water constraints. Well-designed irrigation scheduling can help determine the appropriate timing and water requirement for crop development and consequently improve water use efficiency. This research aims to assess the probability that irrigation is needed for agricultural operations, considering soil moisture, evaporation, and leaf area index as indicators of crop water requirement. The irrigation-scheduling decision is based on a three-step methodology. First, relevant variables for each indicator are identified using a Random Forest regressor, followed by the development of a Long Short-Term Memory (LSTM) model to predict the three indicators. Second, errors in the simulation of each indicator are calculated by comparing the predicted values against the actual values; these are then used to calculate the (normalized) error weights of the three indicators for each month, to capture seasonal variations. Third, the empirical distribution of each indicator is obtained for each month using the estimated error values and is then adjusted based on the error weights calculated in the previous step. The probabilities associated with three threshold values (for each indicator) are considered, corresponding to three levels of irrigation requirement, i.e., low, medium, and high. The proposed approach provides a probabilistic framework for irrigation scheduling, which can significantly benefit farmers and policymakers in making more informed decisions about irrigation scheduling.
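A hedged sketch of the probabilistic third step is given below: an empirical distribution for one indicator is built from its forecast plus resampled model errors (scaled by a monthly error weight), and probabilities relative to the three thresholds are read off; all numbers, thresholds, weights and the exceedance direction are placeholder assumptions:

```python
# Threshold probabilities from an error-resampled empirical distribution.
import numpy as np

rng = np.random.default_rng(11)

def threshold_probs(forecast, past_errors, thresholds, weight=1.0):
    """Empirical P(indicator > threshold) from forecast + resampled errors,
    with errors scaled by a (monthly, normalized) error weight."""
    samples = forecast + weight * rng.choice(past_errors, size=5000, replace=True)
    return {name: float(np.mean(samples > thr)) for name, thr in thresholds.items()}

# Example for one indicator (generic units; direction and values are placeholders).
past_errors = rng.normal(0.0, 0.05, size=400)        # LSTM residuals for this month
thresholds = {"low": 0.20, "medium": 0.35, "high": 0.50}
print(threshold_probs(forecast=0.33, past_errors=past_errors,
                      thresholds=thresholds, weight=0.8))
```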

How to cite: Srivastava, S., Kumar, N., Malakar, A., Choudhury, S. D., Ray, C., and Roy, T.: An ML-based Probabilistic Approach for Irrigation Scheduling, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8388, https://doi.org/10.5194/egusphere-egu23-8388, 2023.

16:42–16:44 | PICO4.12 | EGU23-6617 | HS1.3.2 | Highlight | Virtual presentation
Rui A. P. Perdigão and Julia Hall

We introduce and illustrate our recently developed Augmented Information Physical Systems Intelligence (AIPSI), leveraging and enhancing our proprietary Information Physical Artificial Intelligence (IPAI) and Earth System Dynamical Intelligence (ESDI) to further the mathematically robust, physically consistent and computationally efficient holistic articulation and integration across the latest advances in fundamental physics, geophysical sciences and information technologies.

In theoretical terms, AIPSI provides a more general, principled lingua franca and formal construct for complex system dynamics and analytics, beyond traditional hybridisation among stochastic-dynamic, information-theoretic, artificial intelligence and mechanistic techniques.

In practical terms, it empowers improved high-resolution spatiotemporal early detection, robust attribution, high-performance forecasting and decision support across multisectoral theatres of operation pertaining to multiple interacting hazards, whether natural, social or hybrid.

With operational applications in mind, AIPSI methodologically improves the sharp trade-off between the speed and accuracy of multi-hazard sensing, analysis and simulation techniques, along with the quantification and management of the associated uncertainties and predictability, with sharper spatio-temporal resolution, robustness and lead time.

This is further supported by the advanced Meteoceanics QITES constellation, which performs coordinated volumetric dynamic sensing and processing of gravitational and electrodynamic fluctuations, thereby providing an instrumentation ecosystem for anticipatory early detection of extreme events such as flash floods, explosive cyclogenesis and imminent disruptive structural critical transitions across built and natural environments.

With the methodological developments at hand, a diverse set of applications to critical theatres of operation is presented, ranging from early detection and advance modelling to decision support for environmental and security agencies entrusted with the protection and nurturing of our society and the environment. These applications contribute to more robust early detection, preparedness, response, mitigation and recovery across complex socio-environmental hazards, such as those involving massive wildfires, floods and their nonlinear compound interplay, along with their underlying mechanisms and consequences.

The presentation concludes with an overview of a new large-scale international initiative on multi-hazard risk intelligence networks, in which an eclectic diversity of actors, ranging from academia and industry to institutions and civil society, come together to co-create emerging pathways for taking this challenging quest even further, in a fundamental coevolution between cutting-edge science, groundbreaking technology and socio-environmental insights that further enriches the ever-learning system dynamic framework at the core of our multi-hazard research and service.

Acknowledgement: This contribution is funded by the European Union under the Horizon Europe grant 101074004 (C2IMPRESS).

 

How to cite: Perdigão, R. A. P. and Hall, J.: Augmented Information Physical Systems Intelligence (AIPSI) for enhanced spatiotemporal early detection, attribution, prediction and decision support on multi-hazards, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-6617, https://doi.org/10.5194/egusphere-egu23-6617, 2023.

16:44–18:00