EGU23-15241, updated on 26 Feb 2023
https://doi.org/10.5194/egusphere-egu23-15241
EGU General Assembly 2023
© Author(s) 2023. This work is distributed under
the Creative Commons Attribution 4.0 License.

3-D Reconstructions of Porous Media from 2-D input via Generative Adversarial Networks (GANs)

Hannah Vogel, Hamed Amiri, Austin Arias, Oliver Plümper, and Markus Ohl
  • Utrecht University, Faculty of Geosciences, Department of Earth Science, Utrecht, Netherlands (h.p.vogel@uu.nl)

Many macroscopic transport properties and physical processes, such as the flow of fluids through a porous medium, are directly controlled by the medium's microstructure, specifically the presence and connectivity of individual pore spaces at micron and submicron scales. Reconstructing and evaluating the material properties of porous media plays a key role across many engineering disciplines, from subsurface storage (e.g., CO2 and hydrogen) to geothermal energy and reservoir characterization. As such, the rapid and reliable characterization, evaluation, and simulation of complex pore microstructures is required not only to enhance our understanding of the fundamental processes occurring at the pore scale, but also to better estimate material behavior at larger scales.

These material behaviors are inherently volumetric and therefore cannot be accurately modelled using two-dimensional (2D) data alone. As a result, the accuracy of reconstruction techniques used to extract morphological properties and spatial distributions depends in part on the quality of the available three-dimensional (3D) microstructural datasets. However, 2D images are typically more cost-efficient to acquire, easier to collect, and of higher resolution than their 3D counterparts. Generating statistically accurate 3D reconstructions of complex pore microstructures from high-resolution 2D datasets is therefore essential to bridging this dimensionality gap.

Newly explored 2D-to-3D reconstruction techniques based on deep-learning (DL) algorithms offer an alternative means of generating robust and statistically representative digital 3D rock reconstructions by learning the spatial morphological properties and statistical microstructural descriptors (SMDs) of porous media from high-resolution 2D datasets. These DL models are highly flexible and can capture a variety of complex microstructural features given representative 2D training datasets. In this work, we implement a recently developed deep Generative Adversarial Network (GAN), known as SliceGAN, to synthesize novel binary digital 3D reconstructions using high-resolution 2D back-scattered electron (BSE) images obtained from thin sections oriented along the x-, y-, and z-directions.
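To make the slicing idea concrete, the following is a minimal, illustrative PyTorch sketch of the SliceGAN concept, not the actual SliceGAN implementation used in this study: a 3D generator produces a candidate volume, and a 2D discriminator, trained on 2D images such as the BSE sections, scores slices taken along each of the three axes. All network sizes, the latent dimensions, and the helper names (Generator3D, Discriminator2D, slices_along_axis) are illustrative assumptions.

import torch
import torch.nn as nn

LATENT_CH = 32  # assumed latent channel count (illustrative)

class Generator3D(nn.Module):
    """Maps a spatial latent tensor to a (0, 1)-valued 3D volume (thresholded later to a binary pore map)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(LATENT_CH, 64, 4, stride=2, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

class Discriminator2D(nn.Module):
    """Scores individual 2D slices, i.e. images of the same kind as the 2D training sections."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4),  # patch-level real/fake scores
        )

    def forward(self, x):
        return self.net(x)

def slices_along_axis(volume, axis):
    """Flatten all 2D slices of an (N, C, D, H, W) volume taken along one spatial axis (0, 1, or 2)."""
    vol = volume.movedim(axis + 2, 2)              # bring the chosen spatial axis forward: (N, C, S, H', W')
    vol = vol.permute(0, 2, 1, 3, 4).contiguous()  # (N, S, C, H', W')
    return vol.flatten(0, 1)                       # every slice becomes an independent 2D "image"

# During training, the generator is pushed to make every slice, in every
# orientation, statistically resemble the real 2D sections.
G, D = Generator3D(), Discriminator2D()
z = torch.randn(2, LATENT_CH, 4, 4, 4)   # spatial latent; a 4^3 grid grows to a 32^3 volume here
fake_volume = G(z)                        # shape (2, 1, 32, 32, 32)
for axis in range(3):                     # x-, y-, and z-oriented slices
    fake_scores = D(slices_along_axis(fake_volume, axis))

Because only 2D discriminators are required, such a model can be trained on thin-section images alone; separate discriminators per orientation, the adversarial loss, and the exact architectures used by SliceGAN are omitted from this sketch for brevity.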

Our trained model accurately reconstructs complex 3D microstructural features of porous media by capturing the underlying (micro-)structural and morphological properties contained in the original 2D sample thin sections. To demonstrate its effectiveness, we conducted a comparative analysis between the generated 3D reconstructions and real sample datasets by evaluating morphological properties (volume fraction, surface area, equivalent diameter, pore orientations, etc.) as well as a widely used SMD, the two-point correlation function S2(r). The resulting reconstructions are virtually indistinguishable, both visually and statistically, from the real sample. Our research paves the way for quickly and accurately describing complex heterogeneous media for the prediction of transport processes relevant to, for example, carbon and hydrogen storage and extraction.
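For reference, below is a small illustrative NumPy sketch of one standard way to estimate the two-point correlation function S2(r) of a binary (pore = 1, solid = 0) volume via FFT-based autocorrelation. The exact estimator, lag range, and boundary treatment used in the study are not specified in this abstract; periodic boundaries are assumed here, and the helper name two_point_correlation is hypothetical.

import numpy as np

def two_point_correlation(volume: np.ndarray, max_r: int) -> np.ndarray:
    """Radially averaged S2(r): probability that two points separated by distance r both lie in the pore phase."""
    phase = volume.astype(np.float64)
    n = phase.size
    # Autocorrelation via the Wiener-Khinchin theorem (assumes periodic boundaries).
    f = np.fft.fftn(phase)
    autocorr = np.fft.ifftn(f * np.conj(f)).real / n
    # Radially average over all lag vectors whose rounded length equals r.
    grids = np.meshgrid(*[np.fft.fftfreq(s, d=1.0 / s) for s in phase.shape], indexing="ij")
    r = np.sqrt(sum(g ** 2 for g in grids)).round().astype(int)
    return np.array([autocorr[r == k].mean() for k in range(max_r)])

# Example with a stand-in binary volume (a generated reconstruction would be used in practice).
rng = np.random.default_rng(0)
synthetic = (rng.random((64, 64, 64)) < 0.2).astype(np.uint8)
s2 = two_point_correlation(synthetic, max_r=20)
print(s2[0])  # ~0.2: S2(0) recovers the pore volume fraction

Comparing such S2(r) curves, alongside the morphological properties listed above, for generated and real volumes provides the kind of statistical fingerprint used to judge whether a reconstruction is representative.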

How to cite: Vogel, H., Amiri, H., Arias, A., Plümper, O., and Ohl, M.: 3-D Reconstructions of Porous Media from 2-D input via Generative Adversarial Networks (GANs), EGU General Assembly 2023, Vienna, Austria, 24–28 Apr 2023, EGU23-15241, https://doi.org/10.5194/egusphere-egu23-15241, 2023.