- 1Geological Modelling Laboratory, Federal University of Pampa, Brazil
- 2Petróleo Brasileiro S.A. (Petrobras), Brazil
- 3Geosciences Institute, Federal University of Rio Grande do Sul, Brazil
3D point clouds of outcrops are digital representations of rock exposures used for geological surveying. These datasets often have high spatial density, up to a thousand points per square metre. By integrating georeferenced data into the 3D point cloud and applying remote sensing interpretation techniques, geoscientists can extract geological features and build 3D models. These models enable the integration of various types of georeferenced datasets, such as compositional, mineralogical, petrographic, structural, multi- and hyperspectral, geophysical, and petrophysical data, in 1D, 2D, or 3D formats. However, manual interpretation of 3D point clouds remains labour-intensive, non-reproducible, and prone to human bias. To reduce the amount of manual labour, convolutional neural networks have been applied to segment the images used to build the 3D models, trained on a few labelled subsets. This work used a U-Net encoder-decoder network architecture to segment images of sedimentary facies in a reservoir analogue outcrop. The datasets vary in size from 500 to 1000 images at 40 MP resolution and in number of facies from 2 to 10. Different data processing pipelines were tested, including resizing and slicing the images to satisfy memory constraints. Approximately 5-10% of the images in each dataset were labelled by an expert interpreter, with half used for training and half for testing the model, yielding an overall accuracy of 70-85%. The model was then retrained on the full labelled set and applied to the remaining unlabelled images. The final segmented outputs were processed through a photogrammetry pipeline to generate classified 3D point clouds, capturing the spatial distribution of architectural elements within the outcrop. This workflow reduced manual labour by approximately 90% while maintaining high accuracy in the result.
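The abstract mentions slicing the 40 MP images into smaller pieces to satisfy memory constraints before U-Net segmentation. A minimal sketch of one way such a tiling step could work is shown below; the tile size, zero-padding scheme, and function names are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def slice_into_tiles(image, tile_size):
    # Zero-pad so height and width divide evenly, then cut into
    # non-overlapping square tiles (assumed scheme, for illustration).
    h, w = image.shape[:2]
    pad_h = (-h) % tile_size
    pad_w = (-w) % tile_size
    pad_widths = ((0, pad_h), (0, pad_w)) + ((0, 0),) * (image.ndim - 2)
    padded = np.pad(image, pad_widths)
    tiles = []
    for y in range(0, padded.shape[0], tile_size):
        for x in range(0, padded.shape[1], tile_size):
            tiles.append(padded[y:y + tile_size, x:x + tile_size])
    return tiles, padded.shape

def reassemble(tiles, padded_shape, orig_shape, tile_size):
    # Stitch segmented tiles back in row-major order and crop the
    # padding off, recovering the original image dimensions.
    out = np.zeros(padded_shape, dtype=tiles[0].dtype)
    i = 0
    for y in range(0, padded_shape[0], tile_size):
        for x in range(0, padded_shape[1], tile_size):
            out[y:y + tile_size, x:x + tile_size] = tiles[i]
            i += 1
    return out[:orig_shape[0], :orig_shape[1]]
```

In such a scheme, each tile would be segmented independently by the network and the predicted label maps stitched back with the same indexing, so the per-pixel classes can then be projected onto the point cloud by the photogrammetry pipeline.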
How to cite: Gonçalves, Í., de Souza, E., Guadagnin, F., Roemers-Oliveira, E., Machado, E., Rangel, G., Freccia, A. C., Toledo, J., Schaffer, G., and Scherer, C.: Point cloud segmentation of sedimentary facies in outcrops with convolutional neural networks, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-8405, https://doi.org/10.5194/egusphere-egu26-8405, 2026.