ESSI3.4 | Open source software tools in geosciences
Convener: George P. Petropoulos | Co-conveners: Ionut Cosmin Sandric (ECS), Spyridon E. Detsikas (ECS), Prashant Kumar Srivastava, Daniela Fuzzo (ECS)
PICO | Mon, 28 Apr, 08:30–10:15 (CEST) | PICO spot 4
In recent decades, advances in geoinformation technology have played an increasingly important role in determining the various parameters that characterize the Earth's environment. These technologies, often combined with conventional field surveying, spatial data analysis methods, and/or simulation process models, provide efficient means for monitoring and understanding Earth's environment in a cost-effective and systematic manner. This session invites contributions focusing on modern open-source software tools developed to facilitate the analysis of mainly geospatial data in any branch of geosciences, for the purpose of better understanding Earth's natural environment. We encourage the contribution of any kind of open-source tool, including those built on top of globally used commercial GIS solutions. Potential topics for the session include software tools developed for displaying, processing and analysing geospatial data, and modern cloud webGIS platforms and services used for geographical data analysis and cartographic purposes. We also welcome contributions presenting tools that make use of parallel processing on high-performance computers (HPC) and graphics processing units (GPUs), as well as simulation process models applied in any field of geosciences.

PICO: Mon, 28 Apr | PICO spot 4

PICO presentations are given in a hybrid format supported by a Zoom meeting featuring on-site and virtual presentations. The button to access the Zoom meeting appears just before the time block starts.
Chairpersons: Ionut Cosmin Sandric, George P. Petropoulos, Spyridon E. Detsikas
08:30–08:32 | PICO4.1 | EGU25-74 | ECS | On-site presentation
Julien Perret, Mark W. Jessell, and Eliott Bétend

Digital geological mapping has progressed significantly with the advent of commercial GIS, GPS technologies, and portable devices over the past three decades. However, many software tools and mobile apps that enhance field data collection remain proprietary and specific to particular mapping projects or organisations, which limits their integration and sharing within the geoscientific community. This presentation will introduce GEOL-QMAPS, an open-source, QGIS-based solution designed for flexible and harmonised digital geological mapping, developed as part of the West African eXploration Initiative 4. GEOL-QMAPS includes a QGIS field data entry template and a custom QGIS plugin, both available in open-access online repositories. It supports fieldwork on tablets or mobile devices via the QField app, integrates with desktop QGIS, and facilitates the creation of new and legacy field databases according to user-defined guidelines. The presentation will cover the general workflow for implementing GEOL-QMAPS and provide examples demonstrating its effectiveness both in the field and in office settings.

How to cite: Perret, J., Jessell, M. W., and Bétend, E.: An open-source, QGIS-based solution for digital geological mapping: GEOL-QMAPS, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-74, https://doi.org/10.5194/egusphere-egu25-74, 2025.

08:32–08:34 | PICO4.2 | EGU25-507 | ECS | On-site presentation
Yingfeng Guo

Spatial heterogeneity (SH), known as the second law of geography, has been a topic of extensive research. One common approach to analyzing SH involves comparing variances between and within strata to assess the impact of independent variables on the dependent variable. This method, known as spatial stratified heterogeneity (SSH) analysis, is often performed using the geographical detector model. Over time, several optimized versions of geographical detectors have emerged, focusing on discretizing single or dual variables. However, methods for discretizing three or more variables are still limited to the interaction detector, and research on spatial scale effects has mainly focused on single factors. To overcome these limitations, an optimal multivariate-stratification geographical detector (OMGD) model has been developed. This model includes two additional modules: factor discretization optimization and a scale detector. Factor discretization optimization uses five univariate and five cluster-based stratification methods to automatically explore the optimal discretization scheme for single factors or multi-factor combinations based on the Geodetector q statistic. The scale detector can then iterate through various spatial scales to identify the optimal spatial scale for SSH analysis. The OMGD model has been tested with multiple case datasets to validate its applicability and robustness. The findings demonstrate that the OMGD model can effectively extract the main attributes of single factors and multi-factor combinations, providing a better explanation for geographical phenomena. It can also automatically determine the best spatial scale for SSH analysis, thereby enhancing the overall capability of conducting SSH analysis with the geographical detector.
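The heart of the geographical detector is compact: the q statistic compares within-stratum variance to total variance. A minimal sketch of that computation (a generic illustration, not the OMGD code):

```python
import numpy as np

def q_statistic(y, strata):
    """Geodetector q statistic: the share of the variance of y that a
    stratification explains (0 = none, 1 = all).
    q = 1 - sum_h(N_h * s_h^2) / (N * s^2), using population variances."""
    y, strata = np.asarray(y, dtype=float), np.asarray(strata)
    within = sum(y[strata == h].size * y[strata == h].var()
                 for h in np.unique(strata))
    return 1.0 - within / (y.size * y.var())

# A stratification that separates the two value groups perfectly:
print(q_statistic([1.0, 1.0, 5.0, 5.0], [0, 0, 1, 1]))  # → 1.0
```

An optimizer such as the one described above would then search over discretization schemes (and, with the scale detector, over spatial scales) for the stratification that maximizes q.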

How to cite: Guo, Y.: An optimal multivariate-stratification geographical detector model for revealing the impact of multi-factor combinations on the dependent variable, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-507, https://doi.org/10.5194/egusphere-egu25-507, 2025.

08:34–08:36 | PICO4.3 | EGU25-1311 | On-site presentation
David Youssefi, Valentine Bellet, Alexandre Constantin, Dimitri Lallement, Emmanuel Dubois, and Yannick Tanguy
After several years of development, mid-2025 should see the launch of the CO3D (Optical Constellation in 3D) mission [1]. The CO3D project is a public-private partnership between CNES (the French space agency) and AIRBUS (aerospace company).

The mission aims to launch pairs of optical satellites to photogrammetrically reconstruct the Digital Surface Model (DSM) of the Earth's land surface. A notable innovation is the ability to manage stereo acquisition in synchronous mode for capturing moving elements.

As part of the CO3D program, CNES is developing the 2D and 3D product processing chains integrated into the mission's ground segment, alongside an Image Calibration Center (ICC) for radiometric and geometric calibration. These systems leverage a suite of open-source tools created by CNES teams. CNES chooses to release these tools prior to the mission's launch to gather user feedback and improve the quality of final CO3D products.

  • CARS [2] is the Multiview Stereo (MVS) framework. It produces DSMs from satellite images acquired from different view angles.
  • PANDORA [3] is the stereo matching framework. It combines switchable pixel-level similarity measures with regularizers. The CARS pipeline integrates PANDORA.
  • BULLDOZER [4] removes above-ground elements (e.g., trees, buildings) from DSMs generated by CARS to produce Digital Terrain Models (DTMs).
  • XDEM [5] evaluates and validates the 3D quality of the generated models (i.e., the CARS DSM and the Bulldozer DTM).
  • SLURP [6] produces land cover masks from very high-resolution (VHR) images. The CARS pipeline integrates these masks to generate sharper 3D reconstructions.

The CO3D mission represents a significant step forward in Earth observation, offering innovative tools for producing accurate digital surface models. By engaging with the user community early, CNES aims to ensure that the mission delivers results that fit user needs.
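To make the relationship between these products concrete: the height of above-ground objects is simply the difference between a DSM and its bare-earth DTM, and a robust statistic such as the NMAD is a common way to score elevation differences when validating DEMs. A toy numpy sketch with hypothetical grids (not the CARS, Bulldozer, or xdem APIs):

```python
import numpy as np

# Hypothetical toy grids standing in for a CARS DSM and a Bulldozer DTM.
dsm = np.array([[10.0, 12.0], [10.5, 15.0]])  # surface incl. trees/buildings
dtm = np.array([[10.0, 10.0], [10.0, 10.0]])  # bare-earth terrain
ndsm = dsm - dtm                              # above-ground object heights

def nmad(dh):
    """Normalized median absolute deviation (1.4826 * MAD) of elevation
    differences: a robust spread estimate widely used in DEM validation."""
    dh = np.asarray(dh, dtype=float)
    return 1.4826 * np.median(np.abs(dh - np.median(dh)))

errors = np.array([0.10, -0.05, 0.20, 0.00])  # DSM minus reference heights (m)
print(ndsm.max())      # tallest object in the toy scene → 5.0
print(nmad(errors))    # robust spread of the toy elevation errors
```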

[1] Lebegue, L. et al. (2024). CO3D Products Qualification Forecast. IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, 8555-8559.
[2] Youssefi, D. et al. (2020). CARS: A Photogrammetry Pipeline Using Dask Graphs to Construct A Global 3D Model. IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium, 453-456.
[3] Cournet, M. et al. (2020). Ground-truth generation and disparity estimation for optical satellite imagery. ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences.
[4] Lallement, D. et al. (2023). Bulldozer, a free open source scalable software for DTM extraction. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences.
[5] Hugonnet, R. et al. (2022). Uncertainty Analysis of Digital Elevation Models by Spatial Inference From Stable Terrain. IEEE Journal Selected Topics in Applied Earth Observations and Remote Sensing, 15, 6456–6472.
[6] Tanguy, Y. et al. (2024). Smart Land Use Masks: A Simple and Robust Approach to Produce Low/High Vegetation Masks from a Single High Resolution Satellite Image. IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, 4164-4168.

How to cite: Youssefi, D., Bellet, V., Constantin, A., Lallement, D., Dubois, E., and Tanguy, Y.: Open-source 3D tools developed for the CO3D mission and beyond, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-1311, https://doi.org/10.5194/egusphere-egu25-1311, 2025.

08:36–08:38 | PICO4.4 | EGU25-8703 | ECS | On-site presentation
Juan Carbone, Jerimar Vasquez-Rojas, Alfredo Izquierdo, Javier Benavente, Jesús Gómez-Enri, Tomás Fernández-Montblanc, Flávio Martins, William D. Cabos Narvaez, Carlos J. González, Carlos Yagüe, Carlos Román-Cascón, and Oscar Alvarez

The OceanUCA project aims to develop an operational platform that enhances the existing tools created by the University of Cádiz, integrating new observational systems and high-resolution numerical models, both atmospheric and hydrodynamic, to achieve maximum resolution along the Andalusian coast (Spain). This platform utilizes open-source software and WebGIS services, enabling flexible and accessible geospatial data visualization and analysis for a wide range of stakeholders.

By enhancing computational resources with state-of-the-art open-source GIS tools, the platform provides solutions for addressing coastal environmental challenges such as oil spills, marine heatwaves, and extreme event tracking. The system integrates output from high-resolution oceanographic models, the atmospheric Weather Research and Forecasting (WRF) model, and observational data from various sources, including meteorological weather stations, surface flux instruments, oceanographic buoys, satellite imagery, high-frequency coastal radars, and oceanographic and atmospheric field campaigns. These data are presented through an interactive GIS interface, facilitating real-time model evaluation and decision-making.

The project also leverages high-performance computing and graphic processing units to enable the processing of large datasets, which improves the scalability and resolution of the forecasts. The platform aims to enhance environmental protection, support conservation efforts, and provide early-warning products that contribute to effective coastal management. Through its open-source approach, the system fosters collaboration and knowledge transfer with stakeholders at both national and international levels, enabling the wider community to access, engage with, and utilize coastal and atmospheric data for informed decision-making.

How to cite: Carbone, J., Vasquez-Rojas, J., Izquierdo, A., Benavente, J., Gómez-Enri, J., Fernández-Montblanc, T., Martins, F., Cabos Narvaez, W. D., González, C. J., Yagüe, C., Román-Cascón, C., and Alvarez, O.: OceanUCA: Enhancing Coastal Observation and Forecasting in Andalucía (Spain) through Data Visualisation and Communication, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-8703, https://doi.org/10.5194/egusphere-egu25-8703, 2025.

08:38–08:40 | PICO4.5 | EGU25-9616 | On-site presentation
Carlos Peña, Razvan Aguridan, Xavier Yepes, and Mario Acosta

The continuous increase in resolution and complexity of Earth System Models (ESMs), aimed at improving the accuracy of simulations, significantly increases data handling demands. For this reason, scalable and efficient I/O solutions are critical to ensure that data storage, processing, and transfer do not become bottlenecks that hinder overall simulation throughput. Optimising this workflow is essential not only for improving performance but also for reducing the energy footprint of large-scale Earth system simulations.

Many ESMs, including the IFS-NEMO coupled model used in our case study within the Destination Earth Climate Digital Twin project, adopt a client-server I/O architecture to address these challenges. In this scheme, the model sends generated data to a server that handles complex post-processing tasks, such as interpolation, encoding, and data writing, while the model continues simulating the next time steps. However, this approach also requires continuous optimisation to ensure the overall efficiency of the output pipeline.

To achieve this, detailed I/O profiling was conducted, with a focus on the MultIO library, which manages the output pipeline, including ocean server creation and ocean data transport. On the client side, frequent and costly access to metadata was identified as a major source of I/O overhead, while on the server side, high interpolation times were observed, prompting further analysis and targeted optimisations that achieved a sixfold speedup in ocean interpolation. Additional pipeline actions were reviewed and optimised, contributing to a more efficient and scalable output workflow. Combined with tests to optimise server resource allocation, these efforts resulted in an overall efficiency improvement of up to 6.7% in simulation performance.
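The profiling pattern itself is simple to reproduce: time each action of the output pipeline separately and rank the totals to find optimisation targets. A generic Python sketch of that pattern (illustrative only; this is not the MultIO API or its tooling):

```python
import time
from collections import defaultdict
from contextlib import contextmanager

timings = defaultdict(float)

@contextmanager
def profiled(action):
    """Accumulate wall-clock time per pipeline action."""
    t0 = time.perf_counter()
    try:
        yield
    finally:
        timings[action] += time.perf_counter() - t0

# Stand-ins for pipeline actions such as interpolation, encoding, writing:
with profiled("interpolate"):
    sum(i * i for i in range(100_000))
with profiled("encode"):
    sum(i * i for i in range(10_000))

# Rank actions by cost to identify where to optimise first.
print(sorted(timings, key=timings.get, reverse=True))
```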

How to cite: Peña, C., Aguridan, R., Yepes, X., and Acosta, M.: I/O Profiling and Optimisation to Improve Energy Efficiency: Insights from the Climate Digital Twin project, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-9616, https://doi.org/10.5194/egusphere-egu25-9616, 2025.

08:40–08:50 | PICO4.6 | EGU25-12051 | solicited | On-site presentation
Fran Raga, Mark Bentley, Daniela Coia, Ruben Docasal, Emmanuel Grotheer, David Heather, Tania Lim, Joana Oliveira, Jose Osinde, Thomas Cornet, Jaime Saiz, and Gemma Ramos

The European Space Agency (ESA) integrates open-source software to manage, visualize, and distribute planetary data, focusing on Mars and supporting global scientific collaboration through the Planetary Science Archive (PSA). This advanced infrastructure combines cutting-edge tools and technologies to enhance data accessibility and promote international research.

Tools and Technologies

  • OpenLayers: Powers 2D interactive maps, offering scientists an intuitive interface for exploring planetary geospatial data.
  • GeoServer: Shares spatial data via standard protocols like WMS, serving base maps and integrating advanced caching techniques.
  • Three.js: Facilitates 3D visualization of celestial objects, such as comets and asteroids, providing dynamic exploration capabilities.
  • PostgreSQL and PostGIS: Stores and manages complex geospatial datasets, enabling advanced spatial queries and integration with other GIS tools.

Collaborative Efforts

ESA contributes to open-source projects to solve unique planetary data challenges:

  • Astroquery: A Python library for accessing astronomical databases, ensuring efficient integration of planetary mission data into research workflows.
  • Antimeridian: Addresses geospatial data crossing the 180° longitude line, ensuring accurate planetary mapping.

PSA Interface and GIS Architecture

The PSA’s new interface integrates 2D and 3D visualizations, data filtering, and real-time access to information. Scientists can overlay geological, topographical, and spectral data layers, analyze specific regions, and download curated datasets. The GIS architecture combines:

  • GeoServer: Distributes optimized base maps of planetary surfaces.
  • OpenLayers and Three.js: Provides a seamless 2D and 3D visualization experience.
  • PostgreSQL/PostGIS: Manages and analyzes large geospatial datasets.
  • Astroquery and Antimeridian: Enhances data accessibility and accuracy.

Benefits for the Scientific Community

This open-source approach promotes transparent, collaborative research. Tools like Antimeridian address specific planetary data issues, enabling continuous and precise geospatial representation. The PSA allows researchers to cross-reference data from various missions and instruments efficiently, accelerating planetary science advancements.

In conclusion, ESA’s adoption of open-source tools and robust GIS architecture provides an accessible, powerful platform for planetary research, fostering innovation and collaboration across the global scientific community.
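To make the antimeridian problem concrete: a footprint spanning 170°E to 170°W crosses the 180° line and must be split before most GIS tools will draw or query it correctly. A pure-Python sketch for a simple eastward longitude span (the `antimeridian` package handles the general polygon case):

```python
def to_signed(lon):
    """Map a longitude in [0, 360) back to (-180, 180]."""
    lon %= 360.0
    return lon - 360.0 if lon > 180.0 else lon

def split_at_antimeridian(lon_start, lon_end):
    """Split an eastward longitude span into pieces that do not cross
    the 180 degree line."""
    # Work in [0, 360) so an eastward span is a simple interval.
    a, b = lon_start % 360.0, lon_end % 360.0
    if b <= a:
        b += 360.0
    if a < 180.0 <= b:  # the span crosses the antimeridian
        return [(to_signed(a), 180.0), (-180.0, to_signed(b))]
    return [(to_signed(a), to_signed(b))]

print(split_at_antimeridian(170, -170))  # → [(170.0, 180.0), (-180.0, -170.0)]
print(split_at_antimeridian(10, 20))     # → [(10.0, 20.0)]
```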

How to cite: Raga, F., Bentley, M., Coia, D., Docasal, R., Grotheer, E., Heather, D., Lim, T., Oliveira, J., Osinde, J., Cornet, T., Saiz, J., and Ramos, G.: Use of Open Source Software in the ESA Planetary Science Archive, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-12051, https://doi.org/10.5194/egusphere-egu25-12051, 2025.

08:50–08:52 | PICO4.7 | EGU25-15828 | ECS | On-site presentation
Harikesh Singh, Li-minn Ang, and Sanjeev Kumar Srivastava

This study develops a Cellular Automaton (CA) model to predict forest fire spread in the Sunshine Coast region, utilizing diverse meteorological and environmental datasets. Key variables such as temperature, wind speed, rainfall, soil moisture, solar radiation, vegetation type, slope, elevation, and proximity to roads and streams are integrated to simulate fire dynamics with high spatial resolution. Historical fire occurrence data are used for model calibration and validation, ensuring accuracy in replicating fire spread patterns. The CA model operates through iterative cell-based transitions, governed by rules reflecting the complex interplay of environmental and meteorological factors. Results highlight the significant influence of wind, vegetation type, and topography on fire behaviour, with simulations effectively capturing spatial variability and spread dynamics. The findings underscore the model's potential as a robust, scalable tool for wildfire management, enabling data-driven planning for prescribed burns and risk mitigation. This research offers valuable insights into forest fire behaviour, contributing to sustainable ecosystem management and resilience planning in subtropical regions such as southeast Queensland, Australia.
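The cell-based transition mechanism can be sketched in a few lines. The toy CA below is not the presented model (which weighs slope, moisture, vegetation, and more); it only shows the basic rule: burning cells probabilistically ignite neighbouring fuel, with a boost in the wind direction:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(grid, p_spread=0.6, wind=(0, 1)):
    """One CA update: 0 = unburnt fuel, 1 = burning, 2 = burnt.
    Burning cells ignite their 4-neighbours with probability p_spread,
    boosted in the wind direction."""
    new = grid.copy()
    for r, c in np.argwhere(grid == 1):
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]
                    and grid[rr, cc] == 0):
                p = p_spread * (1.5 if (dr, dc) == wind else 1.0)
                if rng.random() < min(p, 1.0):
                    new[rr, cc] = 1
        new[r, c] = 2  # a burning cell burns out after one step
    return new

grid = np.zeros((20, 20), dtype=int)
grid[10, 10] = 1              # single ignition point
for _ in range(8):
    grid = step(grid)
print((grid == 2).sum())      # burnt area after eight steps
```

Real variables (fuel type, slope, moisture) would enter through per-cell ignition probabilities rather than a single scalar p_spread.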

How to cite: Singh, H., Ang, L., and Srivastava, S. K.: Modelling Forest Fire Spread in the SEQ Region Using Meteorological and Environmental Datasets: A Cellular Automaton Approach, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-15828, https://doi.org/10.5194/egusphere-egu25-15828, 2025.

08:52–08:54 | PICO4.8 | EGU25-17728 | On-site presentation
Kirk Martinez, John Cupitt, Lovell Fuller, and Kleis Auke Wolthuizen

Libvips is an open-source image processing library originally created for museum imaging through a range of EU-funded projects from 1989 onwards. The challenge of processing images much larger than available RAM, while coping with multi-band, multi-format pixels, led to an extremely efficient software design; the library also makes automatic use of multi-core CPUs. Today libvips is used by many websites for its speed and low memory use (from Wikipedia to Shopify and Booking.com), and it is fuzz-tested by Google's OSS-Fuzz, as it is classed as essential Internet software. It has been used in many museum imaging projects to stitch X-ray images and process massive scans (e.g. the Battle of Murten 1.6-terapixel scan). Tiled pyramidal TIFF images made for multi-resolution web browsing are easily created and handled by libvips and its viewer vipsdisp, and a spreadsheet-like GUI called nip is useful for experimenting with image processing. These features make it a useful tool for processing images in the Earth sciences, especially at sizes above 32 GiB, where most desktop or laptop computers struggle with typical software. The library can be used from Python (pyvips), which makes it easy to combine with other tools, and also from C, C++, Ruby and JavaScript.
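The pyramid arithmetic behind those tiled multi-resolution files illustrates why the format scales: each level halves the image until it fits in a single tile, so a viewer never reads more than a handful of tiles. A sketch under one common convention (exact level counts differ between formats such as DeepZoom and pyramidal TIFF):

```python
import math

def pyramid_levels(width, height, tile=256):
    """Number of levels in a tiled image pyramid: halve the image
    until the larger dimension fits in one tile."""
    return max(math.ceil(math.log2(max(width, height) / tile)), 0) + 1

def tiles_at_level(width, height, level, tile=256):
    """Tile grid size at a pyramid level (level 0 = full resolution)."""
    w, h = math.ceil(width / 2**level), math.ceil(height / 2**level)
    return math.ceil(w / tile), math.ceil(h / tile)

# A 1.6-terapixel scan, roughly 1,600,000 x 1,000,000 pixels:
print(pyramid_levels(1_600_000, 1_000_000))     # → 14
print(tiles_at_level(1_600_000, 1_000_000, 0))  # → (6250, 3907)
```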

How to cite: Martinez, K., Cupitt, J., Fuller, L., and Wolthuizen, K. A.: The libvips image processing library, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-17728, https://doi.org/10.5194/egusphere-egu25-17728, 2025.

08:54–08:56 | PICO4.9 | EGU25-18367 | ECS | On-site presentation
Leonel Corado and Sérgio Godinho

Accurately monitoring terrestrial ecosystems is essential for addressing global environmental challenges, including deforestation, biodiversity loss, and climate change. NASA's Global Ecosystem Dynamics Investigation (GEDI) mission has revolutionized ecosystem monitoring by providing near-global, high-resolution data on vegetation structure and terrain elevation using spaceborne LiDAR. However, spaceborne LiDAR data often require correction due to various sources of error, such as instrument inaccuracies, atmospheric conditions (e.g., dense cloud cover), and spacecraft platform instability. A primary challenge in utilizing GEDI data is its horizontal geolocation error, which has an accuracy of approximately 10 meters for calibrated final products (Version 2). These errors, particularly in heterogeneous landscapes, can significantly compromise the accuracy of canopy height and terrain elevation estimates.

To address these challenges, the scientific community has developed methods to enhance GEDI’s geolocation accuracy. Notably, the GEDI Simulator tool, created by the GEDI Science Team, applies orbit-level systematic corrections using small-footprint ALS data. This approach assumes a uniform systematic error across the orbit and determines a single coordinate offset to correct horizontal deviations, which can often fail in complex and heterogeneous landscapes. Consequently, alternative methods, such as beam-level corrections (calculating an independent offset for each beam track) and footprint-level corrections (computing individual offsets for each footprint), have emerged. Despite their potential, these methods, including the GEDI Simulator, face practical limitations such as complexity, computational inefficiency, and a lack of user-friendly interfaces, restricting their broader adoption for remote sensing applications.

To overcome these limitations, we introduce GEDICorrect, an open-source Python framework for precise beam and/or footprint-level geolocation correction, designed with simplicity and accessibility in mind. GEDICorrect integrates multiple methods, criteria, and metrics, including waveform matching, terrain matching, and relative height (RH) profile matching, to achieve refined geolocation accuracy at the orbit, beam, or footprint levels. By leveraging advanced similarity metrics - such as Pearson and Spearman waveform correlations, Curve Root Sum Squared Differential Area (CRSSDA), and Kullback-Leibler divergence - GEDICorrect ensures precise alignment between GEDI measurements and simulated data.

Additionally, GEDICorrect incorporates parallel processing strategies using Python’s multiprocessing capabilities, enabling efficient handling of large-scale GEDI and ALS datasets. This scalability makes the framework practical for global-scale applications while maintaining accuracy and computational efficiency. By addressing critical barriers in geolocation correction with an open-source, user-friendly design, this framework enables a better assessment of canopy structure that can be applied to a wide range of fields, from advancing our understanding of carbon sequestration to supporting more informed planning and conservation efforts.
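Two of the similarity metrics named above are straightforward to state. A generic numpy sketch (not the GEDICorrect code) of Pearson correlation and KL divergence between an observed and a simulated waveform; the correction keeps the candidate offset whose simulated waveform scores best:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two waveforms."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

def kl_divergence(p, q, eps=1e-12):
    """KL divergence after normalizing both waveforms to unit area."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

observed = [0.1, 0.5, 1.0, 0.5, 0.1]   # toy GEDI waveform
simulated = [0.1, 0.5, 1.0, 0.5, 0.1]  # toy ALS-simulated waveform
print(pearson(observed, simulated))        # identical shapes → 1.0
print(kl_divergence(observed, simulated))  # identical shapes → 0.0
```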

How to cite: Corado, L. and Godinho, S.: GEDICorrect: A Python Framework for GEDI Geolocation Correction Using Multiple Criteria and Parallel Processing Methods, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-18367, https://doi.org/10.5194/egusphere-egu25-18367, 2025.

08:56–08:58 | PICO4.10 | EGU25-18460 | ECS | On-site presentation
Rommel Quintanilla, Alexey Medvedev, Xavier Yepes-Arbós, Razvan Aguridan, and Mario C. Acosta

The constant growth in computational demands of scientific applications, combined with energy efficiency requirements, makes GPU acceleration an important factor to consider in high-performance computing environments. Therefore, GPU porting has become essential for efficiently utilizing modern heterogeneous systems that currently provide both multicore CPUs and GPU-accelerated partitions.

While small, relatively new projects might be candidates for complete rewrites in low-level GPU languages like CUDA or HIP, this approach becomes impractical for larger and more complex codebases. A more convenient alternative is the directive-based approach, which allows developers to maintain their original C++ or Fortran code while adding OpenACC/OpenMP directives from which the compiler generates energy-efficient GPU code.

However, this seemingly straightforward method often presents significant challenges. For instance, dealing with code that employs layouts that are not well-suited for GPU architectures, such as deeply nested loop structures or complex memory access patterns that result in suboptimal performance, might lead to the need to reorganize the initial code after all.

In this work, we present a systematic approach to performing the GPU code transition through compiler directives. This several-step incremental process seeks to reach a significant performance and energy consumption improvement while preserving code maintainability, portability, and output accuracy. We demonstrate the effectiveness of our approach through a detailed case study of our ongoing project porting the NEMO ocean model, which represents an interesting example of a complex scientific Fortran code with numerous common computational patterns. Finally, we discuss the experiences, limitations, and trade-offs encountered during this process, providing useful insights for other porting efforts that could face similar GPU migration challenges.

How to cite: Quintanilla, R., Medvedev, A., Yepes-Arbós, X., Aguridan, R., and Acosta, M. C.: A systematic methodology for directive-based GPU porting: NEMO ocean model as a case study, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-18460, https://doi.org/10.5194/egusphere-egu25-18460, 2025.

08:58–09:00 | PICO4.11 | EGU25-20297 | ECS | On-site presentation
Vaibhav Tyagi and Saurabh Das

The Doppler Weather Radar (DWR) plays an important role in providing valuable 3D observations of precipitation systems. The advent of radar polarimetry enhances radar capabilities by providing detailed precipitation target characteristics like shape, size, etc. However, the complexities associated with radar data processing pose several challenges to its effective and widespread use. To address these challenges in radar data handling and analyzing, several open-source Python modules have been developed to facilitate radar data processing and analysis, such as WRADLIB, PYART, PYCWR, etc. One such value addition to these open-source tools is an in-house developed Python Indian weather radar toolkit (PYIWR). The toolkit incorporates standard procedures for processing and visualizing polarimetric weather radar data, making it easier for radar users to work with raw radar data, mitigating various challenges due to different data structures and formats. The present work focuses on integrating a novel ground clutter mitigation algorithm developed into the PYIWR framework. The algorithm leverages the statistical properties of long-term radar observations to identify persistent ground clutter using a probabilistic clutter map. It has been extensively tested and evaluated using long-term data from the C-band Doppler Weather Radar at the Thumba Equatorial Rocket Launching Station (TERLS) in Thiruvananthapuram, Kerala, India, spanning 2017 to 2024. Quantitative evaluation of the clutter removal ratio demonstrates that the proposed technique outperforms existing methods, like standalone Gabella filter and fuzzy logic approaches, in mitigating persistent ground clutter, especially in complex terrain. The integration of this newly developed algorithm into the PYIWR framework significantly enhances its capabilities for radar data quality control, making it a more robust and effective tool for the radar user community.
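The statistical idea behind such a probabilistic clutter map can be sketched compactly: over many scans, genuine precipitation moves while ground clutter does not, so gates that echo in almost every scan are flagged. A generic numpy illustration (not the PYIWR implementation):

```python
import numpy as np

def clutter_map(scans, dbz_threshold=20.0, p_threshold=0.8):
    """Probabilistic clutter map from long-term radar observations:
    a gate is flagged as persistent ground clutter when it exceeds the
    reflectivity threshold in a large fraction of all scans."""
    scans = np.asarray(scans, dtype=float)            # (n_scans, rows, cols)
    occurrence = (scans > dbz_threshold).mean(axis=0)  # per-gate echo frequency
    return occurrence >= p_threshold                   # boolean clutter mask

# Toy stack: gate (0, 0) echoes in every scan (clutter), gate (1, 1) rarely.
scans = np.zeros((100, 2, 2))
scans[:, 0, 0] = 45.0    # persistent return, e.g. a nearby tower
scans[:5, 1, 1] = 45.0   # transient rain echo
print(clutter_map(scans))  # only gate (0, 0) is flagged
```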

How to cite: Tyagi, V. and Das, S.: Integration of Clutter Mitigation Algorithm into PYIWR Framework: A Python Toolkit for Analyzing Weather Radar Data, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-20297, https://doi.org/10.5194/egusphere-egu25-20297, 2025.

09:00–09:02 | PICO4.12 | EGU25-20575 | On-site presentation
Hamed Vaseghnia, Nestor Fernando Cardozo Diaz, and Enrico Riccardi

GeoSight is an open-source Python package developed for geospatial analysis, processing, and visualization.

GeoSight’s development has been shaped by ongoing collaboration between lecturers, students, and industry professionals. Students have contributed by using and improving the package while learning Python programming and object-oriented design and by addressing real-world geoscience challenges.

With a focus on accessible, structured and modular programming, the package is user-friendly, flexible, and easily extendable. It is distributed under a public license to encourage collaboration and widespread use.

Key features of GeoSight include 2D and 3D geospatial data visualization, contour and slope analysis, noise filtering, and the integration of satellite imagery into terrain models. Advanced capabilities, such as machine learning for classification and geological computations such as strike and dip measurements, may serve geoscientists, engineers, and urban planners. For efficient processing, GeoSight supports both GPU and CPU architectures.

We here present GeoSight's features, benefits, and limitations, discussing its modular design, its support for student learning, and its adaptability to changing geospatial standards. We plan to keep expanding its analytical tools and cross-platform compatibility, and to add further features such as real-time visualization and multiple data-source integration.
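As an example of the geological computations mentioned, strike and dip can be derived from a fitted plane's normal vector with a few lines of trigonometry. A generic sketch (a hypothetical function, not necessarily GeoSight's API), using x = east, y = north, z = up and the right-hand rule:

```python
import math

def strike_dip(nx, ny, nz):
    """Strike and dip (degrees) of a plane from its normal vector."""
    if nz < 0:                                   # make the normal point up
        nx, ny, nz = -nx, -ny, -nz
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    dip = math.degrees(math.acos(nz / norm))     # 0 = horizontal plane
    dip_dir = math.degrees(math.atan2(nx, ny)) % 360.0  # azimuth of steepest descent
    strike = (dip_dir - 90.0) % 360.0            # right-hand rule
    return strike, dip

# A plane dipping 45 degrees due east has upward normal (1, 0, 1):
print(strike_dip(1.0, 0.0, 1.0))  # strike ≈ 0° (north), dip ≈ 45°
```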

How to cite: Vaseghnia, H., Fernando Cardozo Diaz, N., and Riccardi, E.: GeoSight: An Open-Source library for Geospatial Analysis and Visualization, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-20575, https://doi.org/10.5194/egusphere-egu25-20575, 2025.

09:02–09:04 | PICO4.13 | EGU25-21310 | On-site presentation
Tamrat Belayneh

Recent advances in remote sensing and computer vision have reshaped how geospatial data is captured, visualized, and analyzed. Drones equipped with high-resolution cameras and sensors enable rapid, detailed surveys from multiple angles, providing near-real-time insights into complex environments. While photogrammetry-based workflows yield accurate 3D meshes, they often demand substantial computational power and lengthy processing times.

Emerging techniques such as Gaussian splatting and Neural Radiance Fields (NeRFs) offer compelling alternatives to traditional mesh-based methods. Gaussian splatting represents 3D scenes as point-based “splats” with mathematical distributions, enabling faster, photorealistic rendering. NeRFs employ neural networks to generate volumetric reconstructions from sparse image inputs, capturing intricate lighting and geometry with minimal manual intervention. Together, these methods reduce post-processing complexity and enhance visual fidelity.

In this session, we demonstrate novel applications of Gaussian splats and NeRFs within ArcGIS and discuss how these approaches can integrate with familiar mesh-based workflows. We also explore ways to extend existing 3D streaming standards, such as OGC’s Indexed 3D Scene Layers (I3S), to incorporate these emerging data types. Finally, we showcase real-world examples demonstrating how blending Gaussian splats and conventional meshes enables richly detailed, interactive visualizations at multiple scales. This convergence promises more efficient collaboration, cost-effective workflows, and deeper insights into rapidly evolving built and natural environments.

How to cite: Belayneh, T.: Next-Generation Geospatial Visualization: From Traditional Meshes to Gaussian Splats and NeRFs, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-21310, https://doi.org/10.5194/egusphere-egu25-21310, 2025.

09:04–10:15