EGU26-6977, updated on 14 Mar 2026
https://doi.org/10.5194/egusphere-egu26-6977
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
Oral | Monday, 04 May, 15:35–15:45 (CEST)
 
Room -2.92
LuNeRF: How Neural Radiance Fields Can Advance Very High Resolution Lunar Terrain Reconstruction
Chloé Thenoz1, Dawa Derksen2, Jean-Christophe Malapert2, and Frédéric Schmidt3
  • 1Magellium, Earth Observation, Ramonville-Saint-Agne, France (chloe.thenoz@magellium.fr)
  • 2Centre National d’Etudes Spatiales (CNES), Toulouse, France, ({dawa.derksen, jean-christophe.malapert}@cnes.fr)
  • 3Université Paris-Saclay, CNRS, GEOPS, Orsay, France (frederic.schmidt@universite-paris-saclay.fr)

Modeling the lunar terrain is a key challenge for lunar missions, with impacts on mission planning, resource planning, and the establishment of sustainable human bases on the Moon. Thanks to the Lunar Reconnaissance Orbiter (LRO) and its Narrow Angle Cameras (NACs), which acquire images at a spatial resolution of 50 cm, a large collection of images is now available. Despite this, automatically generating digital elevation models (DEMs) of the Moon remains a challenge. Classic methods such as multi-view stereovision or photoclinometry struggle with lunar specificities: large shadows and permanently shadowed regions (PSRs), the absence of an atmosphere, complex lighting conditions, and the homogeneity of the lunar surface texture.

In 2020, a new self-supervised neural-network-based method called Neural Radiance Fields (NeRF) was introduced and demonstrated outstanding 3D reconstruction capabilities from multi-view images. Recent advances have adapted the methodology to the challenging field of satellite imagery of the Earth, achieving results that compete with or even surpass classic methods. Some recent works have attempted to transfer these approaches to the Moon, but they either constrained their studies to simulated data or reused existing models.

In this work, we explore the potential of NeRF to learn the 3D shape of the lunar surface at very high resolution from LRO NAC data, supported by a coarse estimate of the ground derived from processed data acquired by LRO's laser altimeter, the Lunar Orbiter Laser Altimeter (LOLA). Our main contributions are the generation of a NeRF-ready LRO dataset over a lunar South Pole region, which we intend to share openly, and the development of a dedicated model coined LuNeRF. We demonstrate that, with adapted radiance modeling, LuNeRF can recover the geometry of small craters, as well as perform novel view synthesis and relighting tasks.
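To make the underlying mechanism concrete, the following is a minimal NumPy sketch of the standard NeRF volume-rendering step that such models rely on: along each camera ray, per-sample densities and colors are composited into one pixel color via transmittance weighting. This is an illustrative sketch of the generic NeRF formulation, not the authors' LuNeRF implementation; all function and variable names are hypothetical.

```python
import numpy as np

def volume_render(densities, colors, deltas):
    """NeRF-style compositing along a single ray.

    densities: (N,) non-negative volume density sigma at each sample
    colors:    (N, 3) radiance (RGB) predicted at each sample
    deltas:    (N,) distances between consecutive samples
    Returns the accumulated RGB color seen along the ray.
    """
    # Per-sample opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Each sample contributes in proportion to (transmittance * opacity)
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: one high-density "surface" sample dominates the ray,
# so the rendered color is close to that sample's gray radiance.
densities = np.array([0.0, 0.0, 50.0, 0.0])
colors = np.array([[1, 0, 0], [0, 1, 0], [0.3, 0.3, 0.3], [0, 0, 1]], float)
deltas = np.full(4, 0.1)
rgb = volume_render(densities, colors, deltas)
```

In a full NeRF, `densities` and `colors` come from an MLP queried at 3D points along the ray, and the geometry (here, a DEM) is recovered from where the density concentrates; a coarse LOLA-derived elevation prior can bound where those samples are placed.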

How to cite: Thenoz, C., Derksen, D., Malapert, J.-C., and Schmidt, F.: LuNeRF: How Neural Radiance Fields Can Advance Very High Resolution Lunar Terrain Reconstruction, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6977, https://doi.org/10.5194/egusphere-egu26-6977, 2026.