MITM5 | Artificial Intelligence and Machine Learning in Planetary Science

Co-organized by OPS/SB/EXOA
Conveners: Valerio Carruba, Evgeny Smirnov | Co-conveners: Rita C. Domingos, Gabriel Caritá, Abreuçon Alves, Mariela Huaman, Bryce Bolin, Sara Di Ruzza, Mario D'Amore, Othon Winter, Safwan Aljbaae, Rafael Sfair, Wesley Fraser
Orals THU-OB2 | Thu, 11 Sep, 09:30–10:30 (EEST) | Room Neptune (rooms 22+23)
Orals THU-OB3 | Thu, 11 Sep, 11:00–12:30 (EEST) | Room Neptune (rooms 22+23)
Posters THU-POS | Attendance Thu, 11 Sep, 18:00–19:30 (EEST) | Display Thu, 11 Sep, 08:30–19:30 | Finlandia Hall foyer, F80–93
Artificial intelligence (AI) refers to the development of computer software capable of performing tasks that would typically require human intelligence. Machine learning (ML) is a branch of computer science that explores algorithms that can learn from data. ML is primarily divided into supervised and unsupervised learning. In the former, the algorithm is presented with labeled examples, and a training routine is executed to learn a general rule that maps inputs to outputs. In the latter, no labels are provided to the learning algorithm, which must autonomously identify latent and representative structures in the data. Deep learning is a branch of machine learning based on multiple layers of artificial neural networks, which are computing systems inspired by the biological neural networks found in animal brains. This session aims to provide a forum for discussing recent advancements in the applications of AI and ML to planetary science.
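The distinction between the two learning paradigms can be sketched in a few lines of Python (a toy one-dimensional illustration added for this programme; all names and numbers are invented, not drawn from any session contribution):

```python
# Toy 1-D illustration: the same kind of data treated with supervised
# learning (labels available) and unsupervised learning (no labels).

def nearest_centroid_predict(x, centroids):
    """Supervised flavour: assign x to the label of the closest centroid,
    where the centroids were computed from labeled training examples."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

def kmeans_1d(data, k=2, iters=20):
    """Unsupervised flavour: discover k groups without any labels (k-means)."""
    cents = sorted(data)[:k]  # naive initialisation
    for _ in range(iters):
        groups = {i: [] for i in range(k)}
        for x in data:
            groups[min(range(k), key=lambda j: abs(x - cents[j]))].append(x)
        cents = [sum(g) / len(g) if g else cents[i] for i, g in groups.items()]
    return sorted(cents)

# Supervised: a rule learned from labeled examples maps inputs to outputs.
label = nearest_centroid_predict(2.3, {"faint": 1.0, "bright": 10.0})
# Unsupervised: latent structure (two groups) is identified autonomously.
cents = kmeans_1d([1.0, 1.2, 0.8, 9.9, 10.1, 10.3])
```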


Orals THU-OB2: Thu, 11 Sep, 09:30–10:30 | Room Neptune (rooms 22+23)

Chairpersons: Valerio Carruba, Wesley Fraser
AI and ML working groups
09:30–09:42
|
EPSC-DPS2025-1815
|
On-site presentation
Stavro Lambrov Ivanovski, Nimisha Verma, Yray Hatipoğlu, Marianna Angrisani, Arif Solmaz, Evgeny Smirnov, Valerio Carruba, Devanshi Kacholia, Dagmara Oszkiewicz, and Mario D'Amore and the Europlanet Machine Learning Working Group

The rapid advances in machine learning (ML) present unprecedented opportunities for planetary science. We have established a dedicated working group (WG) focused on the application of ML in this field to harness these technological advancements, address complex scientific questions, and enhance our understanding of planetary systems.

The Europlanet Machine Learning Working Group held its kick-off meeting during EPSC 2024 in Berlin, in September 2024. The discussion focused on establishing the group as a forum for exchanging ideas and opportunities with people within and outside Europlanet’s membership during its first year. The main goals established were to create a knowledge-sharing platform where members can present their research and invite collaboration, to form sub-groups within the WG that expand on current research foci, and to foster new collaborative research opportunities, within or outside Europlanet, supported by new funding.

As of May 2025, the Europlanet Machine Learning Working Group has 30 members, and has so far attracted both senior and early-career researchers.

The WG will build upon the achievements of the Europlanet RI project, which has addressed a broad range of ML applications across planetary research. The new group will delve deeper into specialized areas and foster collaboration and knowledge exchange. This targeted approach will enable the development of tailored ML solutions, drive innovation, and accelerate scientific discoveries.

Bridging the gap between ML and planetary science, the WG will position academic institutions and industry stakeholders at the forefront of cutting-edge research. The WG will:

  • develop ML methods and tools for planetary surface and subsurface mapping, mineralogy, geomorphology, and geology;
  • apply ML techniques to planetary atmospheres, climates, and weather systems;
  • study the formation and evolution of planetary systems, exoplanets, and astrobiology;
  • create ML frameworks and platforms for data integration, fusion, visualization, and dissemination;
  • utilize Large Language Models (e.g., ChatGPT) as tools for ML in planetary science.

Figure 1. Europlanet Machine Learning Working Group web page (https://www.europlanet.org/services/europlanet-machine-learning-working-group/)

The Machine Learning WG has held regular monthly meetings on the third Wednesday of each month since January 2025, at which WG members have the opportunity to present their current or published work, followed by a Q&A session. The scheduled meetings and speakers are listed on the Europlanet website (Fig. 1). We will highlight some of the talks given by our members. Membership in the WG requires being a member of Europlanet. Benefits include participating in high-impact, state-of-the-art ML science, sharing ML tools and facilities on the Europlanet ML Portal, developing collaborations, participating in future Europlanet EC-funded ML proposals, and accessing Europlanet ML training, career development, and professional services.

How to cite: Ivanovski, S. L., Verma, N., Hatipoğlu, Y., Angrisani, M., Solmaz, A., Smirnov, E., Carruba, V., Kacholia, D., Oszkiewicz, D., and D'Amore, M. and the Europlanet Machine Learning Working Group:  Europlanet Machine Learning Working Group: a year of progress, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-1815, https://doi.org/10.5194/epsc-dps2025-1815, 2025.

AI and ML for small Solar System bodies I
09:42–09:54
|
EPSC-DPS2025-1465
|
ECP
|
On-site presentation
Elisa Maria Alessi and Federico Cortese

The aim of this work is to present possible unsupervised machine learning methods, borrowed from the finance world, that can be applied to classify dynamical transitions appearing in the co-orbital motion.

Co-orbital dynamics appears in the three-body problem and is widely studied both to analyze asteroidal behaviors and to design trajectories for interplanetary missions. It can involve complex transitions that are challenging to analyze manually, due to the large datasets typical of planetary science, but also due to the role that different perturbations can play in the orbital evolution of real asteroids.

The method presented is the statistical Sparse Jump Model (SJM) [1], together with two novel improvements. The different formulations will be applied to medium-term time series of real asteroids and to long-term time series of simulated lunar ejecta derived from [2]. The main orbital elements considered are the semi-major axis a, the resonant angle θ, and the argument of pericenter ω. The focus will be to distinguish horseshoe (HS), quasi-satellite (QS), tadpole (TP), and compound (CP) behaviors in an automatic way and to provide meaningful metrics of the time spent in a given regime.

The results on the behavior of lunar ejecta will be important in the context of the possible origin of notable Earth companions, such as Kamo'oalewa, and of minimoons.

More details on the formulations implemented are given below.

Sparse Jump Model

The SJM takes as input a T × P data matrix, where each row contains given features of the system (functions of a, θ, and ω) at a given time t.
The model produces three main outputs:

  • a sequence of latent states s = (s_1, ..., s_T), where each s_t represents a co-orbital regime (e.g., QS or HS);
  • a set of centroids μ = (μ_1, ..., μ_K), with μ_k representing the most representative values for state k;
  • a feature importance vector w = (w_1, ..., w_P), where each w_p indicates the contribution of feature p to the system’s dynamics.

This is done by optimizing an objective function, depending on the input data, with respect to the centroids and latent states.
For details on model formulation and estimation, see [1,3].
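A minimal sketch of the core iteration, assuming the coordinate-descent scheme of [1]: alternate (i) a dynamic program for the latent state path under a jump penalty λ and (ii) a k-means-style centroid update. The sparse feature-weighting step is omitted for brevity, so this is an illustration of the idea, not the authors' implementation:

```python
# Hedged sketch of a (non-sparse) jump model fit: states change only when
# the fit improvement outweighs the jump penalty lambda_.

def fit_jump_model(X, K=2, lambda_=1.0, iters=10):
    def d2(x, m):  # squared Euclidean distance between feature vectors
        return sum((a - b) ** 2 for a, b in zip(x, m))

    mu = [list(X[k]) for k in range(K)]   # naive centroid initialisation
    T = len(X)
    s = [0] * T
    for _ in range(iters):
        # State step: optimal state sequence via dynamic programming
        cost = [[0.0] * K for _ in range(T)]
        back = [[0] * K for _ in range(T)]
        for k in range(K):
            cost[0][k] = d2(X[0], mu[k])
        for t in range(1, T):
            for k in range(K):
                prev = [cost[t - 1][j] + (lambda_ if j != k else 0.0)
                        for j in range(K)]
                back[t][k] = min(range(K), key=lambda j: prev[j])
                cost[t][k] = d2(X[t], mu[k]) + prev[back[t][k]]
        s[T - 1] = min(range(K), key=lambda k: cost[T - 1][k])
        for t in range(T - 1, 0, -1):
            s[t - 1] = back[t][s[t]]
        # Centroid step: each centroid is the mean of its assigned points
        for k in range(K):
            pts = [X[t] for t in range(T) if s[t] == k]
            if pts:
                mu[k] = [sum(c) / len(pts) for c in zip(*pts)]
    return s, mu

# Two well-separated regimes are recovered as two contiguous state blocks.
s, mu = fit_jump_model([[0.0], [0.1], [-0.1], [5.0], [5.1], [4.9]],
                       K=2, lambda_=0.5)
```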


Figure 1, adapted from [4], illustrates the effectiveness of the SJM in identifying the orbital regime of a time series corresponding to lunar ejecta exhibiting co-orbital behavior outside Earth's Hill sphere [2]. As can be seen, the case poses significant classification challenges, but the SJM delivers robust qualitative results.

Fuzzy Jump Model

A key limitation of the SJM is its reliance on hard clustering. To address this, we propose a novel extension - the Fuzzy Jump Model (fuzzy JM) - which introduces soft clustering capabilities into the SJM framework.

Our method incorporates a tunable fuzziness parameter that allows smooth transitions between hard and soft clustering. Inspired by the fuzzy c-means algorithm [5], we generalize the SJM to estimate time-varying state probabilities through numerical constrained optimization. To this end, the objective function is modified to take into account these probabilities.
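As a hedged illustration of the soft-assignment idea, the classic fuzzy c-means membership update of [5] (an assumption, not necessarily the exact fuzzy JM formulation) can be written compactly; the exponent m controls the softness, with m → 1 approaching hard clustering:

```python
# Fuzzy c-means style memberships for a 1-D point relative to K centroids.

def fuzzy_memberships(x, centroids, m=2.0):
    """Return the membership probability of point x in each centroid."""
    d = [max(abs(x - c), 1e-12) for c in centroids]  # guard exact hits
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((d[k] / d[j]) ** p for j in range(len(d)))
            for k in range(len(d))]

# Halfway between two regimes the memberships split evenly ...
u = fuzzy_memberships(1.0, [0.0, 2.0])
# ... while near one centroid the probability mass concentrates there.
v = fuzzy_memberships(0.1, [0.0, 2.0])
```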

Figure 2 illustrates the time-varying probabilities of the QS regime for the asteroid 164207 Cardea, which transitions between HS and QS phases.
During transition phases, the probability of switching from HS to QS evolves gradually, showing the ability of the model to anticipate transitions.

Robust Sparse Jump Model

A second limitation of the SJM is that feature relevance is assumed to be uniform across all states; that is, variable selection is performed independently of the state classification.
To overcome this, we propose merging the jump model with the Clustering Objects on Subsets of Attributes (COSA) framework of [6]. In this novel framework, referred to as the robust SJM, we estimate both the sequence of latent states and a state-specific feature weight matrix, where each entry quantifies the importance of a given feature within a given state.

These weights are found through a closed form formula. This formulation allows for feature selection within each state and guarantees convergence to a local optimum when the initial weights are uniform. 

Acknowledgement:  This work has been funded by the Italian Space Agency through the agreement n. 2024-6-HH.0, CUP n. F43C23000340001, entitled “Supporto scientifico alla missione LUMIO”.

References

[1] Nystrup, P., Lindstrom, E., Madsen, H. (2020). Expert Systems with Applications 150, 113307

[2] Jedicke, R.,  et al. (2025). Icarus 438, 116587

[3] Nystrup, P., Kolm, P.N., Lindstrom, E. (2021). Expert Systems with Applications 184, 115558

[4] Cortese, F.P., Di Ruzza, S., Alessi, E.M. (2025). Nonlinear Dynamics, doi: 10.1007/s11071-025-11171-7

[5] Bezdek, J.C. (1981). Pattern recognition with fuzzy objective function algorithms. Springer Science & Business Media

[6] Friedman, J.H., & Meulman, J.J. (2004). Journal of the Royal Statistical Society Series B: Statistical Methodology 66 (4), 815–849

How to cite: Alessi, E. M. and Cortese, F.: A class of statistical jump models for the classification of dynamical transitions in the co-orbital regime, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-1465, https://doi.org/10.5194/epsc-dps2025-1465, 2025.

09:54–10:06
|
EPSC-DPS2025-1073
|
ECP
|
On-site presentation
Preeti Cowan, Ian Bond, Wesley Fraser, Samantha Lawler, and Nicholas Rattenbury and the MOA and CLASSY Collaborations

As astronomical surveys evolve to capture ever-larger volumes of data, innovative computational tools are increasingly critical for extracting meaningful signals from petabyte-scale datasets. Deep learning – machine learning algorithms that involve artificial neural networks – offers one such tool. Here, we present convolutional neural network-based approaches to enhance the discovery and recovery of asteroids in survey data. Our research utilizes two very different datasets: two decades of archival crowded field data from the Microlensing Observations in Astrophysics (MOA) survey and the ongoing Classical and Large A Solar System (CLASSY) trans-Neptunian objects survey.

Though designed to detect microlensing events in the Galactic Bulge and Magellanic Clouds, the MOA survey has incidentally observed several thousand asteroids over two decades of high-cadence imaging. However, the extremely dense star fields pose a significant challenge to effectively identifying moving sources. To address this, we developed a novel approach that leverages the sky motion of asteroids in consecutive exposures to reveal their ‘tracklets’ – the linear motion paths that highlight an asteroid’s movement against the static stellar background (Figure 1). These tracklets formed the basis of our labelled datasets of known asteroids, which we used to train several custom-designed convolutional neural networks (CNNs). We then ensembled the predictions from the best-performing models to maximize accuracy and generalization, achieving a recall of 97.67%. In addition, we trained the YOLOv4 object detector to precisely localize asteroid tracklets, achieving a mean Average Precision (mAP) of 90.97%. We are now deploying these trained models across the full MOA data archive to identify both known and previously undetected asteroids – transforming the archival data into a powerful tool for asteroid discovery.
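The ensembling step can be sketched as follows (an illustrative reading, not the MOA pipeline itself): average the per-candidate probabilities of several trained models and apply one decision threshold, trading individual-model variance for overall recall.

```python
# Hypothetical probabilities from three trained CNNs for four candidate
# tracklets; the ensemble averages them and thresholds the result.

def ensemble_predict(prob_lists, threshold=0.5):
    """prob_lists: one list of per-candidate probabilities per model,
    all aligned to the same candidate tracklets."""
    n_models = len(prob_lists)
    averaged = [sum(ps) / n_models for ps in zip(*prob_lists)]
    return [p >= threshold for p in averaged], averaged

labels, avg = ensemble_predict([
    [0.9, 0.2, 0.6, 0.1],   # model A
    [0.8, 0.3, 0.7, 0.2],   # model B
    [0.7, 0.1, 0.8, 0.3],   # model C
])
```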

 

In parallel, we applied these deep learning techniques to the CLASSY survey, a Canada France Hawaii Telescope (CFHT) Large Program focused on finding distant TNOs. We labelled ~75,000 composite images from nightly MegaCam observations, creating a training dataset that spans a variety of asteroid populations, including near-Earth objects, main belt asteroids, centaurs, and both real and simulated fast-moving TNOs. Our custom CNNs successfully detected tracklets across these diverse sources, and we once again combined the models to enhance predictive performance and minimize false negatives, achieving a recall of 98.15%. The labelling process highlighted the exceptional depth and clarity of the CLASSY observations as well as the effectiveness of the tracklet approach for identifying a diverse range of solar system objects. We are now focusing our efforts on recovering centaurs – which are difficult to isolate because of the vast region they inhabit – from the observations.

While our work with CLASSY offers a framework for applying deep learning to future surveys like the Legacy Survey of Space and Time (LSST), the MOA archive uniquely demonstrates the untapped potential of archival microlensing datasets. Our results demonstrate the effectiveness of building targeted training datasets and applying model ensembling to maximize discovery. Together, these strategies offer a practical blueprint for integrating artificial intelligence into the data pipelines of future surveys, ensuring that the scientific potential of next-generation observatories is fully realized.

How to cite: Cowan, P., Bond, I., Fraser, W., Lawler, S., and Rattenbury, N. and the MOA and CLASSY Collaborations: Towards asteroid discovery with deep learning in large datasets, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-1073, https://doi.org/10.5194/epsc-dps2025-1073, 2025.

10:06–10:18
|
EPSC-DPS2025-1603
|
ECP
|
On-site presentation
Andrew Marshall-Lee, Apostolos Christou, Alexander Sivitilli, and Alice Humpage
 

Asteroid families are groups of asteroids formed from the fragments of a larger ‘parent’ body that has been disrupted. When viewed in the space of the proper orbital elements – semi-major axis (a), eccentricity (e), and inclination (i) – families appear as relatively compact clusters within a background of unrelated asteroids. Halos are dense shells of asteroids that surround these families and tend to have properties similar to the core family, yet are not considered members. Isolating these halo structures and predicting their family membership remains an ongoing challenge. We present the results of using artificial neural networks (ANNs) to address this problem.

In this work we focused on C-type families in the inner main belt, between 2.1 and 2.5 au. C-type families are notably dark, with reflectance below 5-10%, and are distinctive from the backgrounds in which they are situated. This is useful for verifying the predictions of the ANNs. We focused on the Erigone family because it has a well-documented halo (Carruba 2016), is separate from the other larger structures of the Vesta and Flora families, is comparatively young, with an estimated age of 200-250 Myr (Spoto 2015), and is numerous, with ~2000 bodies having measured albedos. It therefore served as the ideal testbed to expand upon for more varied C-type families.

The networks we used are multi-layer perceptrons (MLPs). The input neurons comprise the three proper orbital elements with the metric of Zappala (1990), their standard deviations, and the absolute differences from the parent body in albedo, H magnitude, and inverse diameter. The network was kept to 2 hidden layers to prevent overfitting. Asteroids associated with the 8 C-type families were labelled 1 at the single-neuron output layer, and background asteroids outside these families were labelled 0. The final architecture of the network was 9:27:3:1. The models were trained using values from the MP3C database hosted by the Observatoire de la Côte d’Azur (https://mp3c.oca.eu/). To isolate the family and halo complex, a volume of ±5 standard deviations around the family's centre in proper-element space was defined. Once the network was trained, the halo asteroids in this box were introduced as ‘unseen’ data to be evaluated by the network. Additionally, family members were re-introduced as unknown control values. We verified that the network was working as intended by confirming that it correctly categorized known family members.
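For illustration, a forward pass through the 9:27:3:1 architecture described above might look like the following sketch. The random placeholder weights, ReLU hidden activations, and sigmoid output are assumptions for the example, not the trained network:

```python
# Illustrative forward pass for a 9:27:3:1 multi-layer perceptron.
import math
import random

def mlp_forward(x, weights, biases):
    """x: 9 input features; weights/biases define layers of sizes 27, 3, 1.
    Hidden layers use ReLU; the single output neuron uses a sigmoid, so
    the output can be read as a family-membership probability."""
    a = x
    for li, (W, b) in enumerate(zip(weights, biases)):
        z = [sum(w * v for w, v in zip(row, a)) + bb for row, bb in zip(W, b)]
        if li < len(weights) - 1:
            a = [max(0.0, v) for v in z]                     # ReLU
        else:
            a = [1.0 / (1.0 + math.exp(-v)) for v in z]      # sigmoid
    return a[0]

random.seed(0)
sizes = [9, 27, 3, 1]
weights = [[[random.uniform(-0.5, 0.5) for _ in range(m)] for _ in range(n)]
           for m, n in zip(sizes, sizes[1:])]
biases = [[0.0] * n for n in sizes[1:]]
p = mlp_forward([0.1] * 9, weights, biases)  # a probability in (0, 1)
```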

Our network was able to recover 99.83% of the Erigone family with a predicted certainty of >0.9. 581 asteroids from the halo were predicted by the ANN to be family members, an increase of 32%. The ANN's predictions were pruned manually: any asteroids with an albedo greater than 0.1, or further than 3 standard deviations in (a, e, i) from the centre, were discarded. The experiment was repeated with 5 other C-type families, and large portions of additional asteroids were predicted to be family members: 84 Klio (+43%), 329 Svea (+42%), 623 Chimaera (+43%), 752 Sulamitis (+25%). These experiments reinforce the notion that asteroid family halos contain a significant number of family asteroids that have been missed for inclusion in the family. We find that ANNs are good complements to the surveys done previously with algorithms such as the Hierarchical Clustering Method (HCM). Additionally, we make the case that they will be useful tools for the expected massive influx of new asteroids that will come with the Vera Rubin Observatory's LSST.

 

 

References: 

V. Carruba, S. Aljbaae, O. C. Winter, On the Erigone family and the z2 secular resonance, Monthly Notices of the Royal Astronomical Society, Volume 455, Issue 3, 21 January 2016, Pages 2279–2288, https://doi.org/10.1093/mnras/stv2430 

Spoto, F., Milani, A., Knežević, Z., 2015. Asteroid family ages. Icarus 257, 275–289. URL: http://dx.doi.org/10.1016/j.icarus.2015.04.041, doi:10.1016/j.icarus.2015.04.041 

Zappalà, V., Cellino, A., Farinella, P., Knežević, Z. (1990). Asteroid families. I. Identification by hierarchical clustering and reliability assessment. The Astronomical Journal 100, 2030–2046. https://doi.org/10.1086/115658

 

How to cite: Marshall-Lee, A., Christou, A., Sivitilli, A., and Humpage, A.: Predicting the origins of C-type family halo asteroids using ANNs , EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-1603, https://doi.org/10.5194/epsc-dps2025-1603, 2025.

10:18–10:30
|
EPSC-DPS2025-706
|
On-site presentation
Benedikt Aigner, Fabian Dallinger, Thomas Andert, Benjamin Haser, Martin Pätzold, and Matthias Hahn

Abstract

The field of Space Situational Awareness (SSA) has become increasingly important in recent years due to the rapid rise in active satellites and the accumulation of space debris in Earth orbit. Accurate orbit determination (OD) and, more importantly, reliable estimates of uncertainty are essential for planning collision avoidance manoeuvres and preserving a safe orbital environment. Over time, machine learning (ML) has also seen increasing use in this area, as its algorithms hold the potential to improve classical OD methods by leveraging measurement data.

Scorsoglio et al. (2023) demonstrated that a specialized type of neural network, known as a Physics-Informed Extreme Learning Machine (PIELM), can perform rapid orbit determination without requiring an initial guess of the state vector. By incorporating the governing differential equations, PIELMs reduce the “black box” nature typically associated with standard neural networks. However, estimating realistic prediction uncertainties remains an open challenge for nonlinear systems, particularly in contexts where Bayesian approaches cannot be directly applied.

In this study, we investigate and compare uncertainty quantification methods in orbit determination by analysing the behaviour of the covariance matrix across different estimation frameworks. Specifically, we examine the classical covariance propagation using the state transition matrix as used in the weighted least squares (WLS) method, a Monte Carlo simulation-based approach employing a standard orbital propagator, and strategies to assess the uncertainty associated with OD results obtained via a PIELM. The comparative analysis aims to assess the fidelity and characteristics of uncertainty estimates produced by each method. All computations are carried out within the AI4POD (Artificial Intelligence for Precise Orbit Determination) framework.
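The two classical routes being compared can be sketched schematically. Here a toy linear state-transition matrix stands in for a real orbital propagator (an assumption for illustration); for linear dynamics the two covariance estimates agree up to sampling noise, and it is for nonlinear propagation that they diverge, which is the point of the comparison:

```python
# Linear covariance propagation P' = A P A^T versus a Monte Carlo
# sample covariance, for a toy 2-D (position, velocity) state.
import random

A = [[1.0, 1.0], [0.0, 1.0]]      # toy state-transition matrix
P = [[0.01, 0.0], [0.0, 0.04]]    # initial covariance (std 0.1 and 0.2)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

P_lin = matmul(matmul(A, P), transpose(A))   # state-transition-matrix route

random.seed(1)
N = 20000
samples = []
for _ in range(N):
    x = [random.gauss(0.0, 0.1), random.gauss(0.0, 0.2)]  # draw from P
    samples.append([A[0][0] * x[0] + A[0][1] * x[1],
                    A[1][0] * x[0] + A[1][1] * x[1]])

mean = [sum(s[i] for s in samples) / N for i in range(2)]
P_mc = [[sum((s[i] - mean[i]) * (s[j] - mean[j]) for s in samples) / (N - 1)
         for j in range(2)] for i in range(2)]
```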

Acknowledgements

The project Artificial Intelligence for Precise Orbit Determination (AI4POD) is funded by Deutsches Zentrum für Luft- und Raumfahrt, Bonn-Oberkassel, under grant 50LZ2308.

References

[1] Montenbruck, O., and Gill, E. Satellite Orbits. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. https://doi.org/10.1007/978-3-642-58351-3.

[2] Scorsoglio, A., Ghilardi, L., and Furfaro, R. A Physics-Informed Neural Network Approach to Orbit Determination. J Astronaut Sci 70, 25 (2023). https://doi.org/10.1007/s40295-023-00392-w

[3] Liu, X., Yao, W., Peng, W., and Zhou, W. "Bayesian Physics-Informed Extreme Learning Machine for Forward and Inverse PDE Problems with Noisy Data". Neurocomputing 549 (September 2023): 126425. https://doi.org/10.1016/j.neucom.2023.126425.

[4]  Aigner, B., Dallinger, F., Andert, T., and Pätzold, M.: Integrating Machine Learning algorithms into Orbit Determination: The AI4POD Framework, Europlanet Science Congress 2024, Berlin, Germany, 8–13 Sep 2024, EPSC2024-521, https://doi.org/10.5194/epsc2024-521, 2024.

[5]  Dallinger, F., Aigner, B., Andert, T., and Pätzold, M.: Physics Informed Neural Networks as addition to classical Precise Orbit Determination, Europlanet Science Congress 2024, Berlin, Germany, 8–13 Sep 2024, EPSC2024-514, https://doi.org/10.5194/epsc2024-514, 2024. 

How to cite: Aigner, B., Dallinger, F., Andert, T., Haser, B., Pätzold, M., and Hahn, M.: Uncertainty Estimation in Orbit Determination: A Comparison of Machine Learning, Monte Carlo and Least Squares Approaches, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-706, https://doi.org/10.5194/epsc-dps2025-706, 2025.

Orals THU-OB3: Thu, 11 Sep, 11:00–12:30 | Room Neptune (rooms 22+23)

Chairpersons: Bryce Bolin, Othon Winter
11:00–11:12
|
EPSC-DPS2025-64
|
Virtual presentation
Jorge Martinez Palomera, Brian Powell, Amy Tuson, and Christina Hedges

The Transiting Exoplanet Survey Satellite (TESS) is a NASA mission focused on exploring and finding exoplanets around nearby stars using the transit method. The TESS telescope covers a large field of view of 96 sq. deg in a single exposure. It has four cameras arranged vertically, pointing from the ecliptic plane toward the poles. During its first two years of observations, TESS saved full-frame images (FFIs) at a 30-minute cadence, observing each pointing for about 27 days and covering one celestial hemisphere per year.

Thanks to this configuration and observing schedule, TESS is able to observe asteroids with a high duty cycle (see Figure 1 for an example of observed asteroid projected tracks in a typical TESS field). Current techniques to search for asteroid signals in images rely on the shift-and-stack method, which tests all possible combinations of directions and speeds an object can move across the image to maximize the detection signal and find the asteroid's track. This method is computationally expensive, and only attainable when the parameter space (direction-velocity) is constrained, usually to the most common directions of motion (e.g. orbits parallel to the ecliptic plane) and common speeds (e.g. main belt asteroids). This introduces a bias against fast-moving asteroids and high-inclination orbits (projected tracks perpendicular to the ecliptic plane).
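The cost of the search comes from the fact that every (direction, speed) hypothesis requires its own shift of every frame before stacking, as this one-dimensional toy sketch illustrates (illustrative only, not the pipeline discussed here):

```python
# 1-D shift-and-stack: only the correct velocity hypothesis adds the
# source's flux coherently onto a single pixel.

def shift_and_stack(frames, velocity):
    """Shift frame t by -velocity*t so a source moving at `velocity`
    (pixels/frame) lands on the same pixel in every frame, then sum."""
    n = len(frames[0])
    stacked = [0.0] * n
    for t, frame in enumerate(frames):
        shift = round(velocity * t)
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                stacked[i] += frame[j]
    return stacked

# A faint source moving at 1 px/frame across four frames.
frames = [[1.0 if i == t else 0.0 for i in range(8)] for t in range(4)]
peak_right = max(shift_and_stack(frames, 1.0))  # 4.0: coherent build-up
peak_wrong = max(shift_and_stack(frames, 0.0))  # 1.0: signal stays spread
```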

To solve this, we implemented a rotationally invariant neural network (NN) model that performs semantic segmentation to find moving objects in TESS FFIs. The NN uses a W-Net architecture (two stacked 3D U-Nets) with skip connections and outputs a 3D segmentation mask with asteroid detections. Figure 2 shows details of the W-Net architecture. We constructed a custom training set using 64x64x64 cubes of pixel flux time series and truth masks with the tracks of known asteroids from the JPL Horizons ephemeris system. During training, these cubes are randomly rotated and flipped to enforce rotational invariance. Our NN model can find known and new asteroids with all kinds of track orientations, showing no bias against objects on high-inclination orbits, fast-moving asteroids, or tracks with a change in direction. Figure 3 shows that our NN model detects ~90% of known asteroids down to apparent visual magnitude 20 and has a detection limiting magnitude of ~21. This is on par with current implementations of the shift-and-stack method, but without the bias introduced by limiting the range of track directions and velocities.

This machine learning model presents an orthogonal method to search for peculiar solar system objects, such as Trans-Neptunian Objects or Near-Earth Objects, and provides a complementary approach to current methods. Additionally, it is directly applicable to other all-sky-survey-like observations, such as the Galactic Bulge Time Domain Survey of the upcoming Roman Space Telescope.

In this talk, we will introduce the NN model and the construction and details of the training set, and present results from predictions using years 1 and 2 of TESS data. Additionally, we will show preliminary light curves extracted from new asteroids detected by our model.

Figure 1: TESS Sector 6 FFIs with known observed asteroids brighter than V=22; tracks were obtained from the JPL Horizons system. TESS has 4 cameras with 4 CCDs each, stacked vertically with respect to the ecliptic plane (left) and the celestial pole (right).

Figure 2: W-Net architecture of the neural network model used to identify moving objects in TESS FFI cubes. The network has an Adaptive Normalization layer developed particularly for these data, takes flux cubes of size 64 as input, and outputs an asteroid prediction probability cube of the same shape.

Figure 3: Asteroid detection fraction as a function of object brightness for multiple predicted probability thresholds. Lower thresholds improve the detection fraction, reaching 90% at magnitude V=20 for a threshold value of 0.5.

How to cite: Martinez Palomera, J., Powell, B., Tuson, A., and Hedges, C.: AI-enabled Asteroid detection in TESS data , EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-64, https://doi.org/10.5194/epsc-dps2025-64, 2025.

11:12–11:24
|
EPSC-DPS2025-1392
|
On-site presentation
Kennedy Farrell and Chadwick Trujillo

Active asteroids—hybrid objects that exhibit characteristics of both asteroids and comets—provide unique insights into solar system evolution and the current distribution of volatiles. However, their apparent rarity (~60 known), coupled with the petabyte-sized haystacks of archival survey data in which they may be hidden, makes detections challenging. This work explores the application of machine learning to constrain populations of active small bodies by increasing the rate of data evaluation. Specifically, we employ convolutional neural networks (CNNs) for object detection and classification, tailored to mirror the workflow of Citizen Scientist volunteers who classify objects of interest from thumbnails of archival data.

Trained on a dataset of labeled active small bodies and a large control set of inactive objects, our CNN evaluates image cutouts centered on known small bodies and classifies them based on signs of activity, such as tails or comae. We explore training datasets from both the Canada-France-Hawaii Telescope (CFHT) MegaCam and Subaru Hyper Suprime-Cam (HSC) archives, highlighting the adaptability of the CNN to be retrained and applied to different survey datasets. Our work also evaluates the influence of activity rate among training data in comparison to the predicted activity rate among Main Belt asteroids (1:10,000) and includes a robust response to erroneous thumbnail images.
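One common way to handle such a skewed activity rate is inverse-frequency class weighting, so the rare "active" class is not ignored by the loss. This is an assumed formulation added for illustration, not necessarily the authors' scheme:

```python
# Inverse-frequency class weights for a heavily imbalanced training set.

def inverse_frequency_weights(counts):
    """counts: number of examples per class. Weights are normalised so
    that each class contributes equally to the loss in expectation."""
    total = sum(counts.values())
    n_classes = len(counts)
    return {c: total / (n_classes * n) for c, n in counts.items()}

# Hypothetical counts reflecting a ~1:10,000 active-to-inactive rate.
w = inverse_frequency_weights({"active": 10, "inactive": 100_000})
# The rare class receives a weight ~10,000x larger than the common one.
```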

We find that the CNN demonstrates high precision and recall across both CFHT and HSC archives and present our first results of the CNN’s high-confidence detections, whether previously known or not. This work illustrates the potential of machine learning techniques to accelerate discoveries of active small bodies and is intentionally designed to be used alongside proven Citizen Science applications. Combining AI with by-eye evaluation gives us a powerful and versatile tool in the doorway to next-generation surveys like LSST.

How to cite: Farrell, K. and Trujillo, C.: Needles in a Haystack: Harnessing Machine Learning and Citizen Science to Catch Small Body Activity in Action, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-1392, https://doi.org/10.5194/epsc-dps2025-1392, 2025.

11:24–11:39
|
EPSC-DPS2025-961
|
ECP
|
On-site presentation
Nitya Pandey

The trans-Neptunian region represents a critical window into the early stages of our solar system’s formation, offering a unique opportunity to study the remnants of the planetesimals that contributed to the creation of the planets. The DECam Ecliptic Exploration Project (DEEP), a multi-year survey utilizing the Dark Energy Camera (DECam) on the 4-meter Cerro Tololo Inter-American Observatory (CTIO) Blanco telescope, has been instrumental in characterizing the faint Trans-Neptunian Object (TNO) population. This project has determined the size and shape distribution of TNOs, studied their physical properties in relation to their dynamical class and size, and tracked objects over multiple years to gain insights into their orbits. Using a shift-and-stack moving-object detection algorithm, the DEEP survey successfully recovered over 110 new objects. While this method has been successful, it relies on computationally expensive velocity assumptions and traditional image-stacking techniques. In this work, we introduce an innovative AI-based moving-object detection method that offers a fresh perspective on TNO detection, providing a faster, more efficient, and more robust alternative to traditional methods.

We introduce a new approach for detecting moving objects in astronomical images, called You Only Stack Once (YOSO). This method simplifies traditional moving-object detection by eliminating the need to account for the velocity vector of each object. Instead of shifting individual images to align with a presumed motion before stacking, YOSO simply stacks a sequence of time-series images without applying directional shifts. As a result, moving sources appear as linear or slightly curved trails, depending on their apparent motion during the observation. These trails are then identified using a machine learning model trained to recognize their distinctive shape and intensity profiles. This approach allows for fast, reliable detection of a wide range of moving objects, from fast Near-Earth Objects (NEOs) to the slower, fainter bodies in the Kuiper Belt.

YOSO starts with a series of short-exposure images of the same sky region, typically acquired over the course of a night. After standard preprocessing (see Steps 2 and 3 in Figure 1), the images are stacked to boost the signal of any moving sources. In early versions of the pipeline, we used pixel-wise statistical measures such as mean, skewness, and kurtosis to combine the frames, but these proved limited in their ability to enhance faint sources. To improve sensitivity, we developed the Gaussian Motion Matched Filter (GMMF), a new statistic tailored to detect the footprint of a moving object on a single pixel. Unlike conventional Gaussian smoothing, GMMF applies a Gaussian-weighted convolution along the temporal axis of each pixel stack, matching the expected motion profile of moving sources, as shown in Section 2 of Figure 2. GMMF can reliably detect sources with signal-to-noise ratios as low as 0.5:1.0, all without the need for a brute-force search over velocity space. The details of this filter are shown in Figure 2.
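The idea of a temporal matched filter can be sketched roughly as follows (the exact GMMF formulation is not reproduced here; the kernel width and normalization are assumptions for illustration): convolve each pixel's time series with a Gaussian centered at a candidate crossing time and keep the maximum response.

```python
import math

# Hedged sketch in the spirit of a temporal matched filter: a Gaussian
# kernel along the time axis matches the transient brightening a moving
# source produces as it crosses a pixel. Parameters are illustrative.
def temporal_gaussian_filter(pixel_series, sigma=1.5):
    """Return the max Gaussian-weighted response over candidate crossing times."""
    n = len(pixel_series)
    best = 0.0
    for t0 in range(n):                    # candidate crossing time
        w = [math.exp(-0.5 * ((t - t0) / sigma) ** 2) for t in range(n)]
        norm = sum(w)
        response = sum(wi * v for wi, v in zip(w, pixel_series)) / norm
        best = max(best, response)
    return best

# A pixel crossed by a source around t=10 scores higher than empty sky.
crossed = [0.0] * 20
for t in (9, 10, 11):
    crossed[t] = 1.0
assert temporal_gaussian_filter(crossed) > temporal_gaussian_filter([0.0] * 20)
```

Because the filter looks only along the time axis of each pixel, no trial velocity grid is needed, which is what removes the brute-force velocity search.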

Once potential trails have been generated, we use a deep learning model to identify and classify them. Our detection network is based on YOLOv8L, a high-performing convolutional neural network (CNN) architecture that has been pretrained on a broad set of visual data. YOLOv8L is particularly well-suited to this task because it generalizes effectively even with limited domain-specific training data, learns quickly, and handles noisy or low-contrast features robustly. For our use case, we retrained the model on a synthetic dataset designed to mimic real astronomical trails, including low SNR signals, linear trajectories, and varying brightness levels. 

YOSO is a highly adaptable framework designed to optimize the detection of moving objects across a broad spectrum of telescopes, observational datasets, and populations, from Kuiper Belt Objects (KBOs) to fast-moving Near-Earth Objects (NEOs). Unlike shift-and-stack techniques that produce compact point sources, often vulnerable to false positives from random noise or disjointed detections due to tiling strategies, YOSO leverages the naturally occurring, spatially correlated trails left by moving sources. These extended structures are particularly well-suited for detection via machine learning models, which excel at identifying such coherent patterns.

A key advantage of YOSO is its ability to significantly suppress false positives. The deep learning model is trained to recognize elongated, linear features, enabling it to discriminate real object trails from stochastic noise or unrelated pixel artifacts. Whereas traditional surveys often require stringent signal-to-noise thresholds (e.g., above 5σ) to maintain reliability, YOSO is capable of operating at lower thresholds down to 4σ by combining statistical image stacking with deep learning-based confidence assessments. Once a candidate trail is identified, the model outputs a bounding box around the detection (as illustrated in Step 5 of Figure 1), which can then be used to extract the trail and estimate its apparent motion and direction. 

The primary goal of this method is to achieve near real-time detection of moving objects as observational data becomes available. This presentation will outline the core methodologies behind the YOSO framework and highlight key challenges encountered during its development, particularly emphasizing the role of the trail-based search in enabling fast and effective detection of trans-Neptunian objects. I will also discuss its potential application to LSST deep drilling fields and other wide-field surveys, where it may significantly improve the discovery rate of Solar System objects, especially those with sky motions that are not well-suited to traditional deep drilling strategies.

 

Figure 1: Illustration of the YOSO pipeline. The process has been applied to a time series of 103 Kuiper Belt Object search observations (120 s exposure per image).

Figure 2: Schematic illustrating the Gaussian Motion Matched Filter (GMMF) process, applied to a time series of 103 Kuiper Belt Object search observations (120 s exposure per image).

How to cite: Pandey, N.: You Only Stack Once (YOSO): Fast TNO Detection via Gaussian Motion Matched Filtering and Deep Learning, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-961, https://doi.org/10.5194/epsc-dps2025-961, 2025.

AI and ML in Planetary Science
11:39–11:51
|
EPSC-DPS2025-469
|
On-site presentation
Petr Pokorny, Erwan M. Mazarico, Mark S. Robinson, Prasun Mahanti, and Jean-Pierre Williams

Until recently, permanently shadowed regions (PSRs) on the Moon lacked high-resolution, high-signal-to-noise imaging. ShadowCam, the NASA-funded instrument onboard the Korea Aerospace Research Institute (KARI) Korea Pathfinder Lunar Orbiter (KPLO) satellite, provides images 200x more sensitive, at 1.7 m per pixel resolution.

ShadowCam’s growing and publicly available dataset contains millions of previously unknown impact craters that provide important insight into various physical processes such as impact gardening, volatile excavation, and mass wasting in lunar PSRs. In this presentation, we describe our crater detection techniques and the current state of our crater detection efforts on the ShadowCam image dataset.

With the advent of machine learning, we can now process high-resolution lunar images efficiently. Machine-learning crater detection has been successfully applied to numerous planetary bodies, e.g. the Moon [1,2,3], Mars [4,5], and Ceres [6,7]. In our work, we use the YOLOv8 (You-Only-Look-Once) object detection framework, designed to provide high speed and accuracy for the detection of various objects in images [8]. To minimize the complexity of our object detection model, we limit it to a single class, “crater”, with all other features in each image considered part of the background. The architecture of our neural network is based on the YOLOv8m model with 25.9 million parameters and a default detection resolution of 512x512 pixels. The detection model was trained using 5240 impact craters from various LROC-NAC images. The training dataset was enhanced with several image augmentations, such as rotation and contrast and brightness variations provided by the albumentations library, to increase the robustness of our crater detection algorithm (CDA).

The ShadowCam images used for crater detection were orthorectified and geolocated by the ShadowCam team. These images have various sizes, reaching up to 100,000 pixels in the x or y dimension. To efficiently accommodate images of different sizes, we slice each image into tiles with 50% overlap. To detect craters of various sizes, we use 6 different image slice sizes: 256x256, 512x512, 1024x1024, 2048x2048, 4096x4096, and 8192x8192 pixels, where all slices are rescaled to 512x512 pixels before our CDA is applied. To remove duplicate detections, we apply the non-maximum suppression (NMS) algorithm with the Intersection over Union (IoU) metric, with the IoU threshold set to 0.3. NMS ensures that only the detections with the highest confidence values are kept, and the IoU threshold allows us to keep nested impact craters. Ultimately, the CDA produces geolocated bounding boxes and confidence values for each detected impact crater (Figure 1).
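The NMS step can be sketched as a generic greedy implementation with the IoU threshold quoted above (an illustrative sketch, not the authors' exact code):

```python
def iou(a, b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.3):
    """Keep highest-confidence boxes; drop overlaps above the IoU threshold.
    A small threshold like 0.3 still keeps nested craters, whose IoU with
    the enclosing crater's box stays low."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (2, 2, 4, 4)]   # third is nested
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]: duplicate dropped, nested crater kept
```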

We deployed our ML-based CDA on 22,256 images from the PDS ShadowCam dataset, which corresponds to 2.2 TB of image data covering approximately 5.3 million km² of the lunar surface. The processing time for the entire dataset was approximately 3000 GPU hours on an Nvidia A100 GPU. In this dataset, we find 1,013,440,231 impact craters larger than 16 meters in diameter (8 pixels or larger). The average detection time per crater is approximately 0.3 microseconds, which is 6 orders of magnitude faster than human crater detection [9]. The images in the ShadowCam dataset overlap spatially, and therefore many craters are detected multiple times, at different epochs and under different illumination conditions.

We tested the performance of our ML-based CDA on a dataset of 50,000 craters selected from different ShadowCam images and vetted by four human researchers. We find that our CDA has a true positive detection rate of 98.2%, with 1.8% false positive detections, for craters with diameters between 16 meters and 4 km. Additionally, ~1% of impact craters are not detected by our CDA. Smaller craters are also detected by our CDA, but their detection confidence values are lower, and the true/false consensus varies significantly between researchers for these small craters. Larger craters (>1 km in diameter) are already contained in the Robbins 2019 global crater database [10] and are therefore not the target of our analysis.

Our detection method will be applied to all future ShadowCam images as well as higher-level data products such as controlled mosaics. We will also improve the ability of our detection algorithm to perform better for impact craters in low-SNR regions, degraded craters, and morphologically complex craters. We are also planning to conduct a large-scale impact crater detection vetting with the help of citizen scientists. Ultimately, we will train a lightweight version of our detection algorithm for real-time detection of impact craters on a wider range of devices (e.g., web browsers).

Acknowledgments: We thank the KPLO and ShadowCam operations and science teams for acquiring the ShadowCam dataset. ShadowCam PDS https://pds.shadowcam.asu.edu/ was used in this work. PP was supported by the NASA Planetary Science Division Research Program through the GSFC Planetary Geodesy ISFM and the award number 80GSFC24M0006.

Figure 1. A ShadowCam full-resolution segment of the Faustini crater located at (x = 82.0 ± 1.4 km, y = 5.0 ± 0.7 km). This image contains 1164 impact craters with diameters from 16 m to 1.2 km. While our crater detection algorithm provides bounding boxes for each crater, we display each detection as an ellipse with the rotation angle set to zero for better clarity. We find 98.2% of our detections are true positives, while 1.8% are false detections. Additionally, ~1% of craters remained undetected. Note that craters with bounding boxes extending outside this image are not displayed but are still detected by our CDA.

References: [1] Benedix G. K. et al. (2020) Earth and Space Science, 7, 3, e01005, [2] Fairweather J. H. et al. (2023) Earth and Space Science, 10, 7, e2023EA002865, [3] La Grassa R. et al. (2023) Remote Sensing, 15, 5, 1171, [4] Lagain et al. (2021) Nature Communications, 12, 6352, [5] Lagain et al. (2022) Nature Communications, 13, 3782, [6] Latorre F. et al. (2023) Icarus, 394, 115434, [7] Herrera C. et al. (2024) Astronomy & Astrophysics, 688, A176, [8] Jocher G. et al. (2023) https://github.com/ultralytics/ultralytics, [9] Robbins S. J. et al. (2014) Icarus, 234, 109–131, [10] Robbins S. J. (2019) Journal of Geophysical Research: Planets, 124, 4, 871–892

How to cite: Pokorny, P., Mazarico, E. M., Robinson, M. S., Mahanti, P., and Williams, J.-P.: Machine Learning Driven Detection of 1 Billion+ Lunar Impact Craters in Permanently Shadowed Regions Using ShadowCam Data, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-469, https://doi.org/10.5194/epsc-dps2025-469, 2025.

11:51–12:03
|
EPSC-DPS2025-500
|
ECP
|
On-site presentation
Giulio Quaglia, Valery Lainey, Guillaume Tochon, and Du Toit Strauss

Recent advancements in artificial intelligence (AI) have opened new horizons for space exploration, especially in astrometry. In this work, we developed a deep learning-based algorithm to detect and classify bright sources—namely stars, satellites, and cosmic rays—in Cassini Imaging Science Subsystem (ISS) images of the Saturn system. This novel approach leverages the precision of deep neural networks to process over 13,000 images (1024×1024 pixels, with exposure times <1 second), forming a robust dataset for training.

To generate labeled data without manual intervention, we designed a custom source detection algorithm using classical image processing techniques, such as mathematical morphology. Detected sources were then matched with star catalogs and ephemerides of Saturn’s moons to label stars and satellites; unmatched sources were classified as cosmic rays. The resulting dataset was used to train a YOLO (You Only Look Once) model, a state-of-the-art framework for the detection and classification of objects in images and videos that gained popularity for its speed and accuracy.
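The catalog-based labeling logic can be sketched as a minimal nearest-neighbor matcher (the pixel tolerance and matching criteria here are assumptions for illustration, not the authors' actual values):

```python
import math

# Hedged sketch of automatic labeling: each detection takes the label of the
# nearest catalog entry within a pixel tolerance; anything unmatched falls
# through to "cosmic_ray", as described in the abstract.
def label_detections(detections, catalog, tol=2.0):
    """detections: [(x, y)] pixel positions; catalog: [(x, y, label)] from
    star catalogs and moon ephemerides projected into the image frame."""
    labels = []
    for dx, dy in detections:
        best, best_dist = "cosmic_ray", tol
        for cx, cy, name in catalog:
            d = math.hypot(dx - cx, dy - cy)
            if d <= best_dist:
                best, best_dist = name, d
        labels.append(best)
    return labels

catalog = [(100.0, 200.0, "star"), (512.3, 511.8, "satellite")]
detections = [(100.4, 199.7), (512.0, 512.0), (50.0, 50.0)]
print(label_detections(detections, catalog))
# → ['star', 'satellite', 'cosmic_ray']
```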

The network achieved strong classification results: cosmic rays were identified with 90% average precision and no false positives. Satellites were accurately classified 83% of the time, while stars proved more challenging due to their variability, achieving a 54% classification rate with 43% being misclassified as cosmic rays. 

Beyond detection, we used the classified data to study cosmic ray behavior in Saturn’s outer magnetosphere (15–100 Rs). Temporal variations were correlated with neutron monitor data from Earth, offering a broader view of cosmic ray activity in the solar system. Furthermore, analysis of the energy and directional characteristics of these particles demonstrates that ISS NAC images can be effectively repurposed for particle science. This AI-driven framework provides a new tool for exploring the Saturn system and could aid in the discovery of previously undetected moons or energetic events.

How to cite: Quaglia, G., Lainey, V., Tochon, G., and Strauss, D. T.: A deep learning based method for the detection and classification of bright sources on Cassini ISS images, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-500, https://doi.org/10.5194/epsc-dps2025-500, 2025.

12:03–12:15
|
EPSC-DPS2025-387
|
On-site presentation
Gerhard Paar, Christoph Traxler, Alexander Jones, Thomas Ortner, Jim Bell, Sanjeev Gupta, and Robert Barnes

Introduction

Planetary rover camera stereoscopy has evolved over decades to serve engineering and science tasks ranging from on-board navigation through mission and instrument planning to science operations and exploitation based on 3D data [1].

Mastcam-Z [2] is a zoomable multispectral stereoscopic camera on the Mars 2020 Perseverance rover. Its stereo range useful for geologic spatial investigations spans from very close range up to a few tens of meters, with a quadratically increasing range error [3]. While small deviations of the geometric camera calibration from “truth” result in a predictable and correctable systematic error, the stereo processing itself introduces 3D range noise that grows quadratically in the same way.

Recently available AI methods offer promising solutions for determining monoscopic depth (i.e. depth from single images at pixel resolution) [4] for a large range of applications. Such range maps lack true scale, as no artificial objects with known scale can be expected on planetary surfaces. Yet pixel-resolution occlusion determination and micro-shape are well represented, as shown in Figure 1. This inspired the combination of monoscopic AI-based range determination with the true scale available from calibrated stereoscopy.

AI-Based Enhancement of Stereoscopic Range Products

Monocular depth models such as Depth Anything V2 (DA-V2, a deep learning-based system for estimating depth from a single camera image [4]) have emerged as powerful tools for image simulation, segmentation, and scene understanding, but they do not provide absolute scale. Relative depth estimates produced by such monocular depth estimation must therefore be turned into physically meaningful metric depth values. Calibrated stereo camera configurations such as Mastcam-Z provide true-scale depth maps. Fusing the two works by establishing local correspondences between the depth maps and fitting range transformation functions that convert pixel values from the monocular (unscaled) domain into the stereo (scaled for true distances) domain. These transforms are calculated and applied locally, either per pixel or per block of pixels, to account for spatial variations in the relationship between the two depth sources. Edge-aware processing uses occlusions (such as rocks or boulders) to delimit the search areas for which transformations are calculated, so that depth discontinuities are handled precisely. The result is a scaled depth map that combines the high-frequency detail and spatial completeness of DA-V2 with the geometric accuracy of stereo photogrammetry, enabling detailed yet quantitatively accurate depth maps suitable for scientific analysis and engineering applications, such as planetary rover navigation and terrain modeling.

The basic operation of the algorithm is as follows:

  • Detect occlusion edges in the mono depth map to form a binary "edge map"
  • Iterate over the mono depth map in blocks of configurable size
  • For each mono depth map pixel, create a circular search area centered on that pixel, taking occlusion edges into account
  • Calculate correspondences between mono and stereo map depth values within the search area (Figure 2)
  • Fit a function to the correspondence data using quadratic, polynomial, exponential, or linear approximation
  • Transform the mono map values in the current pixel block into the final product using the fitted function.
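For the simplest of the listed fit options, a linear transform, the fit-and-transform steps can be sketched as follows (toy correspondences, not Mastcam-Z data; a closed-form least-squares fit stands in for the pipeline's two-step procedure):

```python
# Minimal sketch of the per-block rescaling step under the simplest choice
# of transform, a linear fit (the pipeline also supports quadratic,
# polynomial, and exponential fits). mono/stereo hold corresponding depth
# samples collected inside one search area.
def fit_linear(mono, stereo):
    """Least-squares a, b such that stereo ≈ a * mono + b."""
    n = len(mono)
    mx = sum(mono) / n
    my = sum(stereo) / n
    sxx = sum((x - mx) ** 2 for x in mono)
    sxy = sum((x - mx) * (y - my) for x, y in zip(mono, stereo))
    a = sxy / sxx
    return a, my - a * mx

def rescale_block(mono_block, a, b):
    """Map a block of unscaled mono depths into the metric stereo domain."""
    return [a * v + b for v in mono_block]

# Toy correspondences: stereo depth = 2 * mono + 1 (in meters).
mono = [0.1, 0.2, 0.4, 0.8]
stereo = [1.2, 1.4, 1.8, 2.6]
a, b = fit_linear(mono, stereo)
print([round(v, 6) for v in rescale_block([0.3, 0.5], a, b)])  # → [1.6, 2.0]
```

Repeating the fit per block, with per-pixel interpolation between neighboring transforms, accounts for the spatially varying relationship between the two depth sources described above.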

Several enhancements are implemented – most notably a two-step function fitting procedure and per-pixel interpolation of transformations based on spatial distance – as well as more fine-grained options to control the behavior of the algorithm. A prototype version of this scaled DA-V2 algorithm has been integrated into the nominal Mastcam-Z 3D PRoViP vision processing [6].

 Qualitative Results

A series of preliminary tests indicates highly promising performance, leading to a substantial range extension of the nominal fixed-baseline stereo capabilities of stereoscopic imaging instruments, in particular for the planetary science case. From examples (see Figure 3) it is evident that a range boost by a factor of at least 2 to 5 can be expected using the described approach.

Applying a typical use case to both the original (conventional) stereo-derived 3D result and the AI-enhanced version (compared in Figure 4) yields the following preliminary assessment:

  • Prior to AI processing, the data from this area shows several “spikes” which preclude reliable structural measurements (e.g. strike/dip of veins in the foreground block) from being acquired. In the case shown (at 34 mm focal length), this occurs at distances of just ~ 6 – 8 m from Mastcam-Z.
  • Following AI enhancement, these “spikes” are removed, allowing Ca-sulfate vein orientations to be measured more reliably.

Towards Validation of AI-Based Stereo Enhancement

An easily comprehensible way to perform a qualitative comparison is to toggle between the surfaces being compared. This allows structural deviations to be studied from different viewpoints. Switching to wireframe mode can further enhance the perception of topographic differences.

A series of quantitative validation approaches is under development. These include:

  • 3D-comparison between AI-enhanced surface reconstructions using stereo pairs viewed from medium distance, and “conventional” stereo reconstructions of the same scene viewed from close range (including co-registration to minimize systematic errors from localization and camera calibration) – see Figure 5, top and middle
  • Color-coding the reference surface according to its distance from the other surface, measured between surface points along the up-vector or the surface normals (Figure 5, bottom)
  • 3D-comparison of AI-enhanced surface reconstructions from images taken with the same imaging geometry under different illumination conditions
  • Independent geologic analysis on AI-enhanced and conventional Digital Outcrop Models (DOMs) and comparison of results, e.g. of dip-and-strike measurements
  • Analysis of shadow shapes with obtained object outlines.

Outlook

A statistically significant validation of the approach is presently in development and planned to be finalized before the joint EPSC/DPS Conference 2025.

A planned improvement of the visual comparison method between surfaces is to show color-coded deviation vectors between vertices. This works best in wireframe mode or with semi-transparent surfaces. Furthermore, it should be possible to query numerical deviation values by clicking on a surface point.

Acknowledgement

This work was funded by the ASAP Project AI-Mars-3D (FFG-911920). We thank Komyo Furuya of JR for operational end-to-end implementation and documentation of the re-scaling algorithm.

References

How to cite: Paar, G., Traxler, C., Jones, A., Ortner, T., Bell, J., Gupta, S., and Barnes, R.: AI-Based Extension of Rover Camera Stereo Range – Starting Validation on Mars 2020 Mastcam-Z Geologic Use Cases, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-387, https://doi.org/10.5194/epsc-dps2025-387, 2025.

12:15–12:30
|
EPSC-DPS2025-294
|
ECP
|
On-site presentation
Leo Martinez, François Andrieu, Frédéric Schmidt, and Mark S. Bentley

 

Introduction 

Impact craters are essential markers for reconstructing the geological history of planetary surfaces [1]. On Mars, where no absolute radiometric dating has yet been conducted in-situ, the density of craters remains the main chronometer used for dating surface units [2, 3]. However, this method critically depends on the correct identification of primary craters, as secondary craters (formed by ejecta from a primary impact) and ghost craters (highly degraded or buried) must be excluded to avoid significant overestimations of surface ages [4]. As the identification of crater morphological features is still a long, repetitive, and subjective task when performed manually, the application of modern computer vision techniques has become increasingly relevant. While automated crater detection has seen substantial progress in recent years thanks to deep learning and computer vision techniques [5, 6, 7], the classification of craters based on their morphology remains largely unexplored. Yet such classification is essential to ensure both the validity of crater inventories and the robustness of derived age estimates.

 

Dataset and Preprocessing 

To train our classifier, we relied on the comprehensive work of Lagain et al. (2021) [4], which provides a manually annotated catalogue of more than 376,000 craters larger than 1 km in diameter, sorted into four morphological classes: Regular, Secondary, Ghost, and Layered. Image patches centered on each crater are extracted from the global CTX mosaic [8], after reprojection into local stereographic coordinates to preserve the circular geometry of craters at high latitudes. To ensure robustness, we refine the crater locations and sizes using a circle detection algorithm based on the Hough transform [9]. This preprocessing step significantly improves the alignment between craters and image content, a critical requirement for effective supervised learning. To train our model, we used 72,000 classified craters, divided into training (28,000 craters), validation (6,000 craters), and test (45,000 craters) sets.
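The split into disjoint training, validation, and test sets can be sketched generically (fractions and seed are illustrative placeholders, not the paper's counts; the paper's test set was additionally chosen to be geographically independent):

```python
import random

# Illustrative train/validation/test split: shuffle once with a fixed seed
# for reproducibility, then slice into disjoint subsets.
def split_dataset(items, train_frac=0.7, val_frac=0.15, seed=0):
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

crater_ids = list(range(1000))
train, val, test = split_dataset(crater_ids)
# 700 / 150 / 150 disjoint subsets.
```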

 

Methodology 

We trained a convolutional neural network classifier based on the YOLOv11 architecture, using a balanced and augmented subset of the crater database. Each image patch is resized and normalized, and we apply standard data augmentation strategies including rotations, flips, and artificial masking to simulate realistic artefacts in CTX images. The model output is a classification among the four crater classes described previously. Training was conducted over 40 epochs on a high-performance multi-GPU server using a cross-entropy loss function and a cosine-decayed learning rate schedule. Figure 1 shows the improvement of accuracy over the learning phase on the validation dataset.
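A cosine-decayed learning-rate schedule of the kind mentioned can be sketched as follows (the initial and final rates are illustrative assumptions, not the paper's values):

```python
import math

# Cosine decay: the learning rate follows a half cosine from lr_max at
# epoch 0 down to lr_min at the final epoch.
def cosine_lr(epoch, total_epochs=40, lr_max=1e-3, lr_min=1e-5):
    t = epoch / total_epochs
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

print(cosine_lr(0))    # → 0.001 (start of training)
print(cosine_lr(40))   # → 1e-05 (end of training)
```

The schedule decays slowly at first, fastest mid-training, and flattens out near the end, which tends to stabilize the final epochs.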

Figure 1: Validation accuracy with respect to learning epochs

 

Results 

The final model achieves a classification accuracy of over 80% on a geographically diverse and independent test subdataset containing over 45,000 craters. Figure 2 shows the confusion matrix, which gives a good insight into how the classification model performed. Performance remains consistent across latitudes. Figure 3 shows the classification of 12 example craters, demonstrating excellent results, including robustness to illumination conditions and image quality (corrupted data).

We also demonstrate the practical use of our classification model in the context of surface dating. By comparing cumulative crater size-frequency distributions (CSFD) before and after removing ghost and secondary craters, we show that automated filtering improves the coherence of the inferred ages with those expected from established crater chronologies.
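The quantity being compared, a cumulative crater size-frequency distribution, can be sketched minimally as follows (the counting area is an illustrative value):

```python
# Cumulative crater size-frequency distribution (CSFD): for each diameter D,
# count craters with diameter >= D, normalized by the counting area.
def cumulative_csfd(diameters_km, area_km2):
    """Return (D, N(>=D)/area) pairs sorted by decreasing diameter."""
    ds = sorted(diameters_km, reverse=True)
    return [(d, (i + 1) / area_km2) for i, d in enumerate(ds)]

diameters = [1.2, 3.5, 1.0, 2.2, 5.1]
for d, density in cumulative_csfd(diameters, area_km2=1000.0):
    print(f"N(>={d} km) = {density} per km^2")
```

Filtering out secondary and ghost craters before this computation lowers the cumulative densities, and with them the model ages read off a crater chronology.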

Figure 2: Confusion matrix computed on the test subdataset. These results show, for instance, that 80% of true Ghost craters (2703 craters) were correctly classified, and 2% of them (84 instances) were misclassified as Layered. Overall, the performance is excellent. Regular craters appear slightly more difficult to classify, most probably due to human misclassification.

 

 

Figure 3: Examples of 12 craters in a test area between -100°E and -92°E longitude and 0° to 8°S latitude, which were, from top to bottom, classified as Ghost, Layered, and Regular.

 

Discussion and Conclusion

We present a novel, scalable, and accurate pipeline for automatic crater classification, which complements existing detection models and provides a new tool for planetary surface dating. 

This study represents the first fully automated morphological classification of Martian impact craters using deep learning. Our results demonstrate the potential of AI-based approaches to improve crater-based chronostratigraphy, especially when applied systematically to global datasets.

As future work, we plan to extend the model to the Moon and Mercury using transfer learning, and to incorporate additional crater classes or features (e.g., central peaks, double-layer ejecta). Finally, we plan to refine existing Martian chronologies using the filtered crater populations.

 

References

[1] W. K. Hartmann, G. Neukum, Cratering chronology and the evolution of Mars, Space Science Reviews 96 (2001) 165–194.

[2] G. Neukum, B. Ivanov, W. Hartmann, Cratering records in the inner solar system in relation to the lunar reference system (2001).

[3] B. A. Ivanov, Mars/moon cratering rate ratio estimates, Chronology and Evolution of Mars 87, 2001.

[4] A. Lagain, S. Bouley, et al., Mars Crater Database: a participative project for the classification of the morphological characteristics of large Martian craters, Geological Society of America, 2021.

[5] G. K. Benedix, A. Lagain, K. Chai, S. Meka, S. Anderson, C. Norman, P. A. Bland, J. Paxman, M. C. Towner, T. Tan, Deriving surface ages on Mars using automated crater counting, 2020.

[6] R. La Grassa, G. Cremonese, I. Gallo, C. Re, E. Martellato, YOLOLens: a deep learning model based on super-resolution to enhance the crater detection of the planetary surfaces, 2023.

[7] L. Martinez, F. Andrieu, F. Schmidt, H. Talbot, M. S. Bentley, Robust automatic crater detection at all latitudes on Mars with deep-learning, 2025.

[8] J. L. Dickson, B. L. Ehlmann, L. Kerber, C. I. Fassett, The global Context Camera (CTX) mosaic of Mars: a product of information-preserving image data processing, 2024.

[9] L. Martinez, F. Andrieu, F. Schmidt, M. S. Bentley, Automatic crater classification using a deep-learning-based pipeline, JGR Machine Learning, under review, 2025.

How to cite: Martinez, L., Andrieu, F., Schmidt, F., and Bentley, M. S.: Automatic classification of Martian impact craters using deep learning: a new tool to improve planetary surface dating, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-294, https://doi.org/10.5194/epsc-dps2025-294, 2025.

Posters: Thu, 11 Sep, 18:00–19:30 | Finlandia Hall foyer

Display time: Thu, 11 Sep, 08:30–19:30
Chairperson: Valerio Carruba
AI and ML working groups
F80
|
EPSC-DPS2025-1467
|
ECP
|
On-site presentation
Devanshi Kacholia, Nimisha Verma, Mario D’Amore, Marianna Angrisani, Alessandro Frigeri, Frédéric Schmidt, Valerio Carruba, Y. Güray Hatipoğlu, Maarten Roos-Serote, Evgeny Smirnov, Natalia Amanda Vergara Sassarini, Arif Solmaz, Dagmara Oszkiewicz, and Stavro Ivanovski

Introduction

Artificial Intelligence (AI) is one of the most influential fields of the 21st century (Zhang et al., 2021). Rich, E. (2019) candidly described it as “the study of how to make computers do things which, at the moment, people do better”; today, AI often surpasses human ability in tasks like large-scale data mining and pattern recognition - its true strength. AI’s subfields, Machine Learning (ML) and Deep Learning (DL), play a critical role in expanding its usage to a wide variety of fields such as planetary science, astronomy, earth observation, and remote sensing, to name a few. There is a growing inclination towards incorporating AI into planetary science studies, given the vast and complex nature of planetary data. In fact, AI has already been instrumental in extracting meaningful insights and advancing research in both interplanetary and astronomical studies.

In planetary sciences, several AI techniques have been employed to bridge gaps in our understanding of the varied patterns and occurrences of natural features observable in the data returned by scientific payloads. For example, PCA and cluster analysis can help detect patterns of compositional variation in multi- and hyperspectral imagery (Moussaoui et al., 2008; D’Amore & Padovan, 2022). Furthermore, to study specific features, the patterns in their occurrence, and correlations with neighbouring features, unsupervised algorithms or more complex supervised techniques can be helpful, depending on the scale of the task.

From simple unsupervised learning methods like the clustering used to study the spectral signatures of Jezero crater on Mars (Pletl et al., 2023), to applying large language models to track asteroids affected by gravitational effects that alter their orbits (Carruba et al., 2025), such applications highlight the prospects of AI in planetary science. Hence, to develop a deeper understanding of the potential and applications of ML, a typical AI workflow is outlined below.

Typical AI workflow

A typical AI workflow begins with selecting a model suitable for the goals at hand (Figure 2). The data format, data quality, and the static or dynamic nature of the features of interest can all influence the choice of AI model or technique. Data preparation steps, such as normalizing the data, i.e. scaling it to the range [-1, 1], prevent any one feature from dominating the data and stabilize the model training process.
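The normalization step mentioned above amounts to a simple min-max rescaling to [-1, 1]:

```python
# Min-max normalization: linearly rescale a feature column to [-1, 1]
# so no single feature dominates model training.
def normalize_to_unit_range(values):
    lo, hi = min(values), max(values)
    if hi == lo:                       # constant feature: map to 0
        return [0.0 for _ in values]
    return [2.0 * (v - lo) / (hi - lo) - 1.0 for v in values]

print(normalize_to_unit_range([10.0, 15.0, 20.0]))  # → [-1.0, 0.0, 1.0]
```

In practice the scaling parameters (lo, hi) are computed on the training set only and reused for validation and test data, to avoid leaking information between the splits.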

Furthermore, parameters or hyperparameters are selected depending on the complexity of the model. While more complex models such as deep neural networks or Vision Transformers need hyperparameter tuning to maximize performance, simpler models rely mainly on predefined weights or fixed rules. Likewise, a model architecture should be chosen according to the data and targets. One example is Faster R-CNN, a robust, high-accuracy model that can be trained on high-resolution labelled images to perform object-detection tasks such as identifying craters.

In scientific use cases, the workflow often involves real data that must be separated into a training set, a validation set (to optimize the hyperparameters), and a test set completely independent of the training. To evaluate how well the model has learnt from the training dataset, accuracy, precision, recall, F1-score, and intersection-over-union (IoU) are the most popular statistics. Model predictions can then reveal where the model needs refinement for the use case; hence, fine-tuning the model is another crucial step.
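The listed metrics reduce to simple ratios of the confusion counts; for example (the counts here are made up for illustration):

```python
# Evaluation metrics from the counts of true positives (tp),
# false positives (fp), and false negatives (fn).
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall.
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# Example: 80 objects found correctly, 10 spurious detections, 20 missed.
print(precision(80, 10))   # → 0.8888888888888888
print(recall(80, 20))      # → 0.8
print(f1_score(80, 10, 20))
```

Accuracy additionally needs the true-negative count, and IoU is computed per detection from the overlap of predicted and ground-truth regions rather than from these global counts.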

Figure 2: A typical AI workflow

Potential of AI

A successful application of ML in planetary science can be driven by collaboration between scientists and ML experts. Scientists (astronomers, geologists, planetary scientists, etc.) are arguably better equipped to answer science-based questions, such as what qualifies as a crater, while ML experts may be more adept at assessing data preparation techniques to remove noise. The field of planetary science encompasses themes such as anomaly detection, simulation and surface modeling, atmospheric studies, gravitational behavior and its effects on planets and smaller bodies, and instrumentation and spacecraft design, which necessitate such collaborations for optimal results. In recent years, Large Language Models (LLMs) have driven a significant paradigm shift in AI applications thanks to the pattern understanding acquired during their vast pre-training phase. For the time series analysis, image classification, and pattern identification tasks common in planetary science, LLMs can significantly streamline workflows by reducing the need for specialized preprocessing steps.

Given the enormous volumes of data from missions and observational surveys, and the numerous applications within planetary science, there is a pressing need for workflows that not only automate analysis but also support objective, standardized decision-making for planetary science problems. The Europlanet Machine Learning Working Group does exactly this by sharing the latest techniques, tools, and applications, opening doors for people who want to apply these robust methods.

References

  • Rich, E. (2019). Artificial Intelligence 3E (Sie) (Vol. 63, No. 4). Tata McGraw-Hill Education.
  • Moussaoui, S. et al. (2008). On the decomposition of Mars hyperspectral data by ICA and Bayesian positive source separation. Neurocomputing, 71, 2194-2208. https://doi.org/10.1016/j.neucom.2007.07.034
  • Carruba, V. et al. (2025). Vision Transformers for identifying asteroids interacting with secular resonances. Icarus, 425, 116346. https://doi.org/10.1016/j.icarus.2024.116346
  • D’Amore, M., & Padovan, S. (2022). Chapter 7 Automated surface mapping via unsupervised learning and classification of Mercury Visible–Near-Infrared reflectance spectra. In J. Helbert, M. D’Amore, M. Aye, & H. Kerner (Eds.), Machine Learning for Planetary Science (pp. 131–149). Elsevier. https://doi.org/10.1016/B978-0-12-818721-0.00016-1
  • Pletl, A. et al. (2023). Spectral Clustering of CRISM Datasets in Jezero Crater Using UMAP and k-Means. Remote Sensing, 15(4), 939. https://doi.org/10.3390/rs15040939
  • Zhang, D. et al. (2021). The AI Index 2021 Annual Report. arXiv. https://doi.org/10.48550/arXiv.2103.06312

How to cite: Kacholia, D., Verma, N., D’Amore, M., Angrisani, M., Frigeri, A., Schmidt, F., Carruba, V., Hatipoğlu, Y. G., Roos-Serote, M., Smirnov, E., Sassarini, N. A. V., Solmaz, A., Oszkiewicz, D., and Ivanovski, S.: Artificial Intelligence in Planetary Science and Astronomy: Applications and Research Potential, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-1467, https://doi.org/10.5194/epsc-dps2025-1467, 2025.

AI and ML for small Solar System bodies
F81
|
EPSC-DPS2025-226
|
Virtual presentation
Evgeny Smirnov
Recent advances in large language models (LLMs) have opened new possibilities for astronomical data analysis and classification tasks. While multimodal models such as GPT-4o/4.1, Claude 3.5/3.7, and others have demonstrated remarkable capabilities in processing both text and images, their application to astronomy has been limited by substantial operational costs. This work presents a comprehensive evaluation of open-source LLMs, including LLaMA 3.2, Gemma, and DeepSeek, for astronomical data classification tasks, with particular emphasis on mean-motion resonance identification in asteroid dynamics.
In this research, it is demonstrated that open-source models can achieve performance acceptable for the given problem and outperform traditional neural networks while significantly reducing operational costs. This approach leverages the inherent pattern recognition capabilities of LLMs to analyze time series data and astronomical images, tasks that traditionally require specialized algorithms and extensive computational resources. Building upon previous work that established the viability of multimodal LLMs for resonance identification, it is shown that careful prompt engineering, model instructing, and fine-tuning can yield acceptable accuracy rates even with freely available models that can be run on a researcher's laptop.
Furthermore, a structured framework for developing standardized benchmarks for astronomical tasks using LLMs is introduced. This framework includes: (1) systematic dataset curation protocols, (2) evaluation metrics adjusted to specific astronomical applications, (3) cross-model performance comparison methodologies, and (4) guidelines for prompt engineering techniques. These benchmarks enable reproducible performance assessment across different LLM architectures and can facilitate the identification of cost-effective solutions for specific astronomical problems.
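The cross-model comparison idea in point (3) can be sketched with a model-agnostic harness. The stub below stands in for any local LLM call; the names, prompts, and interface are illustrative assumptions, not the benchmark's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class BenchmarkCase:
    prompt: str      # engineered prompt describing the task
    expected: str    # curated label, e.g. "libration" / "circulation"

def run_benchmark(model: Callable[[str], str], cases: List[BenchmarkCase]) -> Dict[str, float]:
    """Cross-model comparison loop: every model is a plain prompt->answer
    callable, so open-source and commercial LLMs are scored identically."""
    correct = sum(model(c.prompt).strip().lower() == c.expected for c in cases)
    return {"accuracy": correct / len(cases)}

# Hypothetical stub standing in for a local LLM call
def stub_model(prompt: str) -> str:
    return "libration" if "librat" in prompt else "circulation"

cases = [BenchmarkCase("Does this argument show libration?", "libration"),
         BenchmarkCase("The angle circulates through 360 deg.", "circulation")]
print(run_benchmark(stub_model, cases))  # {'accuracy': 1.0}
```

Wrapping each model behind the same callable signature is what makes the comparison reproducible across architectures.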

How to cite: Smirnov, E.: Open-source large language models in astronomical data classification: applications and benchmarking, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-226, https://doi.org/10.5194/epsc-dps2025-226, 2025.

F82
|
EPSC-DPS2025-58
|
On-site presentation
Valerio Carruba, Safwan Aljbaae, Evgeny Smirnov, and Gabriel Caritá

Asteroid families are groups of asteroids formed by a collision or fission event. Some asteroid families interact with secular resonances. Because of planetary perturbations, the pericenters and nodes of planets and asteroids precess with frequencies g and s, respectively. When the pericenter precession frequency of the asteroid (g) is close to that of Saturn (g6), i.e., g − g6 ≈ 0, the ν6 resonance occurs.

Figure (1): The location of main secular resonances in the (a, sin(i)) domain.

Contrary to mean-motion resonances, secular resonances cannot be easily identified in 2-D domains of proper elements. To identify whether an asteroid is in a secular resonance, we need to investigate the time behavior of its resonant argument. With over 8 million asteroids predicted to be discovered by the Vera C. Rubin Observatory, traditional visual analysis of arguments will no longer be feasible.

Figure (2): Examples of resonant arguments for asteroids circulating, alternating phases of circulation and libration, and in libration states.

The first deep learning approach for identifying asteroids interacting with secular resonances was introduced in Carruba et al. (2021), with a multi-layer perceptron model. This is a five-step process:

1. We integrate the asteroid orbits under the gravitational influences of all planets.

2. We compute the time series of the resonant argument.

3. Images of these time series are obtained for each asteroid.

4. The model is trained on a set of labeled image data.

5. The model predicts the labels for a set of test images.
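Steps 2-3, and the intuition behind the labels used in steps 4-5, can be illustrated with a toy rasterization of synthetic resonant arguments. This is our own sketch, not the authors' code; the time series and the rasterization scheme are illustrative:

```python
import numpy as np

def argument_to_image(t, sigma, bins=32):
    """Rasterize a resonant-argument time series (steps 2-3) into a
    2-D binary array, the kind of image a classifier is trained on."""
    img, _, _ = np.histogram2d(t, sigma, bins=bins,
                               range=[[t.min(), t.max()], [0.0, 360.0]])
    return (img > 0).astype(np.float32)

t = np.linspace(0.0, 1.0, 2000)
# Circulating argument: cycles repeatedly through the full 0-360 deg range
circulating = (360.0 * 50 * t) % 360.0
# Librating argument: oscillates about 180 deg with 60 deg amplitude
librating = 180.0 + 60.0 * np.sin(2 * np.pi * 5 * t)

img_c = argument_to_image(t, circulating)
img_l = argument_to_image(t, librating)
# A librating argument occupies a much narrower band of the image
print(img_c.sum() > img_l.sum())  # True
```

A CNN or ViT trained on labeled images of this kind (step 4) can then predict the circulation/libration label for unseen asteroids (step 5).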

Carruba et al. (2022) applied Convolutional Neural Networks (CNNs) to the classification of large databases of images, with regularization techniques to correct overfitting. In Carruba et al. (2024), digitally filtered images of resonant arguments were used to enhance the performance of CNNs. Finally, Carruba et al. (2025) applied Vision Transformers for the first time.

Convolutional neural networks (CNNs) are a neural network model originally designed to work with two-dimensional image data. Their name derives from the convolutional layer. Convolution is a linear operation involving the multiplication of a two-dimensional array of weights (the filter) with an input array. The result of applying the filter is a two-dimensional array: the feature map. Three of the most commonly used CNN models are VGG (Simonyan & Zisserman 2014), Inception (Szegedy et al. 2015), and the Residual Network, or ResNet (He et al. 2015).
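The convolution operation that produces a feature map can be written out explicitly in a few lines of NumPy; this is a didactic sketch (like most deep learning frameworks, it implements cross-correlation, i.e., the kernel is not flipped):

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution: slide the weight array over the input
    and sum elementwise products into a feature map."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal difference filter applied to an image with one vertical edge
image = np.zeros((5, 5))
image[:, 3:] = 1.0
kernel = np.array([[1.0, -1.0]])
feature_map = conv2d(image, kernel)  # responds only where the edge sits
```

In a trained CNN the kernel weights are learned from data rather than hand-chosen, and many such feature maps are stacked per layer.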

Figure (3): An example of the application of Vision Transformers to the analysis of an image.

The Vision Transformer architecture for classifying images of resonant arguments was first applied in Carruba et al. (2025). The ViT model is based on the Transformer architecture (Vaswani et al. 2017), and it applies the Transformer architecture directly to image data, without the need for CNNs. In the ViT approach, an input image is split into fixed-size patches, usually 1/10 of the image size, which are then linearly embedded and fed into the Transformer encoder. The Transformer encoder consists of a series of Transformer blocks, which are made of two parts:

a. Self-Attention Mechanism: This allows the model to weigh the importance of different patches in a sequence relative to each other, enabling it to capture contextual relationships regardless of their distance in the input sequence.

b. Feed-Forward Neural Network: After the self-attention step, the output is passed through a feed-forward network, which applies transformations to the data independently for each position in the sequence.

Multiple transformer blocks can be stacked to form a complete Transformer model, allowing it to capture long-range dependencies and global information within the image.  Two key hyperparameters in our model are:

1. num_layers: The number of Transformer blocks.

2. num_heads: The number of attention heads in the Multi-Head Attention layer.
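The role of num_heads can be illustrated with a toy multi-head self-attention in NumPy. This is a deliberately stripped-down sketch: the query/key/value projections are identity maps here, whereas a real ViT learns separate weight matrices for each:

```python
import numpy as np

def multi_head_attention(x, num_heads):
    """Toy multi-head self-attention over a sequence of patch embeddings.
    Each head attends over its own slice of the embedding dimension."""
    seq_len, dim = x.shape
    assert dim % num_heads == 0
    head_dim = dim // num_heads
    heads = []
    for h in range(num_heads):
        q = k = v = x[:, h * head_dim:(h + 1) * head_dim]
        scores = q @ k.T / np.sqrt(head_dim)            # patch-to-patch affinities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        heads.append(weights @ v)                       # attention-weighted values
    return np.concatenate(heads, axis=-1)

# 9 patches (e.g. a 3x3 grid of a split image), embedding dimension 8, 2 heads
patches = np.random.default_rng(0).normal(size=(9, 8))
out = multi_head_attention(patches, num_heads=2)
print(out.shape)  # (9, 8)
```

Stacking num_layers of such blocks (each followed by the feed-forward network) yields the full Transformer encoder described above.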

We applied CNNs and ViTs to three publicly available databases of images of resonant arguments for the ν6 (Carruba et al. 2022), g − 2g6 + g5 (Carruba et al. 2024a), and s − s6 − g5 + g6 (Carruba et al. 2024b) resonances.

Figure (4): Semi-logarithmic plot of the computational time for applying different methods of image classification.

The models' performance was superior when applied to images of filtered resonant arguments. ViT models outperformed CNNs in terms of running times (10 times faster!) and evaluation metrics, and their results are comparable to those of models produced by the new LLM approach of Smirnov (2024).

References

Carruba V., Aljbaae S., Domingos R. C., Barletta W., 2021, Artificial Neural Network classification of asteroids in the M1:2 mean-motion resonance with Mars, MNRAS, 504, 692.

V. Carruba, S. Aljbaae , G. Carita, R. C. Domingos, B. Martins, 2022, Optimization of Artificial Neural Networks models applied to the identification of images of asteroids' resonant arguments, CMDA, 134, A59.

V. Carruba, S. Aljbaae, R. C. Domingos, G. Carita, A. Alves, E. M. D. S. Delfino, Digitally filtered resonant arguments for deep learning classification of asteroids in secular resonances, 2024, MNRAS, 531, 4432-4443.

V. Carruba, S. Aljbaae, E. Smirnov, G. Carita, 2025, Vision Transformers for identifying asteroids interacting with secular resonances, Icarus, 425C 116346.

E. Smirnov, 2024, Fast, Simple, and Accurate Time Series Analysis with Large Language Models: An Example of Mean-motion Resonances Identification, ApJ, 966(2), 220.

K. Simonyan and A. Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. ArXiv e-prints , page arXiv:1409.1556, September 2014.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1-9, 2015.

K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition, 2015.

A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017.

How to cite: Carruba, V., Aljbaae, S., Smirnov, E., and Caritá, G.: Vision Transformers for identifying asteroids interacting with secular resonances., EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-58, https://doi.org/10.5194/epsc-dps2025-58, 2025.

F83
|
EPSC-DPS2025-149
|
ECP
|
On-site presentation
Vanessa Vichi and Giacomo Tommei

The solar system is home to a diverse population of small celestial objects, including asteroids, comets, and meteoroids. Most small bodies in the solar system are found in two distinct regions, known as the Main Belt and the Kuiper Belt. However, certain small bodies, such as Near-Earth Objects (NEOs), have orbits that bring them into close proximity with the Earth and, in some cases, even collide with our planet. The goal of Impact Monitoring (IM) is to assess the risk of collision of a small body with Earth. Understanding the potential risk posed by an asteroid and monitoring objects with a higher risk of collision is crucial for developing planetary defense strategies. Since vast amounts of data from astronomical surveys will become available in the coming years, it is essential to implement a preliminary filter to determine which objects should be prioritized for follow-up with traditional IM methods.

We present a novel method for estimating the Minimum Orbit Intersection Distance (MOID) of a NEO based on Artificial Neural Networks (NNs). The MOID is defined as the minimum distance between the two osculating Keplerian orbits of the Earth and the NEO, regarded as curves in three-dimensional space; it is usually used as an indicator of the possibility of a collision between the asteroid and the Earth, at least for the period during which the Keplerian orbit of the asteroid provides a reliable approximation of the actual orbit. Since Machine Learning (ML) has gained enormous popularity in recent years and has also been applied to some Celestial Mechanics problems, we decided to estimate the MOID with a multilayer feedforward NN, which takes as input the coordinates of the asteroid at a specified epoch. After being trained on an artificial dataset of about 800,000 NEOs generated with NEOPOP, the NN was tested on the currently known population of Near-Earth Asteroids. The network delivers near-instantaneous predictions of the MOID and achieves a mean absolute error of approximately 10^-3 on the test set. Fig. 1 shows the histogram of the actual and predicted values. The overestimation of the number of asteroids with a MOID value of 0 is due to the activation function used in the final layer of the NN, namely ReLU, which, by definition, outputs 0 for any negative input. By selecting a threshold value of 0.05, we transformed the regression problem into a classification problem. In particular, we take as the positive class all asteroids with a predicted MOID exceeding the threshold. The resulting accuracy and false positive rate (FPR) are approximately 96.61% and 2.56%, respectively. To reduce false positives, we propose to prioritize testing with classical IM methods every object with a predicted MOID of 0.10 or less.
In fact, we believe that ML should serve as an initial screening tool, enabling us to prioritize follow-up assessments using traditional IM methods when managing large volumes of data.
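The regression-to-classification step can be sketched as a simple thresholding of predicted MOID values; the numbers below are illustrative placeholders, not the paper's data:

```python
import numpy as np

def threshold_classify(actual, predicted, threshold=0.05):
    """Turn MOID regression into classification: the positive class is
    MOID above the threshold; report accuracy and false positive rate."""
    y_true = actual > threshold
    y_pred = predicted > threshold
    acc = np.mean(y_true == y_pred)
    fp = np.sum(y_pred & ~y_true)            # predicted safe, actually close
    tn = np.sum(~y_pred & ~y_true)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return acc, fpr

# Illustrative MOID values only (in au)
actual = np.array([0.00, 0.02, 0.04, 0.06, 0.20, 0.30])
predicted = np.array([0.00, 0.03, 0.07, 0.05, 0.22, 0.28])
acc, fpr = threshold_classify(actual, predicted)
```

False positives are the dangerous errors here, which motivates the extra safety margin of re-checking every object with a predicted MOID of 0.10 or less using classical IM methods.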

Figure 1: Histogram of the actual and predicted values

As a follow-up, we are testing the possibility of developing a NN capable of predicting the MOID starting from computable quantities derived directly from the observations. This would eliminate the need to calculate a preliminary orbit and apply the differential corrections procedure.
Specifically, we intend to use as the input vector for the NN an attributable (α, δ, α̇, δ̇), together with the second derivatives of right ascension and declination. In fact, given m ≥ 3 optical observations (αi, δi) at times ti, it is easy to compute, with a quadratic fit of each angular variable separately, the quantities α, α̇, α̈ and δ, δ̇, δ̈. Although this task is more difficult, both in terms of data acquisition and NN training, the preliminary findings are promising.
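The quadratic fit of each angular coordinate can be sketched with NumPy's polynomial fitting; this is an illustrative snippet on synthetic data (function name and epochs are ours):

```python
import numpy as np

def attributable_fit(times, angles):
    """Quadratic fit of one angular coordinate over m >= 3 epochs,
    returning its value, rate, and second derivative at the mid-epoch."""
    t0 = times.mean()
    c2, c1, c0 = np.polyfit(times - t0, angles, deg=2)
    return c0, c1, 2.0 * c2   # angle, first derivative, second derivative

# Synthetic right-ascension track: alpha(t) = 1.0 + 0.5 t + 0.1 t^2
t = np.linspace(-1.0, 1.0, 7)
alpha = 1.0 + 0.5 * t + 0.1 * t**2
a, adot, addot = attributable_fit(t, alpha)
```

The same fit applied to the declination track yields δ, δ̇, δ̈, completing the proposed NN input vector without any preliminary orbit determination.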

In conclusion, this research represents a step forward in addressing the urgent need for effective IM techniques, partially answering the question of whether ML can serve as a preliminary filter for some orbit determination problems.


[1] Vichi, V., Tommei, G. Exploring the potential of neural networks in early detection of potentially hazardous near-Earth objects, Celest Mech Dyn Astron 137, 17 (2025).

 

How to cite: Vichi, V. and Tommei, G.: Exploring the potential of neural networks in early detection of potentially hazardous Near-Earth Objects, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-149, https://doi.org/10.5194/epsc-dps2025-149, 2025.

F84
|
EPSC-DPS2025-709
|
ECP
|
On-site presentation
Shunjing Zhao, Xian Shi, and Hanlun Lei

Understanding the surface and subsurface temperature distributions of small bodies in the Solar System is fundamental to thermophysical studies, which provide insight into their composition, evolution, and dynamical behavior [1,2]. Thermophysical models are essential tools for this purpose, but conventional numerical treatments are often computationally expensive. This limitation presents significant challenges, particularly for studies requiring high-resolution simulations or large-scale, repeated calculations across parameter spaces.

To overcome these computational bottlenecks, we developed ThermoONet -- a deep learning-based neural network designed to efficiently and accurately predict temperature distributions for small Solar System bodies [3,4]. ThermoONet is trained on results from traditional thermophysical simulations and is capable of replicating their accuracy with dramatically reduced computational cost. We apply ThermoONet to two representative cases: modeling the surface temperature of asteroids and the subsurface temperature of comets. Evaluation against numerical benchmarks shows that ThermoONet achieves mean relative errors of approximately 1% for asteroids and 2% for comets, while reducing computation time by over five orders of magnitude.

We test the ability of ThermoONet with two scientifically compelling yet computationally heavy tasks. We model the long-term orbit evolution of asteroids (3200) Phaethon and (89433) 2001 WM41 using N-body simulations augmented by instantaneous Yarkovsky accelerations derived from ThermoONet-driven thermophysical modelling [3]. Results show that by applying ThermoONet, it is possible to employ actual shapes of asteroids for high-fidelity modelling of the Yarkovsky effect. Furthermore, we employ ThermoONet to simulate water ice activity of comets [4]. By fitting the water production rate curves of comets 67P/Churyumov-Gerasimenko and 21P/Giacobini-Zinner, we show that ThermoONet could be of use for the inversion of physical properties of comets that are difficult to achieve with traditional methods.

[1] Delbo, M., Mueller, M., Emery, J.P., Rozitis, B. and Capria, M.T., 2015. Asteroid thermophysical modeling. Asteroids iv1, pp.107-128.

[2] Prialnik, D., Benkhoff, J. and Podolak, M., 2004. Modeling the structure and activity of comet nuclei. Comets II, 1, pp.359-387.

[3] Zhao, S., Lei, H. and Shi, X., 2024. Deep operator neural network applied to efficient computation of asteroid surface temperature and the Yarkovsky effect. Astronomy & Astrophysics, 691, p.A224.

[4] Zhao, S., Shi, X. and Lei, H., 2025. ThermoONet: Deep learning-based small-body thermophysical network: Applications to modeling the water activity of comets. Astronomy & Astrophysics, in press.

How to cite: Zhao, S., Shi, X., and Lei, H.: ThermoONet -- Deep Learning-based Small Body Thermophysical Network, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-709, https://doi.org/10.5194/epsc-dps2025-709, 2025.

F85
|
EPSC-DPS2025-1601
|
ECP
|
On-site presentation
Alexander Sivitilli, Andrew Marshall-Lee, and Apostolos Christou

Asteroid families are groups of small bodies originating from a common progenitor asteroid that fragmented through collisional, tidal, or rotational disruptions. Identifying these groups furthers our understanding of the solar system's dynamical and compositional evolution. Traditional ways of classifying these families with the Hierarchical Clustering Method (HCM) [1] in proper orbital parameter space are uniquely sensitive to added data points. This is due to the increased possibility of chaining effects [2] that merge families without shared collisional histories into single ones, especially in cases of complex or overlapping distributions. This sensitivity trade-off results in so-called 'halo' asteroids being excluded from core family membership.

The upcoming Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST) is expected to detect more than 5 million new asteroids [3]. Entering this era of Big Data for asteroids means an increasing likelihood of chaining effects. Introducing additional parameters, such as magnitude and albedo, can add new constraints to avoid this. Artificial Neural Networks (ANNs) offer a promising approach to identifying families [4] by enabling such constraints. Training models on both real and synthetic data transforms large volumes into an asset rather than a limitation.

As these ANNs are only recently being developed, we must consider methods of confirming their family classifications, both in the realm of synthetic training data and for real results. Such methods require dedicated tools to render large, sparse datasets while also enabling productive interaction with them. A robust human interface for both data and results is thus needed that can handle multiparameter data.

The immersive Data Visualisation Interactive Explorer (iDaVIE) [5] software suite is an open-source, Unity-based virtual reality (VR) tool developed alongside the H I astronomy community for quality control of automated source finding of galaxies in SKA pathfinder projects. With iDaVIE, researchers in the field have successfully used VR both to identify real H I emission that was missed and to remove spurious detections [6-8]. Although iDaVIE’s primary scientific use has been through its tools for volumetric data analysis, its secondary particle rendering and interaction capabilities have remained underutilised. Asteroid family classification with ANNs presents a well-suited application for this flavour of iDaVIE.

In this contribution, we present early results on integrating iDaVIE to assist with ANN model development, particularly in the verification of asteroid family classification and the labelling of training data. The VR environment enables researchers to explore and validate families, whether derived from traditional techniques or modern synthetic methods. We have enhanced iDaVIE’s particle rendering capabilities and interactive tools to support this application, allowing users to inspect model classification results in an immersive 3D space. New features, such as a basic GUI interface, conditional subsets, navigation aids, and fine-tuning adjustments for rendering parameters, are described in how they were implemented and used. Many of these were either adapted from the volumetric context or developed from scratch based on our specific use cases.

Figure 1 Using iDaVIE to investigate main belt family halos in proper orbital parameter space with data from the Asteroid Family Portal [9]

We also review these use cases, specifically how we verified results from an ANN model developed to classify asteroid halo members of the (163) Erigone family. Here we took the output probabilities of family membership and compared them to previous study results that relied on the HCM. By mapping proper orbital and physical properties to various spatial and rendering parameters in the 3D virtual space (see Figure 1), we were able to identify which halo asteroids were classified appropriately. The new interaction tools allowed us to isolate these asteroids and iteratively adjust parameters to refine and confirm results.

Our contribution demonstrates how immersive data visualisation can play a key role in advancing both methodological development and scientific discovery in planetary science, particularly when working alongside Artificial Intelligence. This also serves to open up the discussion to the broader planetary science community of how such interactive tools can be introduced to modern research workflows. A critical next step is collecting input on new features and tools that could support other applications, particularly with aiding in the development of machine learning models.

Acknowledgements:

Astronomical research at the Armagh Observatory & Planetarium is grant-aided by the Northern Ireland Department for Communities (DfC). Work by the authors was supported by the Leverhulme Trust. The visualisation work was supported by the iDaVIE development team hosted at the Inter-University Institute of Data Intensive Astronomy.

Bibliography:

  • Zappala, V., et al., Asteroid Families. I. Identification by Hierarchical Clustering and Reliability Assessment. The Astronomical Journal, 1990. 100: p. 2030.
  • Nesvorný, D., M. Brož, and V. Carruba, Identification and Dynamical Properties of Asteroid Families, in Asteroids IV, P. Michel, F.E. DeMeo, and W.F. Bottke, Editors. 2015. p. 297-321.
  • Abell, P.A., et al., Lsst science book, version 2.0. 2009.
  • Carruba, V., et al., Machine learning classification of new asteroid families members. Monthly Notices of the Royal Astronomical Society, 2020. 496(1): p. 540-549.
  • Jarrett, T., et al., iDaVIE: Immersive Data Visualisation Interactive Explorer, in Zenodo Software. 2024, Zenodo: Zenodo. p. 4614115.
  • Deg, N., et al., WALLABY pilot survey: the potential polar ring galaxies NGC 4632 and NGC 6156. Monthly Notices of the Royal Astronomical Society, 2023. 525: p. 4663-4684.
  • Glowacki, M., et al., A serendipitous discovery of H I-rich galaxy groups with MeerKAT. Monthly Notices of the Royal Astronomical Society, 2024. 529: p. 3469-3483.
  • Kleiner, D., et al., A MeerKAT view of pre-processing in the Fornax A group. Astronomy and Astrophysics, 2021. 648: p. A32.
  • Novaković, B., et al., Asteroid families: properties, recent advances, and future opportunities. Celestial Mechanics and Dynamical Astronomy, 2022. 134(4): p. 34.

How to cite: Sivitilli, A., Marshall-Lee, A., and Christou, A.: Supporting Machine Learning-Based Asteroid Family Classification with Immersive Visualisation, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-1601, https://doi.org/10.5194/epsc-dps2025-1601, 2025.

F86
|
EPSC-DPS2025-496
|
ECP
|
On-site presentation
Yaqiong Wang, Huan Xie, and Daniel Hestroffer

Asteroid exploration, sample return, and defense missions critically require accurate three-dimensional (3D) reconstructions of target celestial bodies, including global shape models and high-fidelity local topographic details (Richardson et al. 2022; Thomas et al. 2023). Traditional methods, such as photogrammetry, stereophotoclinometry, and shape-from-shading, have been extensively used to derive global and local 3D models from images (Gaskell et al. 2008; Palmer et al. 2022). However, achieving reliable geometric accuracy and capturing fine-scale terrain features from limited observational data, such as sparse viewpoints and constrained illumination, remains a significant challenge.

This paper focuses on local topographic refinement, a process crucial for various advanced applications. It supports tasks such as high-precision engineering and scientific site reconstruction, provides high-detail maplets enabling more accurate global 3D reconstructions, and offers high-fidelity topographic features essential for precision Terrain Relative Navigation (TRN) (Gaskell et al. 2008, 2023; Olds et al. 2015).

The core innovation of the proposed method is the ability to achieve detailed topographic reconstruction of a local surface region using only a limited number of images—potentially even a single image—under constrained viewing and illumination conditions. To address the ill-posed problem of recovering high-frequency topographic details from sparse or limited observational data, we propose leveraging powerful diffusion priors from generative models pre-trained on low-resolution images and digital elevation models (DEMs). These priors encode geometric knowledge that can guide the reconstruction process, thereby significantly reducing the stringent requirement for extensive multi-view, multi-illumination image sets. Furthermore, the proposed generative framework is designed to inherently incorporate principles similar to GeoWizard’s geometry switcher and cross-domain attention mechanisms (Fu et al. 2024). This design facilitates a joint estimation of both elevation and implied surface orientation (normals). This joint estimation aims to ensure high geometric consistency between the derived elevation map and its corresponding surface normals, ultimately yielding a more reliable and geometrically accurate high-resolution topography.

Experimental validation was conducted on real data from NASA's OSIRIS-REx mission to asteroid Bennu, including MapCam, PolyCam, and NavCam images. Starting with 75 cm resolution local DEMs (99×99 grids), high-resolution DEMs were iteratively generated at 25 cm, 18 cm, and 10 cm resolutions. This process involved refining the DEM from the preceding resolution level using just 1-3 corresponding high-resolution images for each successive target resolution. The 10 cm resolution DEM achieves less than 1 cm average root mean square error (RMSE) compared to the NASA-published 5 cm ground truth. Additionally, to evaluate the effectiveness of the refined DEM, we rendered it under the same illumination and observation conditions as the captured image. The Scale-Invariant Feature Transform (SIFT) was then applied to match the rendered image against the captured image. The high matching success rate indicates that the reconstructed terrain captures effective image textures, which further validates the reliability of our method in capturing terrain details. The refined DEMs also exhibit enhanced compatibility with the TRN task. The landmark matching success rate exceeds 95%, significantly higher than the rates achieved without terrain detail.

Reference

Fu, X., Yin, W., Hu, M., et al. (2024). Geowizard: Unleashing the diffusion priors for 3d geometry estimation from a single image. In European Conference on Computer Vision Conference (pp. 241-258)

Gaskell, R. W., Barnouin‐Jha, O. S., Scheeres, D. J., et al. (2008). Characterizing and navigating small bodies with imaging data. Meteoritics & Planetary Science, 43(6), 1049-1061.

Gaskell, R. W., Barnouin, O. S., Daly, et al. (2023). Stereophotoclinometry on the OSIRIS-REx mission: mathematics and methods. The Planetary Science Journal, 4(4), 63.

Palmer, E. E., Gaskell, R., Daly, M. G., et al. (2022). Practical stereo-photoclinometry for modeling shape and topography on planetary missions. The Planetary Science Journal, 3(5), 102.

Richardson, D. C., Agrusa, H. F., Barbee, B., et al. (2022). Predictions for the dynamical states of the Didymos system before and after the planned DART impact. The planetary science journal, 3(7), 157.

Olds, R., May, A., Mario, C., et al. (2015). The application of optical based feature tracking to OSIRIS-REx asteroid sample collection. In AAS Guidance, Navigation, & Control Conference, AAS pp. 15-124.

Thomas, C. A., Naidu, S. P., Scheirich, P., et al. (2023). Orbital period change of Dimorphos due to the DART kinetic impact. Nature, 616(7957), 448-451.

How to cite: Wang, Y., Xie, H., and Hestroffer, D.: High-Fidelity Local 3D Terrain Reconstruction for Asteroids via Generative Modeling, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-496, https://doi.org/10.5194/epsc-dps2025-496, 2025.

AI and ML in Planetary Science
F87
|
EPSC-DPS2025-1703
|
ECP
|
On-site presentation
Mariarca D'Aniello, Claudio Velotti, Gennaro Esposito Mocerino, and Carlo Donadio

Introduction

Fluvial networks provide key insights into surface processes, underlying lithology, and the tectonic and climatic history of planetary bodies. On Earth, these drainage systems have been studied extensively, but classification still relies heavily on visual interpretation. Such manual methods are not scalable, especially when applied to large datasets or remote planetary terrains where direct observation is limited. This study presents a fully automated, unsupervised machine learning framework designed to classify fluvial patterns objectively, initially tested on terrestrial rivers but developed with a clear orientation toward planetary applications.

 

Methods

The classification pipeline begins with HydroRIVERS v1.0, a global hydrographic dataset, from which river segments are extracted. For each segment, a suite of morphometric descriptors is calculated, encompassing parameters related to geometry, orientation, curvature, and network topology. These features are used as inputs for unsupervised clustering. Three clustering techniques were evaluated: K-Means, Gaussian Mixture Models (GMM), and CLARANS. Among them, K-Means consistently delivered the highest internal validation scores across silhouette, Davies-Bouldin, and Dunn indices. CLARANS, while computationally more intensive, offered greater interpretability by selecting real river segments as cluster centers, which is especially valuable in a geomorphological context. Classification at the river-system level was then achieved by exploring multiple strategies, including majority voting among segments, and aggregation of morphometric features by their mean or median; they all produced reliable and coherent drainage typologies.
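The clustering stage described above can be sketched as follows. This is a minimal NumPy illustration (plain Lloyd's k-means with a hand-rolled Davies-Bouldin index, lower is better), not the authors' pipeline; the two-blob feature matrix stands in for the morphometric descriptors:

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Farthest-point initialization (deterministic), then Lloyd's iterations."""
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])          # next seed: farthest point so far
    centroids = np.array(centroids)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)                # assign each segment to nearest centroid
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

def davies_bouldin(X, centroids, labels):
    """Davies-Bouldin index: mean over clusters of the worst scatter/separation ratio."""
    k = len(centroids)
    s = np.array([np.linalg.norm(X[labels == j] - centroids[j], axis=1).mean()
                  for j in range(k)])
    db = 0.0
    for i in range(k):
        db += max((s[i] + s[j]) / np.linalg.norm(centroids[i] - centroids[j])
                  for j in range(k) if j != i)
    return db / k

# Synthetic stand-in for morphometric feature rows (e.g. sinuosity, junction angle).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 4)), rng.normal(3, 0.3, (50, 4))])
c, lab = kmeans(X, 2)
print("DB index:", round(davies_bouldin(X, c, lab), 3))
```

In practice a library implementation (e.g. scikit-learn's `KMeans` and `davies_bouldin_score`) would replace this sketch.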

Results

The optimal solution identified six distinct morphometric clusters. These clusters were subsequently interpreted and labeled based on canonical fluvial patterns, such as dendritic, sub-dendritic, radial, trellis, and rectangular. This interpretation followed geomorphological typologies standardized by Donadio et al. (2021) (Figure 1), enabling the translation of purely numerical groupings into geological meaning.


Figure 1. Different classes of drainage patterns following the scheme proposed by Donadio et al. 2021: a) dendritic; b) sub-dendritic; c) pinnate; d) parallel; e) radial; f) rectangular; g) trellis; h) angular; i) annular; j) contorted.

 

Validation against a reference dataset containing thousands of manually classified segments showed strong agreement. The K-Means algorithm achieved high consistency with expert labels, while CLARANS proved useful in highlighting key reference cases. To visualize the effectiveness of clustering, a principal component analysis (PCA) was performed, projecting the high-dimensional feature space into three dimensions (Figure 2).
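The PCA projection used for Figure 2 can be sketched via SVD of the centred feature matrix; the random matrix below is a stand-in for the high-dimensional morphometric features:

```python
import numpy as np

def pca_project(X, n_components=3):
    """Project rows of X onto the first n_components principal axes via SVD."""
    Xc = X - X.mean(axis=0)                    # centre the feature matrix
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / (S ** 2).sum()      # variance fraction per component
    return Xc @ Vt[:n_components].T, explained[:n_components]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                 # stand-in for morphometric features
Y, var = pca_project(X, 3)
print(Y.shape, var.round(3))
```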


Figure 2. Images of the resulting distribution, where the six clusters form distinct, compact groupings: 2a shows the clustering results with k-means algorithm; 2b shows the clustering results with CLARANS algorithm.

 

Planetary Application and Discussion

The versatility of the proposed framework makes it well suited for planetary science. Many bodies in the Solar System, such as Mars and Titan, show evidence of ancient or current fluvial activity. These features, visible in high-resolution orbital imagery, share morphometric properties with terrestrial rivers. The automated and objective nature of our method is ideal for application to planetary surfaces, where field validation is not possible and manual classification is impractical. On Mars, dendritic valley networks such as Warrego Valles (Figure 3) could be objectively identified and differentiated from structurally controlled systems like those near Valles Marineris. On Titan, channel networks observed in Cassini RADAR data present complex patterns possibly influenced by tectonics or cryovolcanism. The ability to distinguish between morphometric types without supervision may support hypotheses on climate evolution and crustal processes in these environments.

Figure 3. A visual comparison between a dendritic river system on Earth and a Martian network: 3A shows the original image of the Po River (Italy) before pre-processing; 3B shows the manually extracted network of the Po River; 3C shows the original image of Warrego Valles (Mars) before pre-processing; 3D shows the manually extracted network of Warrego Valles (Mars).

In the future, this approach may also be extended to Venus, where missions like ESA’s EnVision could reveal previously hidden fluvial features through synthetic aperture radar. Additionally, expanding the feature set to include elevation data and terrain roughness could further enhance classification capability and geological interpretation.

 

Conclusions

This study introduces a robust, unsupervised framework for the classification of fluvial networks that moves beyond the limitations of subjective interpretation. While trained and tested on terrestrial data, the method is explicitly designed with planetary applications in mind. It demonstrates strong potential for analyzing fluvial systems on Mars, Titan, and other planetary bodies, offering new perspectives on landscape evolution and hydrologic history. Future work will focus on applying the pipeline to specific planetary case studies, enhancing feature sets, and integrating data from multiple remote sensing platforms.

How to cite: D'Aniello, M., Velotti, C., Esposito Mocerino, G., and Donadio, C.: A Novel Machine Learning Approach for Objective Fluvial Network Classification: Earth & Beyond, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-1703, https://doi.org/10.5194/epsc-dps2025-1703, 2025.

F88
|
EPSC-DPS2025-982
|
On-site presentation
Petr Pokorny, Josef Hanus, Franck Marchis, and Thomas M. Esposito

We present an overview of the performance and improvements of the Occultation Detection neural Network framework (ODNet), a convolutional neural network for asteroid occultation detection (Cazeneuve+ 2023). ODNet is the backbone of the occultation detection program of the Unistellar network, an expanding network of 15,000+ smart telescopes distributed around the world that allows citizen scientists to join various observational campaigns and provide invaluable data for a multitude of scientific projects.

The original ODNet was built using the TensorFlow framework and was trained mostly on synthetic data. The purpose of ODNet is to provide an independent and reproducible measure of the likelihood of an occultation event in the observed dataset (a set of individual image frames), to remove any influence of human analysis, and to allow fully autonomous detection of occultations in all Unistellar observations in the occultation campaign.

The original results were promising. ODNet was able to analyze raw data from occultation events in minutes with very high precision (91%) and recall (87%). This enabled a completely automated pipeline that can handle multiple occultation events per day from multiple locations and observers. However, as more observations were made and different observing configurations, conditions, and telescope models appeared, the efficiency and precision of the original model were no longer satisfactory.
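For reference, the reported precision and recall follow directly from the confusion counts; the counts below are illustrative values chosen to reproduce the quoted percentages, not ODNet's actual tallies:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Illustrative counts consistent with ~91% precision and ~87% recall.
p, r = precision_recall(tp=91, fp=9, fn=14)
print(f"precision={p:.2f} recall={r:.2f}")
```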

Here, we present improvements to the original ODNet framework: precision and recall increased to >95%, the ability to handle very short occultation events (<4 frames / 1 second), and the ability to deal with more diverse observing conditions and with datasets from various telescopes. We improved the identification of false positives and were able to recover a number of short occultation events in the archival data. We also added a fully automated light-curve analysis that provides additional information about the nature of the observation and the significance of the event.

Ultimately, we present our latest advancements in the search for a lightweight solution based on the YOLOv8 (You Only Look Once) framework, which would allow observers to analyze both predicted and serendipitous occultation events in real time using the processing units on modern mobile devices.

 

How to cite: Pokorny, P., Hanus, J., Marchis, F., and Esposito, T. M.: Detecting stellar occultations using machine-learning techniques and smart telescopes powered by citizen scientists, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-982, https://doi.org/10.5194/epsc-dps2025-982, 2025.

F89
|
EPSC-DPS2025-1625
|
ECP
|
On-site presentation
Riccardo La Grassa, Cristina Re, Elena Martellato, Adriano Tullo, Silvia Bertoli, Valentina Galluzzi, Lorenza Giacomini, Natalia Amanda Vergara Sassarini, Gabriele Cremonese, and maddalena faletti

Introduction

Impact cratering shapes the surfaces of airless bodies such as the Moon and Mercury, providing both a record of impact history and a chronostratigraphic tool for dating geological units. Traditional crater catalogs, compiled manually or via semi-automated algorithms, remain incomplete below ~1 km diameter on the Moon [1] and ~5 km on Mercury [2], limiting studies of small-scale resurfacing, secondary-crater populations, and polar processes.

This study compiles the first impact-crater dataset for Mercury produced by a multimodal deep-learning pipeline. Our approach adapts the YOLOLens single-stage object detector, initially fine-tuned on lunar imagery, to three-band inputs and applies it to Mercury through domain-adaptive transfer learning.

The key innovation is the construction of a three-band cube for each orthographic tile:

 

  • WAC/MDIS visible spectrum mosaics

  • Digital Terrain Model (DTM) 

  • Hillshaded DTM version

 

Methods

Each band is co-registered at 100 m/px resolution and normalized to [0,1] to ensure balanced feature learning. Tiles overlap by 128 px to mitigate edge effects, and a triple-scale subsampling captures craters whose diameters exceed individual tile dimensions.
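The tiling-with-overlap step can be sketched as follows; the 512 px tile size and the synthetic three-band cube are assumptions for illustration (the abstract specifies only the 128 px overlap), and partial edge tiles are omitted for brevity:

```python
import numpy as np

def normalize(band):
    """Scale one band to [0, 1] (assumes a non-constant band)."""
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo)

def tiles_with_overlap(cube, tile=512, overlap=128):
    """Yield (row, col, window) tiles from a (H, W, C) cube with fixed overlap."""
    step = tile - overlap
    H, W, _ = cube.shape
    for r in range(0, max(H - tile, 0) + 1, step):
        for c in range(0, max(W - tile, 0) + 1, step):
            yield r, c, cube[r:r + tile, c:c + tile, :]

# Hypothetical 3-band cube (mosaic, DTM, hillshaded DTM), each band normalized.
rng = np.random.default_rng(0)
cube = np.dstack([normalize(rng.normal(size=(1024, 1024))) for _ in range(3)])
n = sum(1 for _ in tiles_with_overlap(cube))
print("tiles:", n)
```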

To achieve acceptable model generalization, a balancing function is applied across latitude ranges and fields of view (FoVs) to ensure a robust model. The entire workflow is shown in Fig. 1.

Fig. 1: Workflow.

The YOLOLens model [3] leverages super-resolution techniques to improve impact-crater detection on planetary surfaces. By addressing challenges such as low spatial resolution and variable solar illumination in satellite imagery, YOLOLens demonstrated enhanced precision compared to state-of-the-art methods.

The parameters of the first/last layers are modified so that the model creates super-resolution (SR) images from a single-channel input; the SR output is then fed to an object-detection model in an end-to-end pipeline.

Results

The model was first trained on the lunar dataset (Robbins catalogue [1]) to demonstrate the effectiveness of the multimodal approach with respect to the catalog derived in [4]; we then applied a transfer-learning step that rapidly adapts to Mercury's distinct albedo contrasts while retaining the shape priors learned from the Moon.

Validation on multiple quadrangles yields recall rates of 58–95.7% across primary/secondary craters, confirming robust cross-body generalization (Tab. 1 and Tab. 2).

TABLE 1: Comparison of Crater Diameter Counts for Mercury.

 

 

The Herrick catalog reports initial insights derived from the database, while also addressing several limitations that constrain future analyses. Our validation set was compiled from several manual crater counts [5,6,7]; see Tab. 2.

TABLE 2: Summary of crater detection performance by YOLOLens across different datasets.

 

In the investigated regions, high recall values are obtained, suggesting robust performance of the YOLOLens model in identifying impact craters. Since the areas used for comparison are representative of all latitudes, such robustness could be extended to the entire Hermean surface. Fig. 2 shows the size-frequency distributions.

 

Fig. 2: Comparison of size-frequency distributions (SFDs) between the ME6M300TGT and [6], [7].

 

The development of a comprehensive global catalog allows us to investigate the distribution of craters and their morphometric characteristics, and to derive from them clues about the evolution of the Hermean surface. The crater density provides important insights into the ages of the different regions and the rates of degradation or remodeling (Fig. 3).

Fig. 3: (a) Extract from the Geological Map of the Hokusai quadrangle (H05)[8] and (b) the same region extracted from the ME6M300TGT to derive the density map by diameter range.

An important parameter for surface characterization is the depth-to-diameter (d/D) ratio, which is useful for defining the post-impact degradation and modification of a crater. The d/D was measured for the entire catalog. Fig. 4 shows the distribution of the d/D values of the craters.

 


Fig. 4: The map shows the d/D distribution of Mercury's craters. A) Diameter > 15 km over Mercury, B) the North Pole (> 5 km), and C) the South Pole (> 15 km).

 

This study presents advancements in planetary surface analysis through the development of a comprehensive crater catalog for Mercury, which exploits imagery and topographic information simultaneously. The YOLOLens model has demonstrated its capability to detect craters with great accuracy and at scale. The dataset also integrates morphometric information, enabling novel insights into crater distributions and the geological evolution of the planet. In addition, the catalog can function as an important tool in cartographic endeavors, such as the multi-mapper regional-scale project currently being compiled [9].

 

References

 

[1] Robbins (2019) J. Geophys. Res. Planets, 124(4), 871-892.

[2] Herrick et al. (2011) Icarus 215(1), 452-454.

[3] La Grassa et al. (2023) Remote Sens. 15(5), 1171.

[4] La Grassa et al. (2025) ISPRS J. Photogramm. 220, 75-84.

[5] Giacomini et al. (2020) Geosci. Front. 11(3), 855-870.

[6] Galluzzi et al. (2016) J. Maps 12(sup1), 227-238.

[7] Martellato et al. (2023) Bull. AAS, 55(8), #116.06.

[8] Wright et al. (2019) J. Maps 15(2), 509-520.

[9] Galluzzi (2019) In: Hargitai (eds) Planetary Cartography and GIS, Springer.

Acknowledgements: We gratefully acknowledge funding from the Italian Space Agency (ASI) under ASI-INAF agreement 2024-40.HH.0.

How to cite: La Grassa, R., Re, C., Martellato, E., Tullo, A., Bertoli, S., Galluzzi, V., Giacomini, L., Vergara Sassarini, N. A., Cremonese, G., and faletti, M.: Mercury Global Crater Catalog using Multimodal Deep Learning for Crater Detection and Morphometric Analysis, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-1625, https://doi.org/10.5194/epsc-dps2025-1625, 2025.

F90
|
EPSC-DPS2025-1547
|
ECP
|
On-site presentation
Cédric Millot, Cathy Quantin-Nataf, and Matthieu Volat

Introduction: The large amount of data from orbital imagery requires time-consuming analyses, especially for high-resolution products. On Mars, the High Resolution Imaging Science Experiment (HiRISE, McEwen et al., 2007) has gathered several thousand images (McEwen et al., 2024). Automated algorithms are needed to extract data from such a large number of images. Small impact craters (with a diameter below 50 meters) are geological features that provide key information about the subsurface structure and landscape changes through time. So far, a global mapping effort has been completed for craters larger than 1 km (Robbins et al., 2012). The challenge now is to extend this task to smaller diameters using automated algorithms.

Methods: We trained the well-known You Only Look Once (YOLO, Redmon et al., 2016) v11 algorithm to segment small craters over HiRISE images (McEwen et al., 2007). YOLO's architecture is a Convolutional Neural Network (CNN, e.g. Ronneberger et al., 2015) used for the segmentation and classification of images, and has been successfully applied on Martian data (e.g. Benedix et al., 2020). We used an exhaustive count (Millot et al., 2024) of 83820 craters located in Candor Chasma, with diameters ranging from ~2 m to 270 m over the HiRISE image ESP_024255_1750, for the training of the algorithm. We cut the original HiRISE image and the target file into tiles of 256 by 256 pixels. Then, we split the resulting dataset into training, validation, and test sets following a 70%, 20%, and 10% ratio. We trained the YOLO medium algorithm ('yolo11m-seg') over 100 epochs, following the set of hyperparameters defined in Table 1. Finally, we tested the algorithm over a new HiRISE image, ESP_026477_1615, located in the magmatic terrains of Tyrrhenus Mons.

Parameter                Value
Batch size               16
Epochs                   100
Image size               640 pixels
Initial learning rate    0.02
Final learning rate      0.01

Table 1: The set of hyperparameters for the ‘yolo11m-seg’ algorithm training.
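The 70/20/10 split described in the Methods can be sketched as follows; the tile identifiers are hypothetical placeholders for the 256×256 px tiles cut from the HiRISE image:

```python
import random

def split_dataset(items, fractions=(0.7, 0.2, 0.1), seed=42):
    """Shuffle and split items into train/val/test sets following the fractions."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(fractions[0] * n)
    n_val = int(fractions[1] * n)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# Hypothetical tile identifiers derived from image ESP_024255_1750.
tiles = [f"ESP_024255_1750_tile_{i:04d}" for i in range(1000)]
train, val, test = split_dataset(tiles)
print(len(train), len(val), len(test))  # 700 200 100
```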

Results and perspectives: The training process reached a mean Average Precision (mAP) of around 65% for the best models we trained. Precision P and recall R were respectively P = 68% and R = 58% for the best models. Most of the medium-sized craters, between 5 and 20 m in diameter, were segmented correctly by the algorithm after the training. Improvements can still be made on small (< 5 m) and large (> 50 m) crater detections and segmentations. Figure 1 displays four example tiles from the HiRISE image ESP_026477_1615. Panel 1a shows that the largest craters from a cluster are correctly mapped (blue masks), while some of the smallest in diameter are missing. Panel 1b presents one accurate result over a fresh and large crater (top left corner of the tile) along with a wrong detection (lowest part of the tile). Subplot 1c highlights the detections of a large crater and a small crater set on a lava terrain. Finally, panel 1d underlines that one large crater (more than 100 m in diameter) is not mapped by the algorithm. Results are still variable but remain encouraging, as crater mapping was achieved over a new HiRISE image that was not involved in the training process.

The ability to automatically map small craters (< 1 km) on planets with an atmosphere will bring new insights into active crater-modification processes (erosion or aeolian infilling).

Figure 1: Four tiles from the HiRISE image ESP_026477_1615 summarizing predictions (blue masks) from our trained YOLO model.

References

  • K. Benedix, et al., “Deriving surface ages on Mars using automated crater counting.” Earth and Space Science, 7, e2019EA001005. https://doi.org/10.1029/2019EA001005, 2020.
  • A. McEwen, et al., “Mars Reconnaissance Orbiter's High Resolution Imaging Science Experiment (HiRISE).” Journal of Geophysical Research: Planets, 112(E5), 2007.
  • A.S. McEwen, et al., “The High Resolution Imaging Science Experiment (HiRISE) in the MRO extended science phases (2009–2023).” Icarus, 419. https://doi.org/10.1016/j.icarus.2023.115795, 2024.
  • C. Millot, “Dataset from Millot et al., 2024.” Zenodo. doi: 10.5281/zenodo.14035904, 2024.
  • J. Redmon, et al., “You Only Look Once: Unified, Real-Time Object Detection.” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 779-788. doi: 10.1109/CVPR.2016.91, 2016.
  • S.J. Robbins & B.M. Hynek, “A new global database of Mars impact craters > 1 km: 1. Database creation, properties, and parameters.” Journal of Geophysical Research: Planets, 117(E5). doi: 10.1029/2011JE003966, 2012.
  • O. Ronneberger, et al., “U-Net: Convolutional networks for biomedical image segmentation.” Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), Munich, Germany, October 5-9, 2015, Proceedings, Part III. Springer International Publishing, 2015.

How to cite: Millot, C., Quantin-Nataf, C., and Volat, M.: Supervised Deep Learning for Mapping Small Craters on Mars, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-1547, https://doi.org/10.5194/epsc-dps2025-1547, 2025.

F91
|
EPSC-DPS2025-691
|
On-site presentation
Giulio Macrì and Stefano Casotto

Introduction: Determining the internal mass distribution of planetary bodies, such as Ganymede, remains a challenging problem due to observational degeneracies. In the 2030s, the JUICE mission, with its several instruments, will orbit Ganymede and provide information on parameters that depend on the interior structure of the moon, including estimates of the polar moment of inertia, the radial and gravitational Love numbers and associated phase lags, the longitudinal libration amplitudes, as well as the phase and amplitude of the induced magnetic field due to the presence of a subsurface ocean. In order to impose constraints on the interior structure in the most effective way, a joint inversion of all available parameters is ideally necessary. In this work, we use a machine learning approach to predict the thicknesses and densities of Ganymede's internal layers and the ocean conductivity from these parameters. To achieve this, a synthetic dataset of plausible internal structure models of Ganymede is generated via Monte Carlo sampling. For each of these internal structures we compute the corresponding observable parameters (Love numbers, libration amplitude, polar moment of inertia, etc.) using existing models. We then train a neural network on the synthetic dataset to learn the intricate relationships between these parameters and the internal structure model.

Our model retrieves the internal structure parameters with varying levels of accuracy across different layers, with promising performance in the prediction of the icy-shell and ocean thicknesses and densities, the ocean conductivity, and the thickness of the high-pressure ice layer. The Monte Carlo dropout method is used to estimate the uncertainties in the predicted parameters. These results highlight the potential of machine learning as a fast preliminary tool to detect families of interior structures compatible with the observed parameters.

 

Interior Structure Model: The interior of the icy satellite is modeled as several spherically symmetric, uniform shells and is completely specified by the values of radius Ri, density ρi, rigidity µi, and viscosity ηi of each layer. The five layers are: an icy shell, a liquid subsurface ocean, a high-pressure ice (HP-ice) layer, a silicate mantle, and a solid inner core. For the solid layers we adopt an Andrade rheology, while the ocean and liquid core are treated as inviscid fluids.

Figure 1: Schematic representation of an internal structure model for Ganymede with five layers.

Dataset and training: We trained the neural network on a synthetic dataset consisting of 10^7 interior structures, generated by performing a Monte Carlo sampling of the internal structure parameters y, namely the thicknesses and densities of each layer, the icy-shell viscosity, and the ocean conductivity, subject to the total radius and mass constraints. For each interior structure we then computed a set of observables x, including the polar moment of inertia, the radial and gravitational Love numbers h2 and k2 using the ALMA3 code [1], the libration amplitude Ls at the orbital period [2], as well as the amplitude A and phase φA of the induced magnetic field at the orbital period [3].
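The constrained Monte Carlo sampling can be sketched as a rejection loop; the layer ordering, the density ranges, and the mass tolerance below are illustrative assumptions, not the authors' priors:

```python
import numpy as np

R_TOTAL = 2634.1e3   # Ganymede mean radius (m)
M_TOTAL = 1.4819e23  # Ganymede mass (kg)

def sample_structure(rng, tol=0.005):
    """Rejection-sample a 5-layer model whose total mass matches Ganymede's.

    Layer order (inner to outer): core, mantle, HP ice, ocean, icy shell.
    The thickness fractions and density ranges are illustrative assumptions.
    """
    while True:
        f = rng.dirichlet(np.ones(5))              # random thickness fractions
        radii = np.cumsum(f) * R_TOTAL             # outer radius of each layer
        rho = rng.uniform([5000, 3000, 1200, 950, 900],
                          [8000, 3600, 1400, 1100, 1000])  # densities (kg/m^3)
        inner = np.concatenate(([0.0], radii[:-1]))
        mass = np.sum(4 / 3 * np.pi * (radii**3 - inner**3) * rho)
        if abs(mass - M_TOTAL) / M_TOTAL < tol:    # keep only mass-consistent models
            return radii, rho

rng = np.random.default_rng(0)
radii, rho = sample_structure(rng)
print("layer radii (km):", (radii / 1e3).round(1))
```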

Neural Network architecture: A schematic representation of the neural network is shown in Fig. 2. The minimization of the loss function is performed using the Adam optimizer [5]. In order to prevent model overfitting we adopt early stopping [6]. We train the neural network on 80% of the dataset and use the remaining 20% for validation.

Figure 2: schematic representation of the neural network

Monte Carlo dropout: In order to capture the uncertainty in the predicted parameters, it is desirable to have a posterior distribution of the interior structure parameters rather than deterministic values. To this end we use the Monte Carlo dropout approach [7], which allows us to obtain an approximate Bayesian inference through dropout training.
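Monte Carlo dropout amounts to keeping dropout active at inference time and treating repeated stochastic forward passes as posterior draws; the toy two-layer network below uses random (untrained) weights purely to illustrate the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "trained" two-layer regressor; the weights here are random placeholders.
W1, b1 = rng.normal(size=(6, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def forward(x, drop=0.2):
    """One stochastic forward pass with dropout kept active at inference."""
    h = np.maximum(x @ W1 + b1, 0.0)                  # ReLU hidden layer
    mask = rng.random(h.shape) > drop                 # Bernoulli dropout mask
    h = h * mask / (1.0 - drop)                       # inverted-dropout scaling
    return (h @ W2 + b2).ravel()

x = rng.normal(size=(1, 6))                           # stand-in observable vector x*
samples = np.array([forward(x) for _ in range(500)])  # MC dropout posterior draws
print("posterior mean %.3f, std %.3f" % (samples.mean(), samples.std()))
```

The spread of the 500 draws approximates the predictive uncertainty shown in Fig. 4.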

Results and Conclusions: Our model demonstrates significant predictive accuracy in estimating the thickness and density distributions of Ganymede-like icy satellites across their five-layer interior. In Fig. 3, we show a comparison between the actual values of the interior parameters y and those predicted by our trained neural network on the validation dataset. A perfect prediction would fall on the green dashed line. On the validation dataset, the neural network effectively captured the characteristics of the icy shell and ocean, with excellent agreement between predicted and actual values. The HP-ice layer's thickness was predicted with moderate accuracy, while its density estimates showed higher variability. The model performs poorly in inferring the thickness and density of the core and mantle, suggesting limited sensitivity of the selected observables to these parameters. However, this was to be expected, and is in line with previous results in the literature.

In Fig. 4 we show the posterior distributions obtained with the Monte Carlo dropout method corresponding to a set of parameters x* drawn from the synthetic dataset. We observe that the true values of the internal structure parameters y*, shown as dashed vertical lines, fall within the posterior distributions, close to the mean values in the case of the icy shell, the ocean, and the HP-ice layer. However, the methodology presented here comes short in assessing the uncertainty in the deeper interior, as the intrinsic degeneracy of the inverse problem allows for a broader range of deep interior structures than those predicted by the neural network.

 

Figure 3.  

Figure 4.  

References:  [1] Melini D., et al., 2022, Geophysical Journal International, 231, 1502 [2] Baland R.-M., Van Hoolst T., 2010, Icarus, 209, 651 [3] Vance S. D., et al., 2021, Journal of Geophysical Research: Planets, 126, [4] Srivastava N., et al., 2014, Journal of Machine Learning Research, 15, 1929  [5] Kingma D. P., Ba J., 2014, CoRR, abs/1412.6980 [6] Prechelt L., 1998, Neural Networks, 11, 761 [7] Gal Y., Ghahramani Z., 2016, in Balcan M. F., Weinberger K. Q., eds, Proceedings of Machine Learning Research Vol. 48.

How to cite: Macrì, G. and Casotto, S.: Enforcing multiple constraints on the interior structure of Ganymede: a machine learning approach, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-691, https://doi.org/10.5194/epsc-dps2025-691, 2025.

F92
|
EPSC-DPS2025-272
|
ECP
|
On-site presentation
Sofia Paraskevaidou

Simulating exoplanetary atmospheres is essential for describing them, estimating their composition, investigating the presence of haze and clouds, and identifying their relationship with observational signatures. With the emergence of JWST (James Webb Space Telescope) and the upcoming ARIEL (Atmospheric Remote-sensing Infrared Exoplanet Large-survey) mission (Tinetti et al. 2018; Pascale, Bezawada, et al. 2018), the need for fast implementations of the classical forward models is more important than ever. Machine learning (ML) can offer a revolutionary solution to this challenge by providing efficient surrogate models or emulators that approximate the behavior of specific components or of the whole system, significantly accelerating the simulation pipeline.
In our research in general, we use a 1D self-consistent model including haze/cloud microphysics, disequilibrium chemistry, and radiative transfer interactions to simulate the atmospheric structure of temperate exoplanets from the deep atmosphere (10^3 bar) to the upper thermosphere (10^-10 bar). This forward model has been used in exoplanet studies, such as Arfaux and Lavvas (2022) and many others (Lavvas et al. 2019; Arfaux and Lavvas 2023; Lavvas, Paraskevaidou, and Arfaux 2023), offering a more detailed correspondence of the atmospheric composition with transit observations. In this work we aim to develop a supervised neural-network-based surrogate model, inspired by Hendrix, Louca, and Miguel (2023), trained on the outputs of the forward model, enabling fast approximation of atmospheric responses for a range of exoplanetary parameters (such as planet mass, stellar radius, temperature-pressure profile, stellar flux, and metallicity) without repeated execution of the full model.
We are currently replacing the microphysics module of the forward model, which simulates the photochemical-haze particle size distribution over a grid of particle radii, with a neural network trained on a given (isothermal) temperature, pressure profile, viscosity (which correlates with metallicity), eddy mixing, and gravity. Our goal is to replace the entire forward model (or at least its most time-consuming parts) and improve the speed and accuracy of atmospheric characterization. This work has the potential to greatly benefit the research community by making comparative studies across planetary systems accessible to a wider range of groups. It can also be used in hybrid frameworks in which ML handles expensive subcomponents (e.g., radiative transfer) and traditional models handle dynamics, preserving physical interpretability while increasing efficiency.

References
Arfaux, Anthony and Panayotis Lavvas (June 2022). “A large range of haziness conditions in hot-Jupiter atmospheres.” Monthly Notices of the Royal Astronomical Society, 515(4), pp. 4753–4779. doi: 10.1093/mnras/stac1772.
Arfaux, Anthony and Panayotis Lavvas (Apr. 2023). “A physically derived eddy parametrization for giant planet atmospheres with application on hot-Jupiters.” Monthly Notices of the Royal Astronomical Society, 522(2), pp. 2525–2542. doi: 10.1093/mnras/stad1135.
Hendrix, Julius L. A. M., Amy J. Louca, and Yamila Miguel (June 2023). “Using a neural network approach to accelerate disequilibrium chemistry calculations in exoplanet atmospheres.” Monthly Notices of the Royal Astronomical Society, 524(1), pp. 643–655. doi: 10.1093/mnras/stad1763.
Lavvas, Panayotis, Sofia Paraskevaidou, and Anthony Arfaux (Oct. 2023). “Photochemical hazes and clouds in the atmosphere of GJ 1214 b in view of recent JWST observations.” In: 55th Annual Meeting of the Division for Planetary Sciences, id. 223.08. Bulletin of the American Astronomical Society, 55(8).
Lavvas, Panayotis et al. (June 2019). “Photochemical Hazes in Sub-Neptunian Atmospheres with a Focus on GJ 1214b.” 878(2), 118. doi: 10.3847/1538-4357/ab204e. arXiv: 1905.02976 [astro-ph.EP].
Pascale, Enzo, Naidu Bezawada, et al. (July 2018). “The ARIEL space mission.” In: Space Telescopes and Instrumentation 2018: Optical, Infrared, and Millimeter Wave. Ed. by Makenzie Lystrup et al. Vol. 10698, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 106980H. doi: 10.1117/12.2311838.
Tinetti, Giovanna et al. (Nov. 2018). “A chemical survey of exoplanets with ARIEL.” Experimental Astronomy, 46(1), pp. 135–209. doi: 10.1007/s10686-018-9598-x.

How to cite: Paraskevaidou, S.: Implementing a Neural Network on Forward Models: A Case study for Exoplanet Atmospheres, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-272, https://doi.org/10.5194/epsc-dps2025-272, 2025.

F93
|
EPSC-DPS2025-969
|
On-site presentation
Matthieu Volat, Cathy Quantin-Nataf, and Cédric Millot

 Introduction

Launched in 2005 aboard NASA/Jet Propulsion Laboratory's Mars Reconnaissance Orbiter spacecraft, HiRISE (High Resolution Imaging Science Experiment) [1] has provided the highest-resolution orbital imagery of Mars (up to 25 cm/pixel) for nearly two decades. It features a set of CCD arrays behind three different color filters: the RED array covers 570–830 nm, while the blue-green (BG) and near-infrared (IR) filters cover wavelengths below 580 nm and above 790 nm, respectively. Color imagery provides valuable information for creating geological maps, as was done by Mandon et al. [2].

The RED array provides a wide swath, but the "color" sensors (BG, IR) only cover the central 20% of the RED swath. This limitation, aggravated by the already low global coverage of the HiRISE dataset, means planetary scientists must work with very limited color data.

The overlap of the RED and color datasets means we have a strong target set for training a deep neural network (DNN) to extrapolate the NIR and BG channels and thus provide an extended color dataset. While there has been various work on using such algorithms to colorize imagery, we are not aware of previous work targeting Martian orbital data.

Architecture

We use a straightforward U-Net model, an architecture based on autoencoders. An autoencoder combines an encoder function, which converts its input into a different, compact representation (the code), with a decoder function, which transforms the coded data back into the original representation. When applied to imagery, the encoder often takes the form of layers of convolution filters, and the decoder of transposed convolution filters. The U-Net architecture concatenates the output of each encoding layer to the input of the corresponding decoding layer to steer the process, as defined by Ronneberger et al. [3].

We set the image size to 256×256 pixels so that the model runs on individual tiles that still provide a rich feature set. We use 5 levels of encoding and decoding to account for this input size.

One significant difference from previous work is that, given the different nature of the HiRISE channels compared to "normal" imagery, we do not convert the data into a perceptual colorspace, as would usually be done: our channels do not have the same relationship that RGB channels have. We set the model target to be the NIR and BG channels of a HiRISE image.
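As a minimal illustration (not the implementation used in this work), a five-level U-Net of this kind can be sketched in PyTorch; the `TinyUNet` name, the `base` channel width, and the sigmoid output range are assumptions made for the example, which maps a single RED input channel to two output channels (NIR, BG):

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions with ReLU; padding keeps the spatial size
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """5-level U-Net mapping a 1-channel RED tile to 2 channels (NIR, BG)."""
    def __init__(self, base=16):
        super().__init__()
        chans = [base * 2**i for i in range(5)]  # channel widths per level
        self.enc = nn.ModuleList(
            [conv_block(1 if i == 0 else chans[i - 1], chans[i]) for i in range(5)]
        )
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ModuleList(
            [nn.ConvTranspose2d(chans[i], chans[i - 1], 2, stride=2)
             for i in range(4, 0, -1)]
        )
        self.dec = nn.ModuleList(
            [conv_block(chans[i - 1] * 2, chans[i - 1]) for i in range(4, 0, -1)]
        )
        self.head = nn.Conv2d(chans[0], 2, 1)  # 1x1 conv to NIR + BG outputs

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.enc):
            x = enc(x)
            if i < 4:                 # keep encoder outputs for skip connections
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))  # upsample + concatenate
        return torch.sigmoid(self.head(x))            # channel values in [0, 1]

with torch.no_grad():
    out = TinyUNet()(torch.zeros(1, 1, 256, 256))
print(tuple(out.shape))  # (1, 2, 256, 256)
```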

To train this model, we evaluated the L1 norm, the mean squared error (MSE), and binary cross-entropy (BCE) as candidate loss functions. When measuring the structural similarity (SSIM) of the validation results, we observe very close outcomes, with MSE providing slightly better results.
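For reference, the three candidate losses can be written down in a few lines; this is a minimal NumPy sketch with hypothetical `pred`/`target` arrays normalized to [0, 1], not the framework implementations used for training:

```python
import numpy as np

def candidate_losses(pred, target):
    """Candidate loss functions compared during validation.

    Both arrays are assumed to hold channel values normalized to [0, 1].
    """
    eps = 1e-7
    p = np.clip(pred, eps, 1.0 - eps)  # avoid log(0) in the BCE term
    return {
        "L1": float(np.mean(np.abs(pred - target))),
        "MSE": float(np.mean((pred - target) ** 2)),
        "BCE": float(-np.mean(target * np.log(p)
                              + (1.0 - target) * np.log(1.0 - p))),
    }

t = np.array([[0.1, 0.9], [0.5, 0.2]])
print(candidate_losses(t, t)["MSE"])  # 0.0 for a perfect prediction
```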

Training dataset

The “color” dataset of HiRISE provides the needed training target. In theory, we could randomly select observations across the complete catalog, but we found that the randomness of operational conditions leaves us with varying levels of image quality.

A first step in our training was therefore to establish a criterion to find “good” reference images. We approach this issue based on how the RED and BG channels are correlated: both channels are first normalized, and 256-bucket histograms are computed. The histogram values are also normalized by the number of input pixels, so that image size does not impact the score. We then smooth these histograms with a Gaussian filter, compute the product of the two histograms, and sum the resulting values.
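A minimal NumPy sketch of this scoring procedure follows; the function names and the Gaussian smoothing width are illustrative assumptions, not values from our pipeline:

```python
import numpy as np

def _smooth(hist, sigma=2.0):
    # Gaussian smoothing of a 1-D histogram via convolution
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(hist, kernel, mode="same")

def correlation_score(red, bg, bins=256, sigma=2.0):
    """Score RED/BG channel agreement; higher means better correlated."""
    def smoothed_hist(channel):
        c = channel.astype(float)
        c = (c - c.min()) / (c.max() - c.min() + 1e-12)  # normalize to [0, 1]
        hist, _ = np.histogram(c, bins=bins, range=(0.0, 1.0))
        hist = hist / c.size          # normalize by the pixel count
        return _smooth(hist, sigma)   # smooth with a Gaussian filter
    # Sum of the element-wise product of the two smoothed histograms
    return float(np.sum(smoothed_hist(red) * smoothed_hist(bg)))

rng = np.random.default_rng(0)
a = rng.random((64, 64)) ** 2  # skewed synthetic "channel"
print(correlation_score(a, a) > correlation_score(a, 1.0 - a))  # True
```

Two identically distributed channels maximize the product-sum, so well-correlated RED/BG pairs score above pairs whose intensity distributions disagree.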

For a first attempt, we selected and scored 228 images over the Mawrth Vallis region, using the MarsSI [4] portal to select and process the data; this area was chosen both for its large number of observations and for its diversity of terrain. We selected the 35 best-scored images and split them into 256×256 tiles, our objective being to train the model to run at 1:1 scale. This resulted in a training dataset of 27,555 tiles, of which 23,421 were used for loss computation and 4,134 for validation during training. With this dataset, we find that the model converges over 25 epochs.

 Results

As a first step, we validate the results of our colorizer on images with highly correlated RED/BG histograms, as this is the kind of data selected for training. At this stage of our study, we only evaluate the generation of the BG channel, since “natural color” images are easier to evaluate by eye (analytic evaluation is already performed during training).

First results are encouraging, revealing that color contrasts can be well reproduced, forming image products directly usable for mapping. The red-versus-blue balance of some units is not always perfectly reproduced: the dark capping unit in the image below, for instance, is redder in the training set but bluish in the results. Our perspective is to adjust the training set and to apply the pipeline to RB color.

Close-up sample of the recolorized (left) and original COLOR (right) versions of image ESP_016196_2050

References

[1] McEwen, Alfred S., et al. "Mars reconnaissance orbiter's high resolution imaging science experiment (HiRISE)." Journal of Geophysical Research: Planets 112.E5 (2007).

[2] Mandon, Lucia, et al. "Morphological and spectral diversity of the clay-bearing unit at the ExoMars landing site Oxia Planum." Astrobiology 21.4 (2021): 464-480.

[3] Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. "U-Net: Convolutional networks for biomedical image segmentation." Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III. Springer International Publishing, 2015.

[4] Quantin-Nataf, C., et al. "MarsSI: Martian surface data processing information system." Planetary and Space Science 150 (2018): 157-170.

How to cite: Volat, M., Quantin-Nataf, C., and Millot, C.: HiRISE image colorization with a U-Net deep neural network, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-969, https://doi.org/10.5194/epsc-dps2025-969, 2025.