MITM6 | Robotic Concepts in Astronomy and Planetary Exploration

MITM6

Robotic Concepts in Astronomy and Planetary Exploration
Convener: Alberto Castro-Tirado | Co-conveners: Maria Gritsevich, Stephan Ulamec, Carlos Jesús Pérez del Pulgar Manceb
Orals MON-OB2
| Mon, 08 Sep, 09:30–10:30 (EEST)
 
Room Mercury (Veranda 4)
Posters MON-POS
| Attendance Mon, 08 Sep, 18:00–19:30 (EEST) | Display Mon, 08 Sep, 08:30–19:30
 
Finlandia Hall foyer, F90–91
Robotic systems have become essential tools in missions that extend beyond human physical reach or require rapid response times. These systems play a pivotal role in planetary exploration, deep-space observation, and ground-based astronomy, ensuring swift and precise responses to transient astronomical events. Robotic platforms have revolutionized our understanding of the universe, enabling the exploration of remote environments and delivering unprecedented precision in observations.

This session focuses on advancements in robotic concepts and their applications in planetary science and astronomy.

Topics of interest include, but are not limited to:

Autonomous robotic systems for planetary exploration, including rovers, landers, and aerial platforms.
Orbit selection strategies for space-based observations.
Instrumentation designed for robotic observatories, both in orbit and on the ground.
Networks of robotic telescopes for coordinated astronomical observations.
Integration of artificial intelligence and machine learning to enhance autonomy and efficiency.
Challenges and solutions in deploying and operating robotic systems in extreme environments.
Collaborative robotic systems for multi-platform, synchronized observations.
New concepts, case studies, and lessons learned from past, ongoing, or envisioned space missions, emphasizing their scientific contributions and value.

To highlight the innovations driving progress in planetary science and astronomy, the session invites contributions from scientists, engineers, and mission planners. By bringing together experts from diverse disciplines, we aim to foster discussions that inspire and shape the next generation of robotic missions.

Orals: Mon, 8 Sep, 09:30–10:30 | Room Mercury (Veranda 4)

09:30–09:42 | EPSC-DPS2025-740 | ECP | On-site presentation
Iosto Fodde, Lucia Francesca Civati, Alessia Cremasco, and Fabio Ferrari

Interactions between spacecraft and the surfaces of small solar-system bodies (SSBs), e.g. through landing or sampling, are highly valuable in terms of scientific return: the spacecraft-surface interaction provides direct information on the internal structure and material properties of the SSB, while the onboard instruments can perform in-situ measurements to characterize the SSB in more depth. Many missions have attempted, or are planning to attempt, interactions with the surface of an SSB. For example, the upcoming Hera mission will attempt to land its two CubeSats on the surfaces of the asteroids Didymos and Dimorphos, while JAXA's Martian Moons eXploration (MMX) mission will deploy a rover on the surface of the Martian moon Phobos. One of the main difficulties for these missions is the uncertainty in the surface properties of SSBs. Many SSBs are thought to carry large amounts of granular material, known as regolith, and boulders on their surfaces [1]. The interactions between the individual regolith particles, the boulders, and the spacecraft itself are governed by complex contact forces and are hard to model on Earth because of the low-gravity environment on these SSBs.

There are two main methods for numerically modelling the surface of SSBs: hard surface and soft surface methods. Hard surface methods use a small number of parameters that abstract away the internal interactions occurring within the regolith when it is in contact with the spacecraft. Soft surface methods, on the other hand, model the regolith as a large number of discrete bodies and fully simulate their interactions. This greatly increases the fidelity of the model and works well when estimating surface properties from an observed landing, but it drastically increases the computational cost.
 
In this work, a novel approach is proposed that aims to generate a model of the spacecraft-surface interaction that is as efficient as the hard surface models while retaining the accuracy of the soft surface models. This is achieved using a data-driven approach in which a small number of soft surface simulations are used to fit a model that can then be evaluated far more efficiently. The model is created with a non-intrusive polynomial chaos expansion (PCE) method, which samples the initial conditions and system parameters in an optimal way and fits the input-output relationship of these sample simulations with a family of orthogonal polynomials that represents the prior distribution of the input variables.
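The non-intrusive PCE idea can be sketched in a few lines. The snippet below is purely illustrative (not the authors' code): it assumes a single uncertain input with a uniform prior on [-1, 1], for which Legendre polynomials are the matching orthogonal basis, and a cheap analytical function stands in for the expensive DEM simulation.

```python
import numpy as np

# Stand-in for one expensive DEM run: maps a surface-property parameter
# (scaled to [-1, 1]) to an output such as the lander's outgoing velocity.
def expensive_model(x):
    return 0.3 + 0.5 * x - 0.2 * x**2

rng = np.random.default_rng(0)
# Non-intrusive PCE: the model is evaluated only at a small set of samples.
x_train = rng.uniform(-1.0, 1.0, 20)
y_train = expensive_model(x_train)

# Uniform prior on [-1, 1] -> Legendre polynomials are the orthogonal basis;
# the expansion coefficients are obtained by least squares.
degree = 4
coeffs = np.polynomial.legendre.legfit(x_train, y_train, degree)

# The resulting surrogate is cheap to evaluate anywhere in the input range.
def surrogate(x):
    return np.polynomial.legendre.legval(x, coeffs)

x_test = np.linspace(-1.0, 1.0, 50)
max_err = np.max(np.abs(surrogate(x_test) - expensive_model(x_test)))
print(max_err < 1e-8)  # the quadratic stand-in is captured exactly
```

In the real problem the surrogate has several inputs (surface parameters and initial conditions) and the sample points are chosen optimally rather than at random, but the fit-then-evaluate structure is the same.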
 
A discrete element modelling (DEM) technique is used to simulate the motion of the surface particles, the spacecraft, and their interaction; specifically, the GRAINS software developed in [2] is used. A smooth contact model is chosen, as this works best when continuous contacts are present, as is the case for the settled surface. An example simulation of a lander interacting with the surface is presented in Figure 1, where a constant acceleration of 0.5e-4 m/s^2 is used, roughly representing the surface acceleration of bodies like Didymos and Apophis.
 
Figure 1: Example landing simulation.
 
Using the DEM simulation environment, the PCE model can be generated as described above. Several properties of the PCE model can be exploited. First, the orthogonality of the polynomial basis yields an analytical formulation of the moments of the output distribution. From the analytical description of the variance, the first-order Sobol indices can be calculated as well. These indices measure the expected reduction in the variance of the output when an input parameter is fixed; they therefore quantify the contribution of each input parameter's variance to the output, enabling a sensitivity analysis that determines which parameters are critical for the interaction. The other important property of the PCE model is that it is efficient to sample. This allows the use of methods like Markov chain Monte Carlo (MCMC) for inverse modelling, which requires sampling the simulation many times to estimate the probability distribution of the input parameters given a set of output observations. Whereas the DEM simulation takes hours for a single run, the PCE surrogate model is far cheaper to evaluate. An example result is given in Figure 2, where a landing similar to Figure 1 was simulated and the PCE-MCMC method was used to estimate the probability distribution of the surface properties given a certain observed outgoing velocity of the lander.
 
Figure 2: Estimated distribution of surface properties from an observed landing using the PCE-MCMC method. The solid lines are the true values.
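As an illustration of the Sobol computation described above: for an orthonormal PCE, the output variance is the sum of the squared non-mean coefficients, and each first-order index is the share contributed by terms involving one input alone. The coefficients below are made up for the example, not taken from the paper.

```python
# Hypothetical coefficients of an orthonormal PCE in two surface-property
# inputs. Keys are multi-indices (polynomial degree in x1, degree in x2).
coeffs = {
    (0, 0): 1.20,   # mean term (does not contribute to variance)
    (1, 0): 0.80,   # depends on x1 only
    (2, 0): 0.10,   # depends on x1 only
    (0, 1): 0.40,   # depends on x2 only
    (1, 1): 0.05,   # x1-x2 interaction term
}

# Orthonormality: total variance = sum of squared non-mean coefficients.
var_total = sum(c**2 for k, c in coeffs.items() if k != (0, 0))

def first_order_sobol(i):
    """Share of output variance from terms involving input i alone."""
    v = sum(c**2 for k, c in coeffs.items()
            if k[i] > 0 and all(kj == 0 for j, kj in enumerate(k) if j != i))
    return v / var_total

S1, S2 = first_order_sobol(0), first_order_sobol(1)
print(S1, S2)  # the interaction term keeps S1 + S2 below 1
```

A large first-order index flags a parameter whose uncertainty dominates the post-contact dynamics and is therefore worth constraining first.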
 
In conclusion, this work shows how to create an efficient and accurate model of the interaction between the surface of an SSB and a spacecraft. The model can be used to assess the sensitivity of the spacecraft's post-contact dynamics to uncertainties in the surface properties. It can also support reliability analyses, in which the probability of failure after the interaction is calculated as a function of the uncertainty in the surface properties. Finally, it allows the actual surface properties to be estimated from the interaction itself. This methodology will thus enable mission designers to plan robust and efficient interactions between spacecraft and SSB surfaces, creating novel science and exploration opportunities.
 
Acknowledgements:
Funded by the European Union (ERC, TRACES, 101077758). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
 
References:

[1] N. Murdoch, M. Drilleau, C. Sunday, F. Thuillet, A. Wilhelm, G. Nguyen, and Y. Gourinat, “Low-velocity impacts into granular material: application to small-body landing,” Monthly Notices of the Royal Astronomical Society, Vol. 503, 2021, pp. 3460–3471, doi:10.1093/mnras/stab624.

[2] F. Ferrari, A. Tasora, P. Masarati, and M. Lavagna, “N-body gravitational and contact dynamics for asteroid aggregation,” Multibody System Dynamics, Vol. 39, 2017, pp. 3–20, doi:10.1007/s11044-016-9547-2.

How to cite: Fodde, I., Civati, L. F., Cremasco, A., and Ferrari, F.: Modelling Spacecraft-Surface Interactions in Low-Gravity Environments for Inferring Surface Mechanical Properties, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-740, https://doi.org/10.5194/epsc-dps2025-740, 2025.

09:42–09:54 | EPSC-DPS2025-1038 | On-site presentation
Franck Marchis, Thomas M. Esposito, Josef Hanuš, Petr Pokorny, Ryan Lambert, Stuart Pilorz, Ariel Graykowski, and Lauren Sgro

Introduction

SkyMapper is a pioneering platform that leverages a global network of telescopes equipped with imagers and all-sky cameras, capturing high-quality images as the primary data source. These images are used to extract photometry and astrometry, providing precise measurements of astronomical targets. By focusing solely on image-based data, SkyMapper ensures a streamlined and robust data processing pipeline, building on proven methodologies developed for Unistellar observations.

SkyMapper is a Decentralized Science (DeSci) and Decentralized Physical Infrastructure Network (DePIN) project aiming to revolutionize how we conduct and access planetary science research through a globally distributed network of telescopes.

This approach unlocks a diverse range of scientific cases, from tracking potentially hazardous asteroids and studying cometary activity, to detecting exoplanet transits and observing rare human-made events like the DART impact. Planetary science, in particular, benefits from this comprehensive and continuous observational strategy, addressing key challenges in the field.

With a phased growth strategy, SkyMapper aims to democratize access to high-quality astronomical data and foster global collaboration in planetary science.

Methodology

SkyMapper’s approach begins by leveraging the existing Unistellar Network of smart telescopes, which are already deployed globally and operated by citizen scientists (Marchis et al., Acta Astronautica, 2020; Peluso et al., PASP, 135, 2024). This initial network provides a robust foundation for data collection and immediate follow-up observations.

As the project progresses, SkyMapper will expand to integrate a broader array of smart, digital, robotic telescopes and underutilized professional telescopes in the 1–2 meter class. The platform uses the SkyGate device to connect telescopes to its decentralized system. It also incorporates blockchain technology to ensure data integrity, and AI algorithms (e.g., ODNET by Pokorny et al., EPSC 2025, for occultation detection) for efficient real-time data analysis. This strategy enables high-quality, coordinated observations that are scalable and accessible.

Applications in Planetary Science

SkyMapper supports multiple scientific investigations in planetary science:

1. Follow-up Observations of Potentially Hazardous Asteroids (PHAs): Enables coordinated campaigns to refine orbits and assess impact risks. (Lambert et al., MPB, 50, 2023)
2. Occultation Observations: Measures asteroid sizes, shapes, and topographic features during stellar occultations. (Hanuš et al., EPSC2024-591)
3. Cometary Variability: Tracks brightness changes to detect outbursts and disruptions, informing models of dust and gas activity. (Graykowski et al., AGU, MP23, 2024)
4. Exoplanet Transit Observations: Confirms planet candidates from NASA’s TESS mission with small-telescope observations. (Sgro et al., AJ, 168, 2024)
5. Observation of Human-made Spacecraft Activity: Demonstrated in 2022 with the live recording of the NASA DART impact from southern Africa (Graykowski et al., Nature, 2023) and of the JWST launch and deployment on its way to the Lagrange point (Lambert et al., SPIE, 2022).

Results and Preliminary Findings

SkyMapper is currently building its network, beginning with Unistellar telescopes and the development of the SkyGate device. The MVP phase, launching in June, will connect 10–50 telescopes. By mid-2026, the network aims to exceed 1,000 telescopes globally. This scalable model includes future integration of additional networks, including digital telescopes and all-sky cameras, as well as professional-grade telescopes over the next two years, transforming SkyMapper into a key facility for planetary science research.

Future Directions

SkyMapper will continue to expand its network, integrate AI-powered data analysis, and build new scientific programs. These will include the detection of occultations by satellites of giant planets, alert systems for PHAs, meteor and bolide reentries, and broader collaborations with space agencies and research institutions worldwide. The platform also aims to support education and outreach, offering global access to astronomical data with a special focus on underserved communities and countries building an astronomy workforce.

How to cite: Marchis, F., Esposito, T. M., Hanuš, J., Pokorny, P., Lambert, R., Pilorz, S., Graykowski, A., and Sgro, L.: SkyMapper: A Decentralized Platform for Enhancing Planetary Science through Global Telescopic Networks, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-1038, https://doi.org/10.5194/epsc-dps2025-1038, 2025.

09:54–10:06 | EPSC-DPS2025-1284 | ECP | On-site presentation
Fabian Buse, Roman Holderried, Sandra Lagabarre, Naomi Murdoch, Juliane Skibbe, Michal Smisek, Simon Tardivel, and Pierre Vernazza

IDEFIX is a surface rover developed by CNES and DLR as part of JAXA's Martian Moons eXploration (MMX) mission. Designed to explore the surface of Phobos, it is the first wheeled robotic system intended to operate in a milli-gravity environment [1]. The rover's mobility system is central to its ability to conduct scientific investigations and navigate safely across uncertain and variable terrain. The rover hardware is already complete and awaits its launch towards Phobos in 2026.

Figure 1: IDEFIX in its flight configuration awaiting shipment to Japan.

The rover carries four instruments: a set of two navigation cameras (NavCam), a set of two cameras designed to analyze the interaction between the wheels and regolith (WheelCams) [2], a Raman spectrometer (RAX) analyzing the surface's mineralogical composition, and a thermal mapper (MiniRAD) [1].

The rover's mobility relies on five key contributors:

  • Its locomotion system enables mobility by allowing the rover to move across the surface and adjust its orientation.
  • Its navigation cameras provide the visual information needed for onboard or ground-based navigation.
  • Its wheel cameras provide crucial insights into the wheel-regolith interaction, enabling the rover team to assess safe driving strategies.
  • It has two autonomous onboard navigation systems, one developed by CNES and one by DLR, enabling the rover to undertake longer drives.
  • Its attitude control system (SKA) provides absolute attitude information to ground and onboard tools and autonomously commands the rover to align itself toward an optimal charging orientation.

The locomotion system [3] is driven by the requirement to aid the rover's initial uprighting: it unfolds and refolds its legs to bring the rover from any initial orientation after landing onto its belly. This requirement led to the design of individually actuated legs with independently driven wheels. The locomotion system relies on skid steering. The wheels are designed to operate on even extremely soft regolith while providing controllability on more rigid regolith. With limited knowledge of the regolith behavior on Phobos, no specific conditions were assumed during the system's design; instead, it is designed to operate under a wide range of conditions. Three locomotion system functions, key to operating the rover, are described below.

DRIVE executes a skid-steered drive based on a commanded distance and heading-angle change. In addition to these base arguments, traction can be shifted between the front and rear wheels, as this may be required to suppress wheelie-like behavior caused by too much traction on the rear wheels.

INCHING realizes an inchworm-like movement, designed to traverse especially soft or steep passages.

ALIGN changes the rover chassis orientation and height while suppressing longitudinal movements. This command allows the SKA [4] system to safely realign the rover towards the sun.

The rover's position and abilities in its environment are reconstructed on-ground based on:

  • camera images: local and regional terrain models, local terrain identification
  • 3-axis estimation computed by the attitude control system and the locomotion position: full rover pose estimation
  • locomotion system telemetry, foremost leg and wheel angles.

In addition, the rover's behavior will be analyzed by comparing expected and actual movements to update movement rules and improve mobility planning [5].

Moving the rover on Phobos through ground operation alone limits mobility to short, explicit movements that must be manually confirmed, restricting efficient mobility to short distances.

Both navigation systems [5,6] aim to overcome this limitation by accepting higher-level commands, changing the command from "drive 0.5 m and turn 15°" to "go to position [155, 188] in the mission frame (about 15 m ahead) with orientation 30°". There are several differences: the former is executed purely open-loop, while the latter is executed over multiple days, establishes a path that avoids hazardous areas along the way, is controlled in a closed-loop manner, and ensures the rover reaches its goal with some accuracy. Furthermore, both systems provide functions to ensure the rover's safety, e.g., monitoring effects like slippage and aborting when a threshold is exceeded.
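The closed-loop, slip-monitored behavior can be sketched as follows. This is an illustrative one-dimensional toy, not the CNES or DLR flight software; the slip threshold, step size, and function names are all made up for the example.

```python
# Hypothetical abort threshold: stop if more than 50 % of commanded
# motion is lost to slip, then wait for ground intervention.
SLIP_ABORT = 0.5

def goto(goal, measure_position, command_step, step=0.5, tol=0.05):
    """Drive toward `goal` in short closed-loop steps, monitoring slip."""
    pos = measure_position()
    while abs(goal - pos) > tol:
        commanded = min(step, abs(goal - pos))
        command_step(commanded if goal > pos else -commanded)
        new_pos = measure_position()
        achieved = abs(new_pos - pos)
        slip = 1.0 - achieved / commanded   # fraction of motion lost
        if slip > SLIP_ABORT:
            return "aborted", new_pos        # safety stop
        pos = new_pos
    return "reached", pos

# Toy environment: each commanded step achieves only 80 % (20 % slip),
# so the closed loop still converges while open-loop driving would not.
state = {"pos": 0.0}
def measure_position(): return state["pos"]
def command_step(d): state["pos"] += 0.8 * d

status, final = goto(3.0, measure_position, command_step)
print(status)  # 'reached': 20 % slip stays below the abort threshold
```

The open-loop DRIVE command corresponds to issuing a single `command_step` with no feedback; the closed loop re-measures after every step, which is what lets the rover absorb moderate slip and still arrive at the goal.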

Figure 2: (left) Prototypix, IDEFIX's earthbound validation and testing platform. (right) IDEFIX in simulation.

Multiple means are available to test, validate, and train IDEFIX's mobility, from a simulation dedicated to locomotion alone to a dynamic simulation coupled in a hardware-in-the-loop manner with a flight-representative onboard computer running the flight software [7]; see Figure 2 (right).

Testing mobility with a physical rover is limited as the effects of the difference in gravity are severe. First, the locomotion system is not designed to operate in Earth's gravity. While this issue can be solved using a gravity offloading system, the effects on regolith cannot be correctly replicated. Thus, tests with a complete rover system are limited to qualitative verification and some training aspects, see Figure 2 (left).

The IDEFIX rover's mobility systems are designed to address the challenges of operating in Phobos' milli-gravity environment. Our efforts aim to ensure safe, efficient mobility and maximize the scientific return of the MMX mission.

[1] Ulamec, S., et al.: Science objectives of the MMX rover, Acta Astronautica, 2023.

[2] Murdoch, N., et al.: The WheelCams on the IDEFIX rover, submitted to PEPS.

[3] Barthelmes, S., et al.: MMX rover locomotion subsystem: development and testing towards the flight model, 2022 IEEE Aerospace Conference (AERO 2022), IEEE, 2022.

[4] Lagabarre, S., et al.: Design of the MMX Rover Attitude Control System for Autonomous Power Supply, ESA GNC-ICATT 2023, 2023.

[5] Buse, F., et al.: Mobility on the Surface of Phobos for the MMX Rover: Simulation-aided Movement Planning, 17th ASTRA, 2023.

[6] Vayugundla, M., et al.: The MMX rover on Phobos: The preliminary design of the DLR autonomous navigation experiment, 2021 IEEE Aerospace Conference, IEEE, 2021.

[7] Buse, F., et al.: MMX Rover Simulation: Robotic Simulations for Phobos Operations, 2022 IEEE Aerospace Conference, IEEE, 2022.

How to cite: Buse, F., Holderried, R., Lagabarre, S., Murdoch, N., Skibbe, J., Smisek, M., Tardivel, S., and Vernazza, P.: Mobility Design and Challenges of the IDEFIX Rover on the Martian Moon Phobos, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-1284, https://doi.org/10.5194/epsc-dps2025-1284, 2025.

10:06–10:18 | EPSC-DPS2025-1658 | On-site presentation
The Role of Robotics in Planetary Exploration: Contributions from the University of Malaga
(withdrawn after no-show)
Carlos Jesús Pérez del Pulgar Manceb and David Rodriguez Martinez
10:18–10:30 | EPSC-DPS2025-1637 | ECP | On-site presentation
Paul Eckstein, Adam McSweeney, Ishuwa Sikaneta, Niels Holtgrefe, Max Bannach, Ines Belgacem, Holly Raynor, Bernhard Geiger, Björn Grieger, Arnaud Mahieux, Gerard Gallardo i Peres, Anne Grete Straume-Lindner, Jayne Lefort, and Thomas Voirin

Context: Due to data return limitations, ESA’s EnVision mission to Venus targets repeated Synthetic Aperture Radar (SAR) observation at 30 m resolution of roughly 30 % of the surface. The available downlink data volume varies throughout the mission as a function of Earth-Venus distance and visibility, so SAR observations need to be scheduled under spatial (target visibility on the Venus surface) and temporal (downlink capacity) constraints. The EnVision science orbit around Venus is near-polar, and the science operations phase covers 6 Venus cycles, roughly 4 Earth years [1]. This yields up to 6 imaging opportunities for a typical surface target, allowing pairs and triplets of observations in different cycles, under suitable conditions, to be processed as stereo imagery or used for change detection, respectively.

Aims: To develop the SAR-Combinatorial Analysis Synergy Tool (SAR-CAST): an automated strategy optimisation tool as part of the EnVision mission performance toolkit, to support the EnVision observation planning. Specifically, the tool shall group SAR observation budget units into Repeat Observation Combinations (ROCs such as stereo pairs or change detection triplets, optionally including a polarimetry observation), within mission, instrument and spacecraft constraints, while optimising a user-defined objective function considering both scientific value and engineering goals. These ROCs describe a combination of observations to make while the spacecraft orbits above a given band of longitudes. For a reference mission data budget, the tool generates a ROC plan that consumes this budget. The individual ROCs in the plan can be allocated to specific surface targets in a planning step downstream of SAR-CAST.

Method: A visual representation of these ROCs was developed to describe the budget allocation problem in a generic and intuitive way. Based on this representation, the optimisation problem can be translated into a mixed-integer linear program (MILP). Established tools like SCIP [2] find the optimal solution if one exists. If the constraints cannot be satisfied within the given budget, we use the “Big M” method to at least provide approximate, feasible solutions. For use of SAR-CAST as an interactive, explorative tool, marimo, a dataflow-based Python framework [3], is used to enable lazy recomputation of the MILP coefficients.
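The "Big M" softening can be illustrated on a toy version of the selection problem. All numbers, names, and the exhaustive search (standing in for a MILP solver such as SCIP) are made up for the sketch; the point is only how a penalised slack variable turns an infeasible coverage requirement into a least-bad feasible plan.

```python
from itertools import product

# Toy budget-allocation problem: select observation combinations (ROCs)
# to maximise value under a data-volume budget.
rocs = [  # (name, value, cost) -- illustrative numbers only
    ("stereo_pair", 5.0, 3.0),
    ("change_triplet", 8.0, 5.0),
    ("polarimetry", 3.0, 2.0),
]
BUDGET = 4.0
MIN_COVERAGE = 2      # soft constraint: want at least 2 ROCs in the plan
BIG_M = 100.0         # penalty per unit of constraint violation

best = None
for x in product([0, 1], repeat=len(rocs)):        # binary selection vars
    cost = sum(xi * r[2] for xi, r in zip(x, rocs))
    if cost > BUDGET:                              # hard budget constraint
        continue
    value = sum(xi * r[1] for xi, r in zip(x, rocs))
    shortfall = max(0, MIN_COVERAGE - sum(x))      # slack variable
    objective = value - BIG_M * shortfall          # Big-M soft constraint
    if best is None or objective > best[0]:
        best = (objective, x)

print(best[1])  # (1, 0, 0): coverage cannot be met within this budget,
                # so Big M yields the least-bad single-ROC plan
```

Without the penalised slack, the coverage constraint would make this instance infeasible and the solver would return no plan at all; with it, the plan degrades gracefully as the budget shrinks.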

Results: ROC plans can be generated interactively or through a command-line interface. Given a budget, the problem set-up takes around 15 minutes for a typical problem size of hundreds of constraints and thousands of variables, after which modification of most parameters requires updating only a subset of the constraints or coefficients. This enables best-case recomputation delays as short as 15 seconds, allowing near-real-time interactive exploration of each parameter’s impact. Based on a reference scenario (not necessarily reflecting the mission that will be flown), the coverage requirement performance equals or exceeds the previous manual solution. Parameters related to the constraints, their relaxation, and the objective function can be tweaked by the user to balance science and engineering needs.

Conclusions: With the SAR-CAST tool we have achieved automated optimisation of the SAR observation synergies at an abstract level, to support the creation of the science activity plan by the EnVision Science Working Team (SWT). In the future, the command-line interface of SAR-CAST would allow its integration into an automated science operations planning pipeline, and could exploit the lazy recomputation for sensitivity studies. Due to the large number of constraints and variables, the problem set-up takes significantly more time than the solving step (if an optimal solution can be found). Potential for improving efficiency in the variable and constraint creation has been identified. This could allow for even larger or more detailed versions of the problem to be solved in manageable timeframes. 

 

REFERENCES: 

[1] ESA. EnVision, Understanding why Earth’s closest neighbour is so different, ESA Definition Study Report, 2023, https://www.cosmos.esa.int/web/envision/links

[2] Bolusani, Suresh, et al. The SCIP Optimization Suite 9.0. 25734, Optimization Online, 26 Feb. 2024. optimization-online.org, https://optimization-online.org/?p=25734.

[3] Agrawal, Akshay, and Myles Scolnick. Marimo - an Open-Source Reactive Notebook for Python. Aug. 2023, https://github.com/marimo-team/marimo

How to cite: Eckstein, P., McSweeney, A., Sikaneta, I., Holtgrefe, N., Bannach, M., Belgacem, I., Raynor, H., Geiger, B., Grieger, B., Mahieux, A., Gallardo i Peres, G., Straume-Lindner, A. G., Lefort, J., and Voirin, T.: Optimising the Synthetic Aperture Radar Repeat Imaging Strategy for the EnVision Mission, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-1637, https://doi.org/10.5194/epsc-dps2025-1637, 2025.

Posters: Mon, 8 Sep, 18:00–19:30 | Finlandia Hall foyer

Display time: Mon, 8 Sep, 08:30–19:30
F90 | EPSC-DPS2025-207 | On-site presentation
Yoshiko Ogawa, Makiko Ohtake, Ryuhei Yamada, Yuichi Yaguchi, Hirohide Demura, Keiko Yamamoto, Zixian Yang, Alaeddin Nassani, and Takamasa Suzuki

Introduction

We are developing a ground demonstration field for lunar and planetary exploration in Fukushima Prefecture, Japan. This initiative is part of the Moon/Mars Garden (Hakoniwa) Project, led by the University of Aizu (UoA). Specifically, we are constructing a lunar analog environment at the Fukushima Robot Test Field (RTF). This study focuses on building a communication system infrastructure to support robotic teleoperation. Our aim is to address key challenges and contribute to solutions for deploying and operating robotic systems in extreme environments, such as those encountered during lunar and planetary missions.

Methods

To enable remote operation of robots at the Fukushima Robot Test Field (RTF), we have developed a relay-based communication system connecting three primary sites: RTF, the University of Aizu, and remote user locations. This system allows robots deployed at RTF to be operated and tested by users at distant locations. The communication architecture emulates actual mission scenarios—such as multi-robot operations on the Moon relayed through the Lunar Gateway to Earth-based mission control centers.

We have integrated a communication emulator into the relay system to simulate Earth–Moon and other space communication environments. The system allows for the controlled insertion of artificial delays, packet loss, and noise. It is designed to support multi-user and multi-robot operations, enabling each robot to be tested under different communication conditions independently.
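The delay and loss insertion described above can be sketched as follows. The class and parameter names are illustrative, not taken from the actual emulator; the point is that each robot/user pair gets its own independently configured link.

```python
import random

# Minimal sketch of a link emulator: stamp each command with a
# configurable one-way delay and drop a fraction of packets.
class LinkEmulator:
    def __init__(self, delay_s, loss_rate, seed=0):
        self.delay_s = delay_s          # e.g. ~1.3 s one-way Earth-Moon
        self.loss_rate = loss_rate      # fraction of packets dropped
        self.rng = random.Random(seed)  # seeded for repeatable tests

    def send(self, packet, t_sent):
        """Return (arrival_time, packet), or None if the packet is lost."""
        if self.rng.random() < self.loss_rate:
            return None
        return (t_sent + self.delay_s, packet)

# One link per robot, each with its own conditions, as in the relay system.
moon_link = LinkEmulator(delay_s=1.3, loss_rate=0.1)
delivered = [moon_link.send(f"cmd{i}", t_sent=i * 0.5) for i in range(100)]
arrived = [d for d in delivered if d is not None]
print(len(arrived))  # roughly 90 of 100 packets survive a 10 % loss rate
```

A real emulator would also add jitter and reordering and operate on live network traffic rather than in-process calls, but the same three knobs (delay, loss, noise) drive the test conditions.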

In parallel, we are developing a 3D telepresence visualization system to enhance robotic teleoperation. As an initial step, we have reconstructed a 3D model of the test environment at RTF using data acquired from cameras and LiDAR scanners. We evaluated multiple reconstruction methods, including 3D Gaussian Splatting and Neural Radiance Fields (NeRF), and incorporated the results into a virtual reality (VR) interface. While these processes are relatively fast, they do not yet achieve real-time performance. Therefore, we mounted a 360-degree camera on a reference rover, streamed the video to remote users, and transformed it into real-time VR content using Unity.

Experiments

We conducted a series of experiments to evaluate the remote operation of rovers at RTF, specifically around a newly created lunar analog crater (approximately 22 meters in diameter and over 2 meters in depth). A Starlink network was deployed to provide full wireless coverage of the crater area. Multiple rovers equipped with onboard sensors were operated remotely, and the relay-based communication system was validated under various artificial delay conditions, with each delay independently controlled by remote operators. These experiments were also demonstrated during semi-public events, allowing registered attendees to experience remote robotic teleoperation.

Regarding the visualization system, we successfully reconstructed a high-resolution 3D model of the lunar analog crater. Among the various methods tested, the fusion of depth camera and LiDAR data produced the most accurate results, although further quantitative evaluation is ongoing. We also streamed live 360-degree video from the rover to remote users through the communication relay server at UoA. Remote users viewed the scene in real-time using Meta Quest 3 VR headsets, significantly enhancing their situational awareness and operational control.

Discussion and Summary

The experiments conducted at the RTF's lunar analog crater demonstrated the feasibility and effectiveness of our communication and visualization systems for robotic teleoperation. We successfully established a communication infrastructure capable of supporting remote operations under simulated lunar conditions.

As part of future development, we are working to enhance system security through multi-session support and to upgrade the communication emulator to allow packet-level delay control. Additionally, we are exploring communication system designs for actual lunar applications, including the use of Low Power Wide Area (LPWA) technologies to enable reliable communication on the Moon. Such communication may be made feasible through coordinated multi-robot operations, with each robot equipped with a LoRa module.

The 3D visualization system plays a critical role in facilitating accurate and intuitive teleoperation. In actual missions, such systems are expected to support path planning, obstacle avoidance, and spatial awareness. We are currently investigating methods to generate 3D models in quasi-real time, with the ultimate goal of enabling remote operators to control robotic systems as if physically present at the test site.

At RTF, a realistic lunar analog crater with regolith-like sand has been constructed, which serves as the deployment site for this infrastructure. We plan to open the facility to potential users, providing a practical testbed for robotic experiments under realistic lunar analog conditions.

Acknowledgement

This research is supported by a subsidy from Fukushima Prefecture, Japan, and is being conducted by the University of Aizu in collaboration with fellow researchers and Japanese partner companies.

How to cite: Ogawa, Y., Ohtake, M., Yamada, R., Yaguchi, Y., Demura, H., Yamamoto, K., Yang, Z., Nassani, A., and Suzuki, T.: Development of a Robotic Teleoperation, Communication, and Visualization System in a Ground Testbed for Lunar and Planetary Exploration, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-207, https://doi.org/10.5194/epsc-dps2025-207, 2025.

F91 | EPSC-DPS2025-1736 | ECP | On-site presentation
Gemma Domènech Rams, Kike Herrero, Pere Gil, David Baroch, and Francesc Domene

There is an increasing need for systematic and coordinated ground-based follow-up observations to support space missions and the next generation of large ground-based surveys. The rapid reaction and flexible, remote scheduling capabilities of robotic facilities make them indispensable tools for the scientific exploration of transient, time-domain, and solar-system topics, all of which rely on a swift response to transient alerts or on unattended long-term monitoring programs.


The Montsec Observatory (OdM) is a research and technological infrastructure located in Sant Esteve de la Sarga (Lleida) and managed by the Institute of Space Studies of Catalonia (IEEC). The OdM currently hosts two astronomical telescopes: the Joan Oró Telescope (TJO), operated and managed by IEEC; and the TFRM, belonging to the Reial Acadèmia de Ciències i Arts de Barcelona (RACAB) and the Real Observatorio de la Armada (ROA). Both telescopes are devoted to multi-purpose astrophysical observations, as well as Space Surveillance and Tracking (SST) activities.


The TJO is a fully robotic 0.8-meter F/9.6 Ritchey-Chrétien optical telescope, equipped with two different instruments: an imaging camera (LAIA) and a spectrograph (ARES), which can be used interchangeably throughout any single night. Since the beginning of its operations in 2009, IEEC has been responsible for developing the software (such as the control system, the user interfaces, and the intelligent scheduler, including transient alerts) and hardware technologies (such as the electronics and the environmental control and monitoring system) that allow the TJO to operate in a fully unattended manner. The TJO is open to the scientific community and supports a wide range of observational astronomy programs, as well as SST activities. These scientific programs include (but are not limited to) the study of exoplanets, variable stars, active galactic nuclei, and transient phenomena such as gamma-ray bursts or supernovae; the TJO also regularly provides support for space missions such as Gaia, JWST, Ariel, and Einstein Probe.


Adding to this, we present a new facility that is currently being commissioned and will be managed by IEEC at the Montsec Observatory, in collaboration with the Port d’Informació Científica (PIC) infrastructure (operated by IFAE) and the i2CAT Foundation. The new Wide-Field Fast Telescope (Telescopi Ràpid d’Ample Camp; TRAC) is a 0.4-meter F/2.4 telescope designed for high-speed, wide-field imaging, with a fast direct-drive mount capable of slewing at 50 deg/s. The TRAC is equipped with a large-format CMOS camera offering high resolution and fast cadence, as well as back-focus capacity to host future additional instruments (such as optical communications systems).


The TRAC complements and expands the applications of the TJO: its science cases cover fields that typically cannot be addressed with the TJO due to technical constraints, including follow-up of near-Earth asteroids, fast transient phenomena (such as stellar occultations by asteroids), and astrophysics related to low-surface-brightness imaging. Part of the telescope time intended for these uses will be reserved for participation in European programs for Planetary Defence, including tracking of LEO objects; the rest will be open to the community through the same user interface system offered with the TJO and TFRM telescopes.


Behind these efforts is a framework of efficient robotic and autonomous operations and open data practices. The large volume of data expected from the telescope’s instrumentation will be managed through the resources of PIC (IFAE) to ensure its correct storage and distribution. Upcoming telescopes will benefit from the software and interfaces already developed by IEEC for the TJO for scheduling (ISROCS), operations (OCS), data reduction (ICAT), and the user interface (MUR), and will also make use of the Weather Control System, taking advantage of the multiple, redundant weather sensors already operational at the OdM. By developing and managing open, multi-purpose facilities, we support a wide range of scientific cases and space technology demonstrations, contributing to both observational astronomy and the development of autonomous instrumentation.

How to cite: Domènech Rams, G., Herrero, K., Gil, P., Baroch, D., and Domene, F.: Robotic Operations at the Montsec Observatory: current and upcoming facilities, EPSC-DPS Joint Meeting 2025, Helsinki, Finland, 7–13 Sep 2025, EPSC-DPS2025-1736, https://doi.org/10.5194/epsc-dps2025-1736, 2025.