- 1Institute of Geosciences and Georesources, Padova, Italy
- 2INAF-Astronomical Observatory, Padova, Italy
While AI and neural-network automation advance, human interpretation of planetary imagery remains essential for mapping surface features, yet it introduces uncertainty due to variable expertise, fatigue, and ambiguous boundaries. Standardized protocols, best practices, and scalable participation are increasingly important to ensure reproducibility while addressing the growing volume of data. This study examines whether non-expert individuals, after targeted training, can complement or substitute for expert researchers in identifying and mapping boulders on the lunar surface, and quantifies where human variability most affects outcomes.
Two high-resolution Lunar Reconnaissance Orbiter image subsets in Mare Crisium, east of the Luna-24 landing site and adjacent to a fresh ~1-km Copernican crater, served as test areas (pixel scale ~0.5 m). An expert benchmark was established by three professional mappers and compared against two participant cohorts: 26 trainees from a winter school focused on planetary geological mapping and 65 amateur astronomers contributing via Zooniverse, a citizen-science web platform. All participants received concise training before independently mapping two areas with different boulder densities. Detection performance and internal consistency were evaluated as a function of observer-related factors, image features, and boulder size and density, alongside the impact of simple workflow rules designed to reduce ambiguity.
Results reveal observer-dependent variability, with larger discrepancies in the amateur cohort, particularly in dense fields and for smaller features close to the detection threshold. Agreement is highest on isolated, high-contrast boulders and declines where shadowing, albedo variations, or overlapping features complicate the interpretation. Short, standardized criteria and targeted examples reduce inter-observer differences, especially among trainees, while improving repeatability within each cohort. Aggregating multiple non-expert annotations and applying basic quality gates, such as thresholds on the abundance of mapped features, produces outputs approaching expert-level reliability.
Non-expert contributors, when provided with focused instruction and lightweight quality control, can reliably augment expert efforts in lunar boulder mapping, particularly for routine counting and mapping in simple settings. However, they do not fully substitute for experts in ambiguous contexts, where professional judgment remains markedly more reliable for consistent classification and boundary decisions. These findings support a hybrid approach combining expert-defined standards, brief training modules, consensus-based citizen contributions, and standardized workflows to enhance throughput without compromising scientific robustness, reliability, and consistency. More broadly, the structured approach demonstrated here, which combines expert-defined standards, targeted training, and consensus mechanisms, offers a potentially transferable methodological framework for research domains facing similar challenges of image-data volume and interpretive complexity.
How to cite: Rossato, S., Criscuolo, L., Da Lio, C., Dal Sasso, G., Frasca, G., Rossi, V. M., Vivaldo, G., Zaggia, L., Pajola, M., and Tusberti, F.: Human Factors in Lunar Boulder Mapping: Can Citizen Scientists Support Experts?, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21837, https://doi.org/10.5194/egusphere-egu26-21837, 2026.