EGU26-9080, updated on 14 Mar 2026
https://doi.org/10.5194/egusphere-egu26-9080
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
Poster | Tuesday, 05 May, 14:00–15:45 (CEST), Display time Tuesday, 05 May, 14:00–18:00
 
Hall X4, X4.59
Towards Prompt-Free Segmentation of Mariculture Facilities Using an AutoPrompt-Enhanced Segment Anything Model
Yunci Xu and Lizhen Lu
  • Zhejiang University, School of Earth Science, China

Accurate and scalable mapping of mariculture facilities is essential for coastal resource management, environmental monitoring, and sustainable aquaculture development. However, existing remote sensing–based segmentation approaches heavily rely on large amounts of annotated data or manual interaction, limiting their scalability and generalization. Recently, foundation models such as the Segment Anything Model (SAM) have demonstrated strong generalization ability across diverse visual domains. Nevertheless, SAM’s performance in remote sensing applications remains constrained by its reliance on manually selected prompts, which is impractical for large-scale or automated mapping tasks.

In this study, we propose an AutoPrompt-enhanced SAM framework (AutoPrompt-SAM) for the automated segmentation of mariculture facilities, specifically floating rafts and net cages, from high-resolution PlanetScope imagery. The framework eliminates the need for human-provided prompts by introducing an AutoPrompt module that automatically generates high-quality point prompts for SAM, enabling fully automated, prompt-free semantic segmentation.

As a foundation for this work, we construct a large-scale, high-quality mariculture facility segmentation dataset consisting of more than 1,000 manually annotated PlanetScope image patches with a spatial resolution of 3 m. Each sample is cropped to 256 × 256 pixels and includes pixel-level labels for floating rafts, net cages, and background. To the best of our knowledge, this dataset represents one of the first publicly available high-resolution semantic segmentation benchmarks for mariculture facilities based on PlanetScope imagery.
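The per-patch label layout described above can be illustrated with a minimal sketch. The class ids and the toy geometry below are purely hypothetical, not taken from the dataset release; the sketch only shows the three-class, pixel-level encoding of a single 256 × 256 patch.

```python
import numpy as np

# Hypothetical label encoding for one 256 x 256 patch (class ids are
# illustrative assumptions): 0 = background, 1 = floating raft, 2 = net cage.
label = np.zeros((256, 256), dtype=np.uint8)
label[40:80, 40:200] = 1    # a raft-like strip
label[150:190, 60:100] = 2  # a net-cage-like block

# Per-class pixel fractions for this synthetic patch.
classes, counts = np.unique(label, return_counts=True)
fractions = dict(zip(classes.tolist(),
                     (counts / label.size).round(4).tolist()))
print(fractions)
```

In a real benchmark such fractions would be computed over all annotated patches, e.g. to diagnose class imbalance between the small facility classes and the dominant water background.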

The proposed AutoPrompt module learns to generate representative prompt points directly from image features, without requiring any human interaction during inference. These automatically generated prompts are then fed into SAM to produce segmentation masks. By leveraging SAM’s powerful pre-trained visual representations, our method effectively combines the generalization capability of foundation models with task-specific structural cues learned by the AutoPrompt module. Experimental results demonstrate that AutoPrompt-SAM achieves competitive performance compared with manually prompted SAM, while completely removing the need for human intervention.
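The prompt-generation step above can be sketched in miniature. The sketch assumes a simple peak-picking strategy over a per-pixel score map (a stand-in for the learned AutoPrompt head; the module's actual architecture is not specified in this abstract): it selects a few well-separated high-score pixels as point prompts. In a full pipeline these coordinates would then be passed to SAM, e.g. via `SamPredictor.predict(point_coords=..., point_labels=...)` from the `segment-anything` package.

```python
import numpy as np

def autoprompt_points(score_map: np.ndarray, k: int = 3, min_dist: int = 8):
    """Pick up to k well-separated high-score pixels as (x, y) point prompts.

    score_map: (H, W) per-pixel facility scores in [0, 1], assumed to come
    from a small learned head over image features (hypothetical stand-in
    for the AutoPrompt module).
    """
    _, w = score_map.shape
    order = np.argsort(score_map, axis=None)[::-1]  # pixel indices, high to low
    points = []
    for idx in order:
        y, x = divmod(int(idx), w)
        # Enforce a minimum spacing so prompts cover distinct facilities.
        if all((x - px) ** 2 + (y - py) ** 2 >= min_dist ** 2
               for px, py in points):
            points.append((x, y))
        if len(points) == k:
            break
    return points

# Toy 32 x 32 score map with two bright blobs standing in for two facilities.
scores = np.zeros((32, 32))
scores[5:9, 5:9] = 0.9
scores[20:24, 20:24] = 0.8
pts = autoprompt_points(scores, k=2, min_dist=8)
print(pts)  # one prompt per blob, highest-scoring blob first
```

The spacing constraint is one design choice among many; a learned module could instead regress prompt coordinates or rank candidate points directly.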

Beyond mariculture mapping, we further investigate the transferability of the proposed framework. Without additional labeled data, AutoPrompt-SAM shows strong generalization performance when applied to other remote sensing segmentation scenarios, indicating that the learned prompt generation strategy captures transferable spatial and structural patterns. This highlights the potential of AutoPrompt-SAM as a label-efficient and domain-adaptive segmentation framework, capable of extending SAM to broader remote sensing applications.

Overall, this work makes three key contributions: (1) the construction of a large-scale, high-resolution PlanetScope mariculture facility segmentation dataset; (2) the proposal of an AutoPrompt-driven SAM framework that enables fully automated, prompt-free semantic segmentation while effectively exploiting SAM’s pre-trained knowledge; and (3) a demonstration of the framework’s strong transferability, offering a new pathway for reducing human intervention and annotation dependency in remote sensing segmentation tasks. The proposed approach provides a practical solution for adapting foundation models to large-scale Earth observation applications and paves the way toward more autonomous and scalable remote sensing analysis.

How to cite: Xu, Y. and Lu, L.: Towards Prompt-Free Segmentation of Mariculture Facilities Using an AutoPrompt-Enhanced Segment Anything Model, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-9080, https://doi.org/10.5194/egusphere-egu26-9080, 2026.