EGU22-4534
https://doi.org/10.5194/egusphere-egu22-4534
EGU General Assembly 2022
© Author(s) 2022. This work is distributed under
the Creative Commons Attribution 4.0 License.

Spatial multi-modality as a way to improve both performance and interpretability of deep learning models to reconstruct phytoplankton time-series in the global ocean

Joana Roussillon1, Jean Littaye1, Ronan Fablet2, Lucas Drumetz2, Thomas Gorgues1, and Elodie Martinez1
  • 1Laboratoire d’Océanographie Physique et Spatiale, CNRS/IFREMER/IRD/UBO, Institut Universitaire Européen de la Mer, Plouzané, France (joana.roussillon@etudiant.univ-brest.fr)
  • 2IMT Atlantique, UMR CNRS LabSTICC, Technopole Brest Iroise, 29239 Brest, France

Phytoplankton plays a key role in the carbon cycle and fuels marine food webs. Its seasonal and interannual variations are relatively well known at the global scale thanks to satellite ocean color observations, which have been acquired continuously since 1997. However, the satellite-derived chlorophyll-a concentration (Chl-a, a proxy of phytoplankton biomass) time series is still too short to investigate the low-frequency variability of phytoplankton biomass. Machine learning models such as support vector regression (SVR) or multi-layer perceptrons (MLP) have recently proven to be an alternative to mechanistic approaches for reconstructing past Chl-a signals (including periods before the satellite era) from physical predictors, but they remain unsatisfactory. In particular, the relationships between phytoplankton and its surrounding physical environment are not spatially homogeneous, and training such models over the entire globe does not allow them to capture these regional specificities. Moreover, although the global ocean is commonly partitioned into biogeochemical provinces within which phytoplankton growth is assumed to be governed by similar processes, their time-evolving nature makes it difficult to impose a priori spatial constraints that restrict the learning phase to specific areas. Here, we propose to overcome this limitation by introducing spatial multi-modality into a convolutional neural network (CNN). The latter can learn, without particular supervision, several spatially weighted modes of variability. Each mode is associated with a CNN submodel trained in parallel, representing a mode-specific response of phytoplankton biomass to the physical forcing. Beyond improving reconstruction performance, we will show that the learned spatial modes appear physically consistent and may provide new insights into the physical-biogeochemical processes controlling phytoplankton distribution at the global scale.
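To illustrate the kind of spatially multi-modal architecture described above, the following is a minimal sketch, not the authors' implementation: a set of small CNN submodels is applied to gridded physical predictors, and a learned per-grid-point weight map (a softmax over modes) blends their outputs into a single reconstructed Chl-a field. The number of modes, layer sizes, grid shape, and predictor variables below are placeholder assumptions.

```python
# Minimal sketch of a spatially multi-modal CNN (assumed architecture, not the
# authors' code): K CNN submodels are trained in parallel and their outputs are
# blended by learned per-pixel mode weights (softmax over the K modes).
import torch
import torch.nn as nn

class MultiModalCNN(nn.Module):
    def __init__(self, n_predictors=4, n_modes=3, hidden=32,
                 grid_shape=(180, 360)):
        super().__init__()
        # One small CNN per mode, mapping physical predictors to a Chl-a field.
        self.submodels = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(n_predictors, hidden, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(hidden, 1, kernel_size=3, padding=1),
            )
            for _ in range(n_modes)
        ])
        # Learnable spatial weight maps (one logit map per mode); the softmax
        # in forward() turns them into a partition of unity at each grid point.
        self.mode_logits = nn.Parameter(torch.zeros(n_modes, *grid_shape))

    def forward(self, x):
        # x: (batch, n_predictors, lat, lon) gridded physical predictors
        preds = torch.stack([m(x).squeeze(1) for m in self.submodels], dim=1)
        weights = torch.softmax(self.mode_logits, dim=0)       # (K, lat, lon)
        # Weighted sum over modes -> reconstructed Chl-a field (batch, lat, lon)
        return (weights.unsqueeze(0) * preds).sum(dim=1)

# Usage sketch with hypothetical predictors (e.g. SST, SSH, wind stress, MLD):
model = MultiModalCNN(n_predictors=4, n_modes=3)
dummy = torch.randn(8, 4, 180, 360)
chl_hat = model(dummy)   # (8, 180, 360)
```

After training, the softmaxed weight maps can be inspected as the learned spatial modes, which is what would support the interpretability claim made in the abstract.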

How to cite: Roussillon, J., Littaye, J., Fablet, R., Drumetz, L., Gorgues, T., and Martinez, E.: Spatial multi-modality as a way to improve both performance and interpretability of deep learning models to reconstruct phytoplankton time-series in the global ocean, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4534, https://doi.org/10.5194/egusphere-egu22-4534, 2022.
