EGU2020-19616
https://doi.org/10.5194/egusphere-egu2020-19616
EGU General Assembly 2020
© Author(s) 2021. This work is distributed under
the Creative Commons Attribution 4.0 License.

Down-scaling MODIS operational vegetation products with machine learning and fused gap-free high resolution reflectance data in Google Earth Engine

Álvaro Moreno Martínez1,4, Emma Izquierdo Verdiguier2, Gustau Camps Valls1, Marco Maneta3, Jordi Muñoz Marí1, Nathaniel Robinson5, José E. Adsuara1, Manuel Campos Taberner4, Francisco J. García Haro5, Adrián Pérez Suay1, and Steven W. Running4
  • 1University of Valencia, Image Processing Laboratory (IPL), Paterna, Spain
  • 2Institute of Geomatics, University of Natural Resources and Life Sciences (BOKU), Austria
  • 3Department of Geosciences, University of Montana, USA
  • 4Numerical Terradynamic Simulation Group (NTSG), University of Montana, USA
  • 5UV-Environmental Remote Sensing Group (UV-ERS), Universitat de València, Spain

Among the Essential Climate Variables (ECVs) for global climate observation, the Leaf Area Index (LAI) and the Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) are the most widely used to study vegetated land surfaces. NASA's Moderate Resolution Imaging Spectroradiometer (MODIS), a key instrument aboard the Terra and Aqua platforms, allows both biophysical variables to be estimated at coarse resolution (500 m) and global scale. The MODIS operational algorithm for retrieving LAI and FAPAR (MOD15/MYD15/MCD15) uses a physically based radiative transfer model (RTM) to compute these estimates from corrected surface spectral information. This algorithm has been extensively validated against field measurements and other sensors but, so far, no equivalent products at high spatial resolution and continental or global scales are routinely produced.
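For context, the operational 500 m MODIS LAI/FAPAR estimates and their qualitative quality flags can be inspected directly in Google Earth Engine. The following is a minimal sketch using the GEE Python API, assuming the MCD15A3H (4-day, 500 m) asset id, band names, and scale factors listed in the public data catalog; the date range and location are arbitrary placeholders, not values used in this work.

    import ee

    ee.Initialize()

    # MODIS Terra+Aqua combined LAI/FAPAR product (MCD15A3H): 4-day composites
    # at 500 m. Band names and scale factors follow the GEE data catalog entry.
    col = (ee.ImageCollection('MODIS/061/MCD15A3H')
           .filterDate('2018-07-01', '2018-08-01'))   # placeholder period

    lai = col.select('Lai').mean().multiply(0.1)       # LAI (m2/m2)
    fapar = col.select('Fpar').mean().multiply(0.01)   # FAPAR, fraction [0, 1]
    qc = col.select('FparLai_QC').mode()               # qualitative QC flags only

    # Inspect the mean LAI around an arbitrary point (here, near Valencia, Spain).
    aoi = ee.Geometry.Point([-0.38, 39.47]).buffer(5000)
    print(lai.reduceRegion(ee.Reducer.mean(), aoi, 500).getInfo())
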

Here, we introduce and validate a methodology to create a set of high spatial resolution LAI/FAPAR products by learning the MODIS RTM with advanced machine learning approaches applied to gap-filled Landsat surface reflectances. The latter are smoothed and gap-filled by the Highly Scalable Temporal Adaptive Reflectance Fusion Model (HISTARFM). HISTARFM has great potential to improve the original Landsat reflectances by reducing their noise and recovering data lost to cloud contamination. In addition, HISTARFM runs very fast on cloud computing platforms such as Google Earth Engine (GEE) and provides uncertainty estimates that can be propagated through the models. These estimates make it possible to compute numerical uncertainties that go beyond the typical qualitative quality-control layers provided with operational products such as MODIS LAI/FAPAR. The high spatial resolution biophysical products introduced here could help users reach the level of spatial detail needed to adequately monitor croplands and heterogeneously vegetated landscapes. A schematic code sketch of the downscaling idea is given below.
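The sketch below illustrates the downscaling idea with the GEE Python API. It is not the authors' implementation: the random forest emulator, the MCD43A4/MCD15A3H training pairs, the plain Landsat 8 Collection 2 surface reflectance (standing in for the gap-free HISTARFM reflectances), and the region and dates are all illustrative assumptions; the HISTARFM uncertainty propagation is only indicated in the comments.

    import ee

    ee.Initialize()

    region = ee.Geometry.Rectangle([-1.0, 39.0, -0.2, 39.8])   # placeholder area
    start, end = '2018-06-01', '2018-09-01'                    # placeholder period
    bands = ['red', 'nir', 'blue', 'green', 'swir1', 'swir2']

    # Coarse-scale target: MODIS LAI (MCD15A3H, 500 m), averaged over the period.
    lai_500m = (ee.ImageCollection('MODIS/061/MCD15A3H')
                .filterDate(start, end)
                .select('Lai').mean().multiply(0.1).rename('LAI'))

    # Coarse-scale predictors: MODIS NBAR reflectance (MCD43A4), renamed to
    # generic band names shared with the Landsat bands used below.
    modis_refl = (ee.ImageCollection('MODIS/061/MCD43A4')
                  .filterDate(start, end)
                  .select(['Nadir_Reflectance_Band1', 'Nadir_Reflectance_Band2',
                           'Nadir_Reflectance_Band3', 'Nadir_Reflectance_Band4',
                           'Nadir_Reflectance_Band6', 'Nadir_Reflectance_Band7'],
                          bands)
                  .mean().multiply(0.0001))

    # Sample (reflectance -> LAI) pairs at 500 m to learn the RTM-based mapping.
    samples = modis_refl.addBands(lai_500m).sample(
        region=region, scale=500, numPixels=5000, seed=42)

    # Train a random forest in regression mode as the machine learning emulator.
    rf = (ee.Classifier.smileRandomForest(numberOfTrees=100)
          .setOutputMode('REGRESSION')
          .train(features=samples, classProperty='LAI', inputProperties=bands))

    # Apply the emulator to 30 m Landsat reflectance. A median of Landsat 8
    # Collection 2 SR stands in for the gap-free HISTARFM reflectances of the
    # abstract. Propagating the HISTARFM uncertainty would amount to repeating
    # this prediction on perturbed reflectance realizations (e.g., Monte Carlo
    # draws from the per-pixel error estimates).
    landsat = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
               .filterBounds(region).filterDate(start, end)
               .select(['SR_B4', 'SR_B5', 'SR_B2', 'SR_B3', 'SR_B6', 'SR_B7'],
                       bands)
               .median()
               .multiply(0.0000275).add(-0.2))   # Collection 2 L2 scale factors

    lai_30m = landsat.classify(rf, 'LAI_30m')    # downscaled 30 m LAI map
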

 

How to cite: Moreno Martínez, Á., Izquierdo Verdiguier, E., Camps Valls, G., Maneta, M., Muñoz Marí, J., Robinson, N., Adsuara, J. E., Campos Taberner, M., García Haro, F. J., Pérez Suay, A., and Running, S. W.: Down-scaling MODIS operational vegetation products with machine learning and fused gap-free high resolution reflectance data in Google Earth Engine, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19616, https://doi.org/10.5194/egusphere-egu2020-19616, 2020
