EGU2020-12492
https://doi.org/10.5194/egusphere-egu2020-12492
EGU General Assembly 2020
© Author(s) 2020. This work is distributed under
the Creative Commons Attribution 4.0 License.

Modeling and Capturing New Phenomena from Very High Cadence Earth Observations

Giovanni Marchisio and Rasmus Houborg
  • Planet, Analytics Engineering, United States of America (giovanni@planet.com)

Planet operates the largest constellation of Earth-observing satellites in human history, collecting 1.3 million 29 MP multispectral images daily over 250 million km² at a resolution of 3–5 meters. This amounts to more than twice the Earth's total landmass every day, and to more than 10 times the area covered by all other commercial and public sources combined, including Sentinel and Landsat, at a higher resolution. To date we have collected an average of 1,200 images for every point on the surface of the planet. This provides an unparalleled amount of data from which to establish historical baselines and to train and refine machine learning algorithms. Intersecting dense time series of global observations with modern deep learning solutions allows us to take a daily pulse of the planet as has never been done before.
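A back-of-envelope check of the collection figures quoted above (1.3 million 29 MP images per day over 250 million km² at 3–5 m ground sample distance) can be sketched in a few lines. The raw pixel footprints computed here are illustrative assumptions derived only from the numbers in the abstract, before any accounting for scene overlap:

```python
# Figures quoted in the abstract.
IMAGES_PER_DAY = 1.3e6
PIXELS_PER_IMAGE = 29e6          # 29 megapixels per image
GSD_M = (3.0, 5.0)               # ground sample distance range, meters

pixels_per_day = IMAGES_PER_DAY * PIXELS_PER_IMAGE   # ~3.8e13 pixels/day

def footprint_km2(gsd_m: float) -> float:
    """Raw pixel footprint (before overlap between scenes) at a given GSD."""
    return pixels_per_day * gsd_m ** 2 / 1e6         # m^2 -> km^2

low, high = footprint_km2(GSD_M[0]), footprint_km2(GSD_M[1])
print(f"{pixels_per_day:.2e} pixels/day")
print(f"raw pixel footprint: {low:.2e} - {high:.2e} km^2/day")
```

The quoted unique-area coverage of 2.5 × 10⁸ km²/day sits below the raw 3-m footprint of roughly 3.4 × 10⁸ km²/day, which is consistent once overlap between adjacent scenes is discounted.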

The daily temporal cadence and higher resolution at global scale are unlocking new challenges and opportunities. These range from tracking and discovering previously unknown natural phenomena to improving existing approaches for modeling vegetation phenology and monitoring human impact on the environment. We will provide a brief overview of recent success stories from our university partner ecosystem. For instance, spatio-temporal analytics based on millions of observations have enabled researchers to show that sub-seasonal fluctuations in surface water across Arctic-Boreal regions can increase carbon emissions and affect global climate to an extent that has eluded traditional satellite remote sensing. The new data source has also enabled intraday measurements of river flows, the first-ever measurements of crop water usage and evapotranspiration from space, field-level sowing date prediction on a nearly daily basis, and improved detection of early-season corn nitrogen stress.

The second part of our presentation covers Planet's own internal development of spatio-temporal deep learning solutions that target the interaction between the geosphere and the anthroposphere. Man-made structures such as roads and buildings are among the information layers that we are beginning to extract from our imagery reliably and at global scale. Our deep learning models, with about seven million parameters, are trained on several billion labeled pixels representative of a wide variety of terrains, densities, land cover types and seasons worldwide. The outcome is a pipeline that has produced the most complete and current map of roads and buildings worldwide. It reveals details not available in popular mapping tools, in both industrialized cities and rural settlements. The high temporal cadence of these spatial information feeds increases our confidence in tracking permanent change associated with urbanization and improves our knowledge of how human settlements grow. Applications include tracking urban sprawl at the country level in China, deriving land consumption rates for countries in Sub-Saharan Africa, identifying construction in flood risk zones worldwide, and timely augmentation of OpenStreetMap in disaster management situations affecting developing countries. With continually refreshed imagery from space, such maps can be updated to highlight new changes around the world, opening up new possibilities to improve transparency and help life on Earth.
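The abstract cites a model size of about seven million parameters but does not disclose the architecture. Purely as an illustration of the regime, a small U-Net-style encoder-decoder for per-pixel road/building segmentation (hypothetical channel widths, two 3×3 convolutions per stage, no skip connections, a 1×1 prediction head) lands near that count. All widths and the 4-band input (e.g. RGB + NIR) are assumptions, not Planet's actual design:

```python
def conv_params(c_in: int, c_out: int, k: int = 3) -> int:
    """Parameter count of a single k x k convolution: weights + bias."""
    return c_in * c_out * k * k + c_out

# Hypothetical channel widths: 4 input bands, 2 output classes
# (building, road). Encoder doubles channels; decoder mirrors it.
stages = [(4, 64), (64, 128), (128, 256), (256, 512),   # encoder
          (512, 256), (256, 128), (128, 64)]            # decoder

# Two 3x3 convs per stage, plus a 1x1 prediction head.
total = sum(conv_params(a, b) + conv_params(b, b) for a, b in stages)
total += conv_params(64, 2, k=1)

print(f"{total / 1e6:.1f}M parameters")   # ~7.0M
```

The point of the sketch is only that fairly modest fully-convolutional architectures reach the stated parameter budget; the capacity of such models comes mainly from the billions of labeled training pixels rather than from model size.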

How to cite: Marchisio, G. and Houborg, R.: Modeling and Capturing New Phenomena from Very High Cadence Earth Observations, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12492, https://doi.org/10.5194/egusphere-egu2020-12492, 2020
