Is maximizing spatial resolution worth the computational cost?
- Institute of Geoinformatics (ifgi), WWU Münster, Münster, Germany
Link: https://docs.google.com/document/d/15788dfGPL5ehDaDsO7BsOKoGk3Bk7g2epKQ58HiYZVM/edit
The core of the modern data revolution is data centers: “the central nervous system of the 21st century” [1], housing networking and storage equipment, and the servers that enable services such as cloud computing. They consume ever-increasing quantities of energy, not only to run their operations but also to cool their servers. With advances in cloud computing and the growth of Internet service use, data centers are estimated to have the fastest-growing carbon footprint across the whole ICT sector.
Although the opportunities and risks of Big Data are often discussed in the geosciences, most of the literature and initiatives surprisingly neglect a crucial risk: the data revolution itself hampers sustainable development through its environmental footprint. The ability to quantify and project data center energy use is therefore a key energy and climate policy priority.
Remote sensing products impose some of the highest storage demands, with imagery archives spanning petabytes. High- and very-high-resolution remote sensing imagery has emerged as an important data source for a wide range of geoscientific analyses, most of which are computationally taxing. Given this trend towards increasing spatial and temporal resolution, a crucial question remains: are the accuracy and overall quality of analysis results significantly affected when the standard high-resolution product is substituted with a less computationally intensive, coarser-resolution one?
Emerging products such as the World Settlement Footprint [2] and the Dynamic World [3] land use/land cover maps are produced at very high temporal (5-day) and spatial (10 m) resolution. A generally accepted attitude is that developing products at ever higher resolutions is a legitimate scientific goal. Often, however, the interest is not in which individual 10 m pixel changes land use and exactly when this happens, but rather in how many pixels change land use over a larger area (a country, or a basin) and over a longer time period (e.g., per year over a decade). For a few high-resolution products we evaluate and report how computing such aggregated target quantities from lower spatial and temporal resolution data changes the quality (accuracy) of the final product, and which resolutions still seem acceptable, as illustrated by the sketch below.
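To make the aggregation question concrete, the following is a minimal Python sketch, not the study's actual evaluation pipeline: it builds two synthetic 10 m class maps, coarsens them by per-block majority vote, and compares the aggregated change fraction (the target quantity) across grid sizes. The function name coarsen_mode, the synthetic maps, and the coarsening factors are all illustrative assumptions.

```python
import numpy as np

def coarsen_mode(lc, block):
    """Aggregate a categorical raster to a coarser grid by majority vote.

    lc: 2-D integer array of class labels; block: coarsening factor.
    Assumes both array dimensions are multiples of `block`.
    """
    h, w = lc.shape
    # Split the raster into (block x block) tiles, flattened per tile.
    tiles = lc.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    tiles = tiles.reshape(h // block, w // block, block * block)
    # Assign each coarse cell the most frequent class in its tile.
    out = np.empty(tiles.shape[:2], dtype=lc.dtype)
    for i in range(tiles.shape[0]):
        for j in range(tiles.shape[1]):
            out[i, j] = np.bincount(tiles[i, j]).argmax()
    return out

rng = np.random.default_rng(0)
# Two synthetic 10 m land cover maps (classes 0-3) for two dates;
# about 5% of pixels are reassigned between the dates.
t0 = rng.integers(0, 4, size=(600, 600), dtype=np.uint8)
t1 = t0.copy()
flip = rng.random(t0.shape) < 0.05
t1[flip] = rng.integers(0, 4, size=int(flip.sum()), dtype=np.uint8)

# Aggregated target quantity: fraction of area changing class,
# computed at full resolution and after coarsening both dates.
full = float((t0 != t1).mean())
for factor in (3, 6, 10):  # 30 m, 60 m, 100 m grids
    changed = (coarsen_mode(t0, factor) != coarsen_mode(t1, factor)).mean()
    print(f"{factor * 10:>4} m: change fraction {changed:.4f} (vs. {full:.4f} at 10 m)")
```

In the study itself, the analogous comparison is carried out on real products such as [2] and [3] rather than synthetic data; the sketch only shows how an area-level target quantity can be recomputed at several coarser resolutions and compared against the full-resolution value.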
[1] Lucivero, F. Big Data, Big Waste? A Reflection on the Environmental Sustainability of Big Data Initiatives. Sci Eng Ethics 26, 1009–1030 (2020). https://doi.org/10.1007/s11948-019-00171-7
[2] Marconcini, M., Metz-Marconcini, A., Üreyen, S. et al. Outlining where humans live, the World Settlement Footprint 2015. Sci Data 7, 242 (2020). https://doi.org/10.1038/s41597-020-00580-5
[3] Brown, C.F., Brumby, S.P., Guzder-Williams, B. et al. Dynamic World, Near real-time global 10 m land use land cover mapping. Sci Data 9, 251 (2022). https://doi.org/10.1038/s41597-022-01307-4
How to cite: Eid, Y. and Pebesma, E.: Is maximizing spatial resolution worth the computational cost?, EGU General Assembly 2023, Vienna, Austria, 24–28 Apr 2023, EGU23-14915, https://doi.org/10.5194/egusphere-egu23-14915, 2023.