Displays

ESSI2.10

Earth, space, and environmental scientists are pushing the boundaries of human understanding of our complex world. They seek and use ever larger and more varied data in their research, with a growing need for data integration and synthesis across and among scientific domains. Tools, services, and data skills are critical resources in the research ecosystem, enabling the harmonization and integration of data with different temporal and spatial ranges. This session explores the challenges, successes, and best practices the data community has encountered in using data from multiple sources and scientific domains with unfamiliar formats, vocabularies, quality, and uncertainty, and in providing support and services for accessing these data. We seek submissions from the community of data producers, enablers, researchers, and users on methods for identifying and communicating best practices, on challenges in this diverse data environment, and on building critical data skills related to data integration and data management.

Co-sponsored by AGU
Convener: Shelley Stall | Co-conveners: Nancy Ritchey, Philip Kershaw
Displays | Attendance Wed, 06 May, 08:30–10:15 (CEST)


Chat time: Wednesday, 6 May 2020, 08:30–10:15

D815 |
EGU2020-11709
Eugene Burger, Benjamin Pfeil, Kevin O'Brien, Linus Kamb, Steve Jones, and Karl Smith

Data assembly in support of global data products, such as GLODAP, and submission of data to national data centers for long-term preservation demand significant effort. This is in addition to the effort required to perform quality control on the data prior to submission. Delays in data assembly can negatively affect the timely production of scientific indicators that depend upon these datasets, including products such as GLODAP. What if data submission, metadata assembly, and quality control could all be rolled into a single application? To support more streamlined data management processes in the NOAA Ocean Acidification Program (OAP), we are developing such an application, which has the potential to serve a broader community.

This application addresses the need for data contributing to analysis and synthesis products to be high quality, well documented, and accessible from the applications scientists prefer to use. The Scientific Data Integration System (SDIS), developed by the PMEL Science Data Integration Group, allows scientists to submit their data in a number of formats. Submitted data are checked for common errors, and metadata are extracted from the data. These extracted metadata can then be complemented into a complete metadata record using the integrated metadata entry tool, which collects rich metadata meeting the carbon science community's requirements. Quality control for standard biogeochemical parameters, still under development, will be integrated into the application. The quality control routines will be implemented in close collaboration with colleagues from the Bjerknes Climate Data Centre (BCDC) within the Bjerknes Centre for Climate Research (BCCR). This presentation will highlight the capabilities that are now available, the implementation of the archive automation workflow, and its potential use in support of GLODAP data assembly efforts.
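
As an illustration of the kind of automated checking such a submission pipeline performs, here is a minimal sketch in Python (column names and rules are hypothetical, not SDIS's actual checks):

```python
# Minimal sketch of an automated submission check; hypothetical rules only.
import pandas as pd

def check_common_errors(path: str) -> list[str]:
    """Return a list of human-readable problems found in a submitted CSV."""
    problems = []
    df = pd.read_csv(path)
    for col in ("latitude", "longitude"):
        if col not in df.columns:
            problems.append(f"missing required column: {col}")
    if "latitude" in df.columns and not df["latitude"].between(-90, 90).all():
        problems.append("latitude values outside [-90, 90]")
    if "longitude" in df.columns and not df["longitude"].between(-180, 180).all():
        problems.append("longitude values outside [-180, 180]")
    if df.duplicated().any():
        problems.append("duplicate rows detected")
    return problems
```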

How to cite: Burger, E., Pfeil, B., O'Brien, K., Kamb, L., Jones, S., and Smith, K.: Streamlining Oceanic Biogeochemical Dataset Assembly in Support of Global Data Products, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11709, https://doi.org/10.5194/egusphere-egu2020-11709, 2020.

D816 |
EGU2020-20966
Bente Bye, Elnaz Neinavaz, Alaitz Zabala, Joan Maso, Marie-Francoise Voidrot, Barth De Lathouwer, Nuno Catarino, Pedro Gonzalves, Michelle Cortes, Koushik Panda, Julian Meyer-Arnek, and Bram Janssen

The geosciences communities share common challenges related to effective use of the vast and growing amount of data, as well as the continuous development of new technology. There is therefore great potential in learning from the experiences and knowledge acquired across the various fields. The H2020 project NextGEOSS is building a European data hub and platform to support the Earth observation communities with a set of tools and services. The suite of tools on the platform allows scalability, interoperability, and transparency in a flexible way, well suited to serve a multifaceted interdisciplinary community. NextGEOSS is developed with and for multiple communities, the NextGEOSS pilots. This has resulted in, and continues to provide, transfer of experience and knowledge along the whole value chain, from data provision to applications and services based on multiple sources of data. We will introduce the NextGEOSS data hub and platform and show some illustrative examples of the exchange of knowledge that facilitates faster uptake of data and advances in the use of new technology. An onboarding system benefits both existing and new users. A capacity building strategy is an integral part of both the onboarding and the individual services, and will be highlighted in this presentation.

How to cite: Bye, B., Neinavaz, E., Zabala, A., Maso, J., Voidrot, M.-F., De Lathouwer, B., Catarino, N., Gonzalves, P., Cortes, M., Panda, K., Meyer-Arnek, J., and Janssen, B.: NextGEOSS data hub and platform - connecting data providers with geosciences communities, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20966, https://doi.org/10.5194/egusphere-egu2020-20966, 2020.

D817 |
EGU2020-12386
| Highlight
Meredith Richardson, Ed Kearns, and Jonathan O'Neil

Through satellites, ships, radars, and weather models, the National Oceanic and Atmospheric Administration (NOAA) generates and handles tens of terabytes of data per day. Many of NOAA’s key datasets have been made available to the public through partnerships with Google, Microsoft, Amazon Web Services, and others as part of the Big Data Project (BDP). This movement of data to the Cloud has given researchers from all over the world access to vast amounts of NOAA data, initiating a new form of federal data management as well as exposing key challenges for the future of open-access data. NOAA researchers have run into the challenge of providing “analysis-ready” datasets that researchers from varying fields can easily access, manipulate, and use for different purposes. This issue arises because there is no agreed-upon format or method of transforming traditional datasets for the cloud across research communities, with each scientific field or start-up expressing differing data formatting needs (cloud-optimized, cloud-native, etc.). Some possible solutions involve changing data formats into those widely used throughout the visualization community, such as Cloud-Optimized GeoTIFF. Initial findings have led NOAA to facilitate roundtable discussions with researchers, public and private stakeholders, and other key members of the data community to encourage the development of best practices for the use of public data on commercial cloud platforms. Overall, by uploading NOAA data to the Cloud, the BDP has led to the recognition and ongoing development of new best practices for data authentication and dissemination, and to the identification of key areas for targeted collaboration and data use across scientific communities.
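
As an illustration of why cloud-optimized formats matter: a Cloud-Optimized GeoTIFF lets a client read just the window it needs over HTTP instead of downloading the whole file. A minimal sketch with the open-source rasterio library (the URL is a placeholder, not an actual NOAA product):

```python
# Sketch: reading one window from a remote Cloud-Optimized GeoTIFF.
import rasterio
from rasterio.windows import Window

url = "https://example-bucket.s3.amazonaws.com/some_product.tif"  # placeholder
with rasterio.open(url) as src:
    # Only the bytes for this 512x512 window of band 1 are fetched.
    block = src.read(1, window=Window(0, 0, 512, 512))
    print(src.crs, src.res, block.shape)
```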

How to cite: Richardson, M., Kearns, E., and O'Neil, J.: Data dissemination best practices and challenges identified through NOAA’s Big Data Project, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12386, https://doi.org/10.5194/egusphere-egu2020-12386, 2020.

D818 |
EGU2020-5972
| Highlight
Kaylin Bugbee, Aaron Kaulfus, Aimee Barciauskas, Manil Maskey, Rahul Ramachandran, Dai-Hai Ton That, Chris Lynnes, Katrina Virts, Kel Markert, and Amanda Whitehurst

The scientific method within the Earth sciences is rapidly evolving. Ever-increasing data volumes require new methods for processing and understanding data, while an almost 60-year Earth observation record makes more data-intensive retrospective analyses possible. These new methods of data analysis are made possible by technological innovations and interdisciplinary scientific collaborations. While scientists are beginning to adopt new technologies and collaborations to conduct data-intensive research more effectively, both the data information infrastructure and the supporting data stewardship model have been slow to change. Standard data products are generated by a processing system and then ingested into local archives. These local archive centers then provide metadata to a centralized repository for search and discovery. Each step in the data process occurs independently, on different siloed components. Similarly, the data stewardship process has a well-established but narrow view of data publication that may be too constrained for an ever-changing data environment. To overcome these obstacles, a new approach is needed for both the data information infrastructure and stewardship models. The data ecosystem approach offers a solution to these challenges by placing an emphasis on the relationships between data, technologies, and people. In this presentation, we present the Joint ESA-NASA Multi-Mission Algorithm and Analysis Platform’s (MAAP) data system as a forward-looking ecosystem solution. We will present the components needed to support the MAAP data ecosystem along with the key capabilities the MAAP data ecosystem supports. These capabilities include the ability for users to share data and software within the MAAP, the creation of analysis-optimized data services, and the creation of an aggregated catalog for data discovery. We will also explore our data stewardship efforts within this new type of data system, which include developing a data management plan and a level-of-service plan.

How to cite: Bugbee, K., Kaulfus, A., Barciauskas, A., Maskey, M., Ramachandran, R., Ton That, D.-H., Lynnes, C., Virts, K., Markert, K., and Whitehurst, A.: Data Systems to Enable Open Science: The Joint ESA-NASA Multi-Mission Algorithm and Analysis Platform’s Data Ecosystem, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5972, https://doi.org/10.5194/egusphere-egu2020-5972, 2020.

D819 |
EGU2020-4616
| Highlight
Martina Stockhause, Mark Greenslade, David Hassell, and Charlotte Pascoe

Climate change data and information are among those of highest interest to cross-domain researchers, policy makers, and the general public. Serving climate projection data to these diverse users requires detailed and accessible documentation.

Thus, the CMIP6 (Coupled Model Intercomparison Project Phase 6) data infrastructure consists not only of the ESGF (Earth System Grid Federation) as the data dissemination component but also of ES-DOC (Earth System Documentation) and the Citation Service for describing the provenance of the data. These services provide further information on the data creation process (experiments, models, …) and data reuse (data references and licenses) and connect the data to other external resources such as research papers.

The contribution will present documentation of the climate change workflow, with the furtherInfoURL page serving as an entry point. The challenges are to collect quality-controlled information from the international research community in different infrastructure components and to display it seamlessly on the furtherInfoURL page.
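
For orientation, the entry point is carried in each CMIP6 file's global attributes; a minimal sketch of retrieving it with xarray (the filename is a placeholder):

```python
# Sketch: locating the furtherInfoURL entry point from a CMIP6 file's
# global attributes. "further_info_url" is defined by the CMIP6 conventions.
import xarray as xr

ds = xr.open_dataset("tas_Amon_MODEL_historical_r1i1p1f1_gn_185001-201412.nc")
print(ds.attrs.get("further_info_url"))
# resolves to an ES-DOC-hosted documentation page for this dataset
```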

 

References / Links:

  • CMIP6: https://pcmdi.llnl.gov/CMIP6/
  • ES-DOC: https://es-doc.org/
  • Citation Service: http://cmip6cite.wdc-climate.de

How to cite: Stockhause, M., Greenslade, M., Hassell, D., and Pascoe, C.: Documentation of climate change data supporting cross-domain data reuse, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4616, https://doi.org/10.5194/egusphere-egu2020-4616, 2020.

D820 |
EGU2020-8638
Michael Finkel, Albrecht Baur, Tobias K.D. Weber, Karsten Osenbrück, Hermann Rügner, Carsten Leven, Marc Schwientek, Johanna Schlögl, Ulrich Hahn, Thilo Streck, Olaf A. Cirpka, Thomas Walter, and Peter Grathwohl

The consistent management of research data is crucial for the success of long-term and large-scale collaborative research. Research data management is the basis for the efficiency, continuity, and quality of the research, as well as for maximum impact and outreach, including the long-term publication of data and their accessibility. Both funding agencies and publishers increasingly require this long-term and open access to research data. Joint environmental studies typically take place in a fragmented research landscape of diverse disciplines; the researchers involved typically show a variety of attitudes towards, and previous experiences with, common data policies, and the extensive variety of data types in interdisciplinary research poses particular challenges for collaborative data management.

We present organizational measures, data and metadata management concepts, and technical solutions that form a flexible research data management framework, allowing the full range of data and metadata to be shared efficiently among all researchers of the project, and selected data and data streams to be published smoothly to publicly accessible sites. The concept is built upon data-type-specific and hierarchical metadata using a common taxonomy agreed upon by all researchers of the project. The framework’s concept has been developed along the needs and demands of the scientists involved, and aims to minimize their effort in data management. We illustrate this from the researchers’ perspective, describing their typical workflow from the generation and preparation of data and metadata to the long-term preservation of data including their metadata.
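
A minimal, hypothetical sketch of what a hierarchical, data-type-specific metadata record under a common taxonomy could look like (all field names are invented for illustration, not the project's actual schema):

```python
# Sketch: a hierarchical metadata record; type-specific fields live in
# their own branch, and taxonomy terms come from an agreed common list.
record = {
    "project": "example-collaboration",
    "site": {"name": "catchment-A", "lat": 48.5, "lon": 9.1},
    "dataset": {
        "type": "time_series",              # data-type-specific branch
        "variable": "groundwater_level",
        "unit": "m",
        "taxonomy": ["hydrology", "monitoring", "well"],
    },
    "provenance": {"creator": "A. Researcher", "license": "CC-BY-4.0"},
}
```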

How to cite: Finkel, M., Baur, A., Weber, T. K. D., Osenbrück, K., Rügner, H., Leven, C., Schwientek, M., Schlögl, J., Hahn, U., Streck, T., Cirpka, O. A., Walter, T., and Grathwohl, P.: Managing collaborative research data for integrated, interdisciplinary environmental research, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8638, https://doi.org/10.5194/egusphere-egu2020-8638, 2020.

D821 |
EGU2020-7115
| Highlight
Laia Comas-Bru and Marcus Schmidt

Data management can be overwhelming, especially for Early Career Scientists. In order to give them a kick-start, the World Data System (WDS) organised a 3-day EGU-sponsored workshop on current achievements and future challenges in November 2019 in Paris. The purpose of the workshop was to provide Early Career Scientists with practical skills in data curation and management through a combination of practical sessions, group discussions, and lectures. Participants were introduced to what research data are and to a common vocabulary to be used during the workshop. Later, a World Café session provided an opportunity to discuss individual challenges in data management and expectations of the workshop in small groups of peers. Lectures and discussions evolved around Open Science, Data Management Plans (DMP), data exchange, copyright and plagiarism, and the use of Big Data, ontologies, and cloud platforms in science. Finally, the roles and responsibilities of the WDS as well as its WDS Early Career Researcher Network were discussed. Wrapping up the workshop, attendees were walked through what a data repository is and how repositories obtain their certifications. This PICO presentation, given by two attendees of the workshop, will showcase the main topics of discussion on data management and curation, provide key examples with special emphasis on the importance of creating a DMP at an early stage of a research project, and share practical tools and advice on how to make data management more accessible.

How to cite: Comas-Bru, L. and Schmidt, M.: Data Management for Early Career Scientists – How to Tame the Elephant, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7115, https://doi.org/10.5194/egusphere-egu2020-7115, 2020.

D822 |
EGU2020-22052
Peter Baumann and team

Collaboration requires some minimum of common understanding; in the case of Earth data, in particular, common principles making data interchangeable, comparable, and combinable. Open standards help here; in the case of Big Earth Data, specifically the OGC/ISO Coverages standard. This unifying framework establishes a common model for regular and irregular grids, point clouds, and meshes, in particular for spatio-temporal datacubes. Services grounded in such common understanding can be more uniform to access and handle, thereby implementing a principle of "minimal surprise" for users visiting different portals. Further, data combination and fusion benefit from canonical metadata allowing alignment, e.g., between 2D DEMs, 3D satellite image timeseries, and 4D atmospheric data.

The EarthServer federation is an open data center network offering dozens of petabytes of data of critical variety, such as radar and optical Copernicus data, atmospheric data, elevation data, and thematic cubes like global sea ice. Data centers like the DIASs and CODE-DE, research organizations, companies, and agencies have teamed up in EarthServer. Strictly based on OGC standards, an ecosystem of data has been established that is available to users as a single pool, in particular for efficient distributed data fusion irrespective of data location.

The underlying datacube engine, rasdaman, enables location-transparent federation: clients can submit queries to any node, regardless of where the data sit. Query evaluation is optimized automatically, including multi-data fusion of data residing on different nodes. Hence, users perceive one single, common information space. Thanks to the open standards, a broad spectrum of open-source and proprietary clients can utilize this federation, ranging from OpenLayers and NASA WorldWind over QGIS and ArcGIS to Python and R.
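
As an illustration of what such standards-based access looks like in practice, a minimal sketch of an OGC WCPS (Web Coverage Processing Service) query sent to a rasdaman endpoint over HTTP (endpoint and coverage name are placeholders; in the federation, any node can answer regardless of where the data reside):

```python
# Sketch: subset and encode a datacube server-side via a WCPS query.
import requests

wcps = """
for $c in (mean_summer_airtemp)
return encode($c[Lat(-30:-20), Long(120:130)], "image/png")
"""
resp = requests.get(
    "https://ows.example.org/rasdaman/ows",   # placeholder endpoint
    params={"service": "WCS", "version": "2.0.1",
            "request": "ProcessCoverages", "query": wcps},
)
open("subset.png", "wb").write(resp.content)  # only the subset travels
```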

In our talk we present technology, services, and governance of this unique intercontinental line-up of data centers. A demo will show distributed datacube fusion live.

How to cite: Baumann, P. and team: Towards Seamless Planetary-Scale Services, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22052, https://doi.org/10.5194/egusphere-egu2020-22052, 2020.

D823 |
EGU2020-18428
Eleanor A Ainscoe, Barbara Hofmann, Felipe Colon, Iacopo Ferrario, Quillon Harpham, Samuel JW James, Darren Lumbroso, Sajni Malde, Francesca Moschini, and Gina Tsarouchi

The current increase in the volume and quality of Earth Observation (EO) data being collected by satellites offers the potential to contribute to applications across a wide range of scientific domains. It is well established that there are correlations between characteristics that can be derived from EO satellite data, such as land surface temperature or land cover, and the incidence of some diseases. Thanks to the reliable frequent acquisition and rapid distribution of EO data it is now possible for this field to progress from using EO in retrospective analyses of historical disease case counts to using it in operational forecasting systems.

However, bringing together EO-based and non-EO-based datasets, as is required for disease forecasting and many other fields, requires carefully designed data selection, formatting and integration processes. Similarly, it requires careful communication between collaborators to ensure that the priorities of that design process match the requirements of the application.

Here we will present work from the D-MOSS (Dengue forecasting MOdel Satellite-based System) project. D-MOSS is a dengue fever early warning system for South and South East Asia that will allow public health authorities to identify areas at high risk of disease epidemics before an outbreak occurs, in order to target resources to reduce the spread of epidemics and improve disease control. The D-MOSS system uses EO, meteorological, and seasonal weather forecast data, combined with disease statistics and static layers such as land cover, as the inputs into a dengue fever model and a water availability model. Water availability directly impacts dengue epidemics through the provision of mosquito breeding sites. The datasets are regularly updated with the latest data and run through the models to produce a new monthly forecast. For this we have designed a system to reliably feed standardised data to the models. The project has involved a close collaboration between remote sensing scientists, geospatial scientists, hydrologists, and disease modelling experts. We will discuss our approach to the selection of data sources, data source quality assessment, and the design of a processing and ingestion system to produce analysis-ready data for input to the disease and water availability models.
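
As a small illustration of the standardisation step such a pipeline performs (a sketch only; file and variable names are placeholders, not the D-MOSS interfaces):

```python
# Sketch: aggregate a daily EO variable to the monthly means a forecast
# model ingests, producing a standardized model-ready input file.
import xarray as xr

ds = xr.open_dataset("lst_daily.nc")               # e.g. land surface temperature
monthly = ds["lst"].resample(time="1MS").mean()    # calendar-month means
monthly.to_netcdf("lst_monthly.nc")                # standardized model input
```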

How to cite: Ainscoe, E. A., Hofmann, B., Colon, F., Ferrario, I., Harpham, Q., James, S. J., Lumbroso, D., Malde, S., Moschini, F., and Tsarouchi, G.: Selection and integration of Earth Observation-based data for an operational disease forecasting system, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18428, https://doi.org/10.5194/egusphere-egu2020-18428, 2020.

D824 |
EGU2020-8937
Manuela Köllner, Mayumi Wilms, Anne-Christin Schulz, Martin Moritz, Katrin Latarius, Holger Klein, Kai Herklotz, and Kerstin Jochumsen

Reliable data are the basis for successful research and scientific publishing. Open data policies assure the availability of publicly financed field measurements to the public, and thus to all interested scientists. However, the variety of data sources and the availability or lack of detailed metadata mean a huge effort for each scientist in deciding whether the data are usable for their own research topic. Data end-user communities have different requirements regarding metadata detail and data handling during processing. For data-providing institutes or agencies, these needs are essential to know if they want to reach a wide range of end-user communities.

The Federal Maritime and Hydrographic Agency (BSH, Bundesamt für Seeschifffahrt und Hydrographie, Hamburg, Germany) collects a large variety of field data in physical and chemical oceanography, regionally focused on the North Sea, Baltic Sea, and North Atlantic. Data types vary from vertical profiles, time series, and underway measurements, acquired in real-time or delayed mode from moored or ship-based instruments. Along with other oceanographic data, the BSH provides all physical data via the German Oceanographic Data Center (DOD). It is crucial to aim for maximum reliability of the published data to enhance their usage, especially in the scientific community.

Here, we present our newly established data processing and quality control procedures using agile project management and workflow techniques, and outline their implementation in metadata and accompanying documentation. To enhance the transparency of data quality control, we will apply a detailed quality flag along with the common data quality flag. This detailed quality flag, established by Mayumi Wilms within the research project RAVE Offshore service (research at alpha ventus), enables data end-users to review the results of several individual quality control checks done during processing and thus to identify easily whether the data are usable for their research.
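
To make the idea concrete, a minimal sketch of how a detailed per-check flag can sit alongside a common aggregate flag (test names and bit layout are hypothetical, not the RAVE scheme itself):

```python
# Sketch: one bit per individual QC check, plus a collapsed common flag,
# so end-users can see *which* tests a value failed.
GRADIENT_OK = 0b001   # spike / gradient test passed
RANGE_OK    = 0b010   # climatological range test passed
STUCK_OK    = 0b100   # stuck-value test passed

def detailed_flag(gradient: bool, in_range: bool, not_stuck: bool) -> int:
    flag = 0
    if gradient:  flag |= GRADIENT_OK
    if in_range:  flag |= RANGE_OK
    if not_stuck: flag |= STUCK_OK
    return flag

def common_flag(detail: int) -> int:
    """Collapse to a common scheme: 1 = good, 3 = questionable, 4 = bad."""
    n_passed = bin(detail).count("1")
    return 1 if n_passed == 3 else (3 if n_passed == 2 else 4)
```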

How to cite: Köllner, M., Wilms, M., Schulz, A.-C., Moritz, M., Latarius, K., Klein, H., Herklotz, K., and Jochumsen, K.: Realizing Maximum Transparency of Oceanographic Data Processing and Data Quality Control for Different End-User Communities, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8937, https://doi.org/10.5194/egusphere-egu2020-8937, 2020.

D825 |
EGU2020-21975
| Highlight
Gerardo Bruque and Olvido Tello

In Europe, the Marine Strategy Framework Directive (MSFD) seeks to achieve a good environmental status of European marine waters and protect the resource base on which economic and social activities related to the sea depend. With this legislative tool the European Parliament recognizes the vital importance of the management of human activities that have an impact on the marine environment, integrating the concepts of environmental protection and sustainable use.
The MSFD establishes a monitoring program of different descriptors for continuous evaluation and periodic updating of the objectives. In Spain, the Ministry for the Ecological Transition (MITECO) is responsible for and coordinates implementation of the MSFD, but it is the Spanish Institute of Oceanography (IEO) that carries out the research and study of the different indicators, and therefore the tasks of collecting oceanographic data.
The Geographic Information Systems Unit of the IEO is responsible for storing, cleaning, and standardizing these data by including them in the IEO Spatial Data Infrastructure (IDEO). IDEO has useful and advanced tools to discover and manage the oceanographic data, spatial or non-spatial, that the IEO manages. To facilitate access to IDEO, the IEO Geoportal was developed, which essentially contains a metadata catalog and access to different IEO web services and data viewers.
Some examples of priority datasets for the MSFD are: species and habitat distribution, distribution of commercially exploited fish and shellfish species, nutrients, chlorophyll a, dissolved oxygen, spatial extent of loss of seabed, contaminants, litter, noise, etc.
The correct preparation and harmonization of the mentioned data sets following the Implementing Rules adopted by the INSPIRE Directive is essential to ensure that the different Spatial Data Infrastructures (SDI) of the member states are compatible and interoperable in the community context.
The INSPIRE Directive was born with the purpose of making relevant, harmonised, quality geographic information available in a way that supports the formulation, implementation, monitoring, and evaluation of European Union policies with an impact or territorial dimension.
The geographic data sets, together with their corresponding metadata, constitute the cartographic base on which the information collected for the update of the continuous evaluation of the different descriptors of the MSFD is structured.
Thus, although these datasets are intended for use by public institutions responsible for decision-making on the management of the marine environment, they can also be very useful for a wide range of stakeholders and reused for multiple purposes.
With all this in mind, the INSPIRE Directive is extremely interesting and essential for the tasks required by the MSFD, as it is for our projects related to the Maritime Spatial Planning Directive (MSP).

How to cite: Bruque, G. and Tello, O.: Managing oceanographic data for the Marine Strategy Framework Directive in Spain supported by the Spatial Data Infrastructure of the Spanish Institute of Oceanography (IEO) and the INSPIRE Directive, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21975, https://doi.org/10.5194/egusphere-egu2020-21975, 2020.

D826 |
EGU2020-9686
Regina Kwee, Tobias Weigel, Hannes Thiemann, Karsten Peters, Sandro Fiore, and Donatello Elia

This contribution highlights the Python xarray technique in the context of climate-specific applications (typical formats are NetCDF, GRIB, and HDF).

We will see how to use in-file metadata and why it is so powerful for data analysis, in particular by looking at community-specific problems; e.g., one can select purely on coordinate variable names. ECAS, the ENES Climate Analytics Service available at the Deutsches Klimarechenzentrum (DKRZ), will help by enabling faster access to the high-volume simulation data output from climate modeling experiments. In this respect, we can also make use of dask, which was developed for parallel computing and works smoothly with xarray. This is extremely useful when we want to fully exploit the advantages of our supercomputer.
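
A minimal sketch of the xarray/dask pattern described above (path, variable, and chunking are placeholders):

```python
# Sketch: open a CMIP-style dataset lazily in chunks, select by coordinate
# names alone, and let dask execute the chunked computation in parallel.
import xarray as xr

ds = xr.open_dataset("tas_Amon_model_historical.nc", chunks={"time": 120})
# label-based selection on coordinate variables, no index bookkeeping:
tropics = ds["tas"].sel(lat=slice(-23.5, 23.5))
clim = tropics.mean(dim="time").compute()  # dask runs the task graph here
```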

Our fully integrated service offers an interface via Jupyter notebooks (ecaslab.dkrz.de). We provide an analysis environment without the need for costly transfers, with access to CF-standardized data files, all accessible via the ESGF portal on our nodes (esgf-data.dkrz.de). We can analyse the data of, e.g., CMIP5, CMIP6, the Grand Ensemble, and observation data. ECAS was developed in the frame of the European Open Science Cloud (EOSC) hub.

How to cite: Kwee, R., Weigel, T., Thiemann, H., Peters, K., Fiore, S., and Elia, D.: Python-based Multidimensional and Parallel Climate Model Data Analysis in ECAS, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9686, https://doi.org/10.5194/egusphere-egu2020-9686, 2020.

D827 |
EGU2020-15961
Brenner Silva, Najmeh Kaffashzadeh, Erik Nixdorf, Sebastian Immoor, Philipp Fischer, Norbert Anselm, Peter Gerchow, Angela Schäfer, and Roland Koppe and the Computing and data center

The O2A (Observation to Archive) is a data-flow framework for heterogeneous sources, including multiple institutions and scales of Earth observation. In the O2A, once data transmission is set up, processes are executed to automatically ingest (i.e., collect and harmonize) and quality control data in near real time. We consider a web-based sensor description application to support transmission and harmonization of observational time-series data. We also consider a product-oriented quality control, where a standardized and scalable approach should integrate the diversity of sensors connected to the framework. A review of the literature and of observation networks of marine and terrestrial environments is underway to allow us, for example, to characterize quality tests in use for generic and specific applications. In addition, we use a standardized quality flag scheme to support both user and technical levels of information. In our outlook, a quality score should pair with the quality flag to indicate the overall plausibility of each individual data value or to measure the flagging uncertainty. In this work, we present concepts under development and give insights into the data ingest and quality control currently operating within the O2A framework.
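
As a sketch of the quality-score idea from the outlook (the test names and the simple pass-fraction scoring are hypothetical, not the O2A implementation):

```python
# Sketch: report, next to the flag, the fraction of applicable tests a
# value passed, as a simple plausibility measure in [0, 1].
def quality_score(results: dict[str, bool]) -> float:
    """results maps test name -> passed; NaN if no test was applicable."""
    if not results:
        return float("nan")
    return sum(results.values()) / len(results)

score = quality_score({"range": True, "spike": True, "gradient": False})
# score == 2/3: flagged "probably good", with a number attached to the doubt
```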

How to cite: Silva, B., Kaffashzadeh, N., Nixdorf, E., Immoor, S., Fischer, P., Anselm, N., Gerchow, P., Schäfer, A., and Koppe, R. and the Computing and data center: Automatic quality control and quality control schema in the Observation to Archive, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15961, https://doi.org/10.5194/egusphere-egu2020-15961, 2020.

D828 |
EGU2020-20814
Yanchen Bo

High-level satellite remote sensing products of the Earth surface play an irreplaceable role in global climate change research, hydrological cycle modeling, water resources management, and environment monitoring and assessment. Earth surface high-level remote sensing products released by NASA, ESA, and other agencies are routinely derived from a single remote sensor. Due to cloud contamination and limitations of retrieval algorithms, remote sensing products derived from a single remote sensor suffer from incompleteness, low accuracy, and limited consistency in space and time. Some land surface remote sensing products, such as soil moisture products derived from passive microwave remote sensing data, have too coarse a spatial resolution to be applied at the local scale. Fusion and downscaling are effective ways of improving the quality of satellite remote sensing products.

We developed a Bayesian spatio-temporal geostatistics-based framework for the fusion and downscaling of multiple remote sensing products. Compared to existing methods, the presented method has two major advantages. The first is that the method was developed in the Bayesian paradigm, so the uncertainties of the multiple remote sensing products being fused or downscaled can be quantified and explicitly expressed in the fusion and downscaling algorithms. The second advantage is that spatio-temporal autocorrelation is exploited in the fusion approach, so that more complete products can be produced by geostatistical estimation.
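
As a minimal illustration of the uncertainty-weighted fusion at the core of such a Bayesian approach (a sketch for two products with independent Gaussian errors; the actual framework additionally models spatio-temporal covariance):

```latex
% Two products y_1, y_2 observe the same truth x with error variances
% \sigma_1^2, \sigma_2^2; the posterior mean weights each by its precision,
% and the fused variance is smaller than either input's.
\hat{x} = \frac{y_1/\sigma_1^2 + y_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2},
\qquad
\operatorname{Var}(\hat{x}) = \left(\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}\right)^{-1}
```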

This method has been applied to the fusion of multiple satellite AOD products, multiple satellite SST products, and multiple satellite LST products, and to the downscaling of 25 km spatial resolution soil moisture products. The results were evaluated in terms of both spatio-temporal completeness and accuracy.

How to cite: Bo, Y.: Bayesian Spatio-temporal Geostatistics-based Method for Multiple Satellite Products Fusion and Downscaling, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20814, https://doi.org/10.5194/egusphere-egu2020-20814, 2020.

D829 |
EGU2020-8973
Alessandro Sozza, Massimo Cencini, Leonardo Parisi, Marco Acernese, Fabio Santoni, Fabrizio Piergentili, Stefania Melillo, and Andrea Cavagna

The monitoring of space debris and satellites orbiting the Earth is an essential topic in space surveillance. The impact of debris, even of small size, against active space installations causes serious damage, malfunctions, and potential service interruptions. Collision-avoidance maneuvers are often performed, but they require increasingly complex protocols. The density of space debris is now so high that even astronomical observations are often degraded by it. Although it does not affect space weather, it may interfere with weather satellites.
We have developed an innovative experimental technique based on stereometry at intercontinental scale to obtain simultaneous images from two optical observatories, installed in Rome (at the Urbe Airport and in Collepardo on the Apennines) and in Malindi (Kenya). From the observations on Earth, it is possible to reconstruct the three-dimensional position and velocity of the objects. The distance between the two observatories is crucial for an accurate reconstruction. In particular, we have used the sites of Urbe and Collepardo, with a baseline of 80 km, to detect objects in low Earth orbit (LEO), while we have used the 6000 km baseline between Urbe and Malindi to observe geostationary orbits (GEO).
We will present the validation of the three-dimensional reconstruction method via a fully synthetic procedure that propagates the satellite trajectory, using the SGP4 model and TLE data (provided by NASA), and generates synthetic photographs of the satellite passage from the two observatories. We will then compare the synthetic results with the experimental results obtained using real optical systems. The procedure can be automated to identify unknown space objects and even generalized to an arbitrary number of observation sites. The identified debris will be added to the DISCOS catalogue (Database and Information System Characterizing Objects in Space) owned by the European Space Agency (ESA) to improve space surveillance and the ability to intervene in case of potential risks.
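
For illustration, the TLE propagation step can be reproduced with the open-source sgp4 Python package; a minimal sketch (the two-line element set below is a placeholder):

```python
# Sketch: propagate a TLE with SGP4 to get position/velocity at an epoch.
from sgp4.api import Satrec, jday

line1 = "1 25544U 98067A   20100.00000000  .00001000  00000-0  23270-4 0  9990"
line2 = "2 25544  51.6400 200.0000 0005000  30.0000 330.0000 15.49000000    00"
sat = Satrec.twoline2rv(line1, line2)

jd, fr = jday(2020, 5, 6, 12, 0, 0)   # epoch: 2020-05-06 12:00:00 UTC
err, r, v = sat.sgp4(jd, fr)          # err == 0 on success
print(r, v)                           # km and km/s, TEME frame
```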

How to cite: Sozza, A., Cencini, M., Parisi, L., Acernese, M., Santoni, F., Piergentili, F., Melillo, S., and Cavagna, A.: Space debris monitoring based on inter-continental stereoscopic detections, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8973, https://doi.org/10.5194/egusphere-egu2020-8973, 2020.

D830 |
EGU2020-12012
Chengyi Li

For a country and for human society, keeping mine extraction controlled and rational is important and meaningful work. Otherwise, illegal mining and unreasonable abandonment will cause waste and loss of resources. Being convenient, cheap, and near-instantaneous, remote sensing technology makes it possible to monitor mining automatically and at large scale.

We propose a mining change detection framework based on multitemporal remote sensing images. In this framework, the status of a mine is classified as either mining in progress or mining stopped. Based on multitemporal GF-2 satellite data and mining records from Beijing, China, we have built a mining change dataset (the BJMMC dataset), which includes two types of change: from mining to mining, and from mining to discontinued mining. We then implement a new type of semantic change detection based on convolutional neural networks (CNNs), which involves intuitively inserting semantics into the detected change regions.
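
For illustration only (not the authors' network): a minimal Siamese-style CNN for bi-temporal change detection in PyTorch, encoding both dates with shared weights and classifying the feature difference per pixel:

```python
# Sketch: shared encoder for both dates, per-pixel change/class logits.
import torch
import torch.nn as nn

class TinyChangeNet(nn.Module):
    def __init__(self, n_classes: int = 3):  # e.g. no change / mining->mining / mining->stopped
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, n_classes, 1)       # 1x1 conv -> logits

    def forward(self, t1: torch.Tensor, t2: torch.Tensor) -> torch.Tensor:
        f1, f2 = self.encoder(t1), self.encoder(t2)   # shared weights
        return self.head(torch.abs(f1 - f2))          # feature difference

net = TinyChangeNet()
logits = net(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 3, 256, 256])
```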

We applied our method to monitoring mining in the Beijing area in another year, and, combined with GIS data and field work, the results show that our proposed monitoring method performs strongly on the BJMMC dataset.

How to cite: Li, C.: Automatic Monitoring of Mines Mining based on Multitemporal Remote Sensing Image Change Detection, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12012, https://doi.org/10.5194/egusphere-egu2020-12012, 2020.

D831 |
EGU2020-16632
Elnaz Neinavaz, Andrew K. Skidmore, and Roshanak Darvishzadeh

Precise estimation of land surface emissivity (LSE) is essential to predict land surface energy budgets and land surface temperature, as LSE is an indicator of material composition. Several approaches to LSE estimation employ remote sensing data; however, the prediction of LSE remains a challenging task. Among the existing approaches, the NDVI threshold method appears to hold well over vegetated areas. To apply the NDVI threshold method, it is necessary to know the proportion of vegetation cover (Pv). This research investigates the impact of the prediction accuracy of Pv on the estimation of LSE over a forest ecosystem. To this end, a field campaign coinciding with a Landsat-8 overpass was undertaken in the mixed temperate forest of the Bavarian Forest National Park in southeastern Germany. In situ measurements of Pv were made for 37 plots. Four vegetation indices, namely NDVI, the variable atmospherically resistant index, the wide dynamic range vegetation index, and the three-band gradient difference vegetation index, were applied to predict Pv for further use in LSE computation. Unlike previous studies, which suggested that the variable atmospherically resistant index estimates Pv with higher prediction accuracy than NDVI over agricultural areas, our results showed that the prediction accuracy is no different when using NDVI over the forest (R2CV = 0.42, RMSECV = 0.06). Pv was predicted with the lowest accuracy using the wide dynamic range vegetation index (R2CV = 0.014, RMSECV = 0.197) and the three-band gradient difference vegetation index (R2CV = 0.032, RMSECV = 0.018). The results of this study also revealed that variation in the prediction accuracy of Pv has an impact on the results of the LSE calculation.
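
For reference, a minimal sketch of the NDVI threshold method's two steps (the endmember NDVI and emissivity values below are illustrative placeholders; the cavity term is omitted):

```python
# Sketch: estimate Pv from NDVI (Carlson & Ripley-style scaling), then mix
# soil and vegetation emissivities in proportion to Pv.
import numpy as np

def pv_from_ndvi(ndvi, ndvi_soil=0.2, ndvi_veg=0.86):
    pv = ((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)) ** 2
    return np.clip(pv, 0.0, 1.0)

def lse_ndvi_threshold(ndvi, eps_veg=0.985, eps_soil=0.96):
    pv = pv_from_ndvi(np.asarray(ndvi, dtype=float))
    return eps_veg * pv + eps_soil * (1.0 - pv)   # cavity term omitted

print(lse_ndvi_threshold([0.3, 0.6, 0.9]))
```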

How to cite: Neinavaz, E., Skidmore, A. K., and Darvishzadeh, R.: Estimation of Vegetation Proportion Cover to Improve Land Surface Emissivity, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16632, https://doi.org/10.5194/egusphere-egu2020-16632, 2020.