Data science, analytics, and visualization technologies and methods are emerging as significant capabilities for extracting insight from the ever-growing volume and complexity of scientific data. The rapid advancement of these capabilities helps address a number of challenges and presents new opportunities for improving Earth and space science data usability. This session will highlight and discuss the novelty and strengths of these emerging fields and technologies, and their trends. We invite papers and presentations that examine and share experience of:
- What benefits they offer to Earth and Space Science
- What science research challenges they address
- How they help transform science data into information and knowledge
- In what ways they can advance scientific research
- What lessons were learned in the development and infusion of these methods and technologies
Chat time: Friday, 8 May 2020, 14:00–15:45
The Methane Source Finder is a web-based data portal developed under NASA’s CMS and ACCESS programs for exploring methane data in the state of California. This open-access interactive map allows users to discover, analyze, and download data across a range of spatial scales derived from remote sensing, surface monitoring, and bottom-up infrastructure information. This includes methane plume images and associated emission estimates derived from the 2016-2018 California Methane Survey using the airborne imaging spectrometer AVIRIS-NG. The fine spatial resolution (typically 3 m) AVIRIS-NG products, when combined with the Vista infrastructure database of over 270,000 components statewide, permit direct attribution of emissions to individual point source locations. These point source products have benefited from evaluation and feedback from state and local agencies and private sector companies, and in some cases were used to directly guide leak detection and repair efforts. Additional data layers at local and regional scales provide context for point source emissions. These include methane flux inversions for the Los Angeles basin derived from surface observations and tracer transport modeling (3 km, 4-day resolution) as well as the CMS US methane gridded inventory (10 km, monthly resolution) over the state of California.
How to cite: Thorpe, A., Duren, R., Tapella, R., Bue, B., Foster, K., Yadav, V., Rafiq, T., Hopkins, F., Gill, K., Rodriguez, J., Plave, A., Cusworth, D., and Miller, C.: Methane Source Finder: A web-based data portal for exploring methane data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9923, https://doi.org/10.5194/egusphere-egu2020-9923, 2020.
Because of the limited coverage of receiver stations, current measurements of Total Electron Content (TEC) by ground-based GNSS receivers are incomplete, with large data gaps. Producing complete TEC maps for space science research is time consuming and requires collaboration among the five International GNSS Service (IGS) Ionosphere Associate Analysis Centers (IAACs), which use different data processing and gap-filling algorithms and consolidate their results into the final IGS completed TEC maps. In this work, we developed a Deep Convolutional Generative Adversarial Network with a Poisson blending model (DCGAN-PB) to learn the IGS completion process and complete TEC maps automatically. In 10-fold cross validation on 20 years of IGS TEC data, DCGAN-PB achieves an average root mean squared error (RMSE) of about 4 absolute TEC units (TECu) for high solar activity years and around 2 TECu for low solar activity years, roughly a 50% reduction in RMSE for recovered TEC values compared to two conventional single-image inpainting methods. The developed DCGAN-PB model can thus serve as an efficient tool for automatic completion of TEC maps.
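As a hedged illustration of the evaluation metric reported above, the sketch below computes RMSE in TECu over only the originally missing pixels of a TEC map. The grid size, gap fraction, and naive mean-fill baseline are illustrative assumptions, not the DCGAN-PB model itself.

```python
import numpy as np

def rmse_over_gaps(truth, recovered, gap_mask):
    """RMSE (in TECu) computed only over the originally missing pixels."""
    diff = (truth - recovered)[gap_mask]
    return float(np.sqrt(np.mean(diff ** 2)))

# toy 71x73 TEC map (IGS maps use a 2.5° x 5° lat/lon grid)
rng = np.random.default_rng(0)
truth = 20 + 10 * rng.random((71, 73))      # synthetic TEC values, TECu
gap_mask = rng.random((71, 73)) < 0.3       # ~30% of pixels missing
# naive baseline: fill gaps with the mean of the observed pixels
recovered = truth.copy()
recovered[gap_mask] = truth[~gap_mask].mean()
baseline = rmse_over_gaps(truth, recovered, gap_mask)
```

A learned completion model would be compared against such a baseline by applying the same masked-RMSE metric to its recovered maps.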
How to cite: Jin, M., Pan, Y., Zhang, S., and Deng, Y.: Development and investigation of a deep learning based method for TEC map completion, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11541, https://doi.org/10.5194/egusphere-egu2020-11541, 2020.
Three global-scale areas containing modern reef sites were selected to explore the automatic extraction of metrics for further study of the relationships between reef morphology and the surrounding oceanographic conditions; these metrics can potentially be used as input parameters for training images in Multiple Point Statistics simulations.
Obtaining geometric features from satellite images manually is a laborious task. Automatic geometric feature detection, however, is a challenging problem due to the varying lighting, orientation, and background of the target object, especially when analyzing raw images in RGB format. In this work, a robust algorithm written in Python is presented to automatically estimate the geometric properties of a set of coral reef islands located in South East Asia. First, the code loads satellite imagery in RGB format from a specified folder; each raw coral reef island image is then resized, converted from RGB to grayscale, smoothed, and binarized using the Open Source Computer Vision Library (OpenCV) available in Python 3. The island edge contains prominent geometric attributes that characterize its behavior, so morphological transformations were applied to define the contour of each island. Structural analysis and shape descriptors were then computed for the set of images in order to quantify the characteristics of each island. In total, 27 satellite images were processed successfully by the algorithm; only two images were not segmented correctly, because their illumination and the intensity of their predominant colors, especially blue, differed from the rest of the images. The resulting dataset was exported from Python to Microsoft Excel spreadsheets and CSV format.
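The processing chain described above (grayscale conversion, smoothing, binarization, contour metrics) can be sketched with numpy alone; the synthetic disc "island", the 3x3 box kernel, and the mean-based threshold below are illustrative assumptions standing in for the authors' OpenCV pipeline.

```python
import numpy as np

def segment_island(rgb):
    """Grayscale -> smooth -> binarize -> shape metrics (numpy-only sketch)."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])      # RGB to grayscale
    # 3x3 box blur as a stand-in for OpenCV smoothing
    k = np.ones((3, 3)) / 9.0
    pad = np.pad(gray, 1, mode="edge")
    smooth = sum(pad[i:i + gray.shape[0], j:j + gray.shape[1]] * k[i, j]
                 for i in range(3) for j in range(3))
    # global threshold (stand-in for Otsu binarization)
    mask = smooth > smooth.mean()
    area = int(mask.sum())
    # perimeter: island pixels with at least one background 4-neighbour
    inner = mask[1:-1, 1:-1]
    nb = mask[:-2, 1:-1] & mask[2:, 1:-1] & mask[1:-1, :-2] & mask[1:-1, 2:]
    perimeter = int((inner & ~nb).sum())
    return {"area": area, "perimeter": perimeter}

# synthetic "island": bright disc on dark water
yy, xx = np.mgrid[:64, :64]
img = np.zeros((64, 64, 3))
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = [0.9, 0.8, 0.6]
metrics = segment_island(img)
```

In the real pipeline, OpenCV contour extraction and shape descriptors would replace the hand-rolled area and perimeter counts, and the per-image metrics would be appended to a table for CSV export.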
How to cite: Jimenez Soto, G., Arshad Beg, M., Poppelreiter, M. C., and Rahmatsyah, K.: Automatic extraction of modern reefs satellite images geometries using Computer Vision, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13893, https://doi.org/10.5194/egusphere-egu2020-13893, 2020.
Operational Earth Observation (EO) satellite missions are entering their fifth decade, and the need to access historical data has strongly increased, particularly for long-term science and environmental monitoring applications. This demand for long time series of data will grow even further, particularly given the interest in assessing and monitoring global change to support policymakers' decisions on safeguarding the atmosphere, ocean, cryosphere, carbon, and other biogeochemical cycles.
The Copernicus initiative (https://www.copernicus.eu) is playing a unique and unprecedented role in terms of the amount, relevance, and quality of the environmental data it provides. Within the activities funded by the European Commission, the Data and Information Access Services (DIAS) are operated by five different consortia to acquire, process, archive, and distribute data from Copernicus and Third-Party Missions.
With this enormous availability of past, present, and future geospatial environmental data, users need to be able to identify the datasets that best fit their needs and to obtain these data in the fastest and easiest way possible. The Advanced geospatial DAta Management (ADAM) platform (https://adamplatform.eu/) provides discovery, access, processing, and visualization services for data in a distributed cloud environment, significantly lowering the barrier to data use.
ADAM allows the exploitation of EO data archives spanning from a few years to several decades, and therefore makes their continuously increasing scientific value fully accessible. Advances in satellite sensor characteristics (spatial resolution, temporal frequency, spectral bands) as well as in related technical aspects (data and metadata formats, storage, infrastructures) underline the strong need to preserve EO space data without time constraints and to keep them accessible and exploitable, as they constitute an asset for humankind. This is a typical big data challenge that ADAM is designed to address.
This paper describes the ADAM platform and various application domains supported with its data science analytics and visualization capabilities.
How to cite: Mantovani, S., Natali, S., Folegani, M., Cavicchi, M., Barboni, D., and Ferraresi, S.: The ADAM platform, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17707, https://doi.org/10.5194/egusphere-egu2020-17707, 2020.
The Intellectualized Analysis Meteorological Platform is implemented on the Meteorology Open Application Platform (MOAP3.0), which was developed by the National Meteorological Centre of the China Meteorological Administration. This web-based visualization and analysis platform integrates statistical analysis, intelligent interaction, and rendering methods, and uses a decoupled development mode. Its web server is deployed on a distributed cloud framework, which supports real-time, interactive, and visual analysis of massive meteorological data. The data include national meteorological observation data, national guidance forecast data (0.05° x 0.05°), forecast areas, MICAPS data, etc. The platform has been in operation since December 2019; continual testing indicates that the entire system is stable and reliable, with second-level response times for data transmission. The system adopts a design of "one-key linkage" and "step-by-step drill-down analysis in a spatio-temporal context", provides three main home pages ("disaster analysis", "meteorological big data for living analysis", and "station climate background analysis"), exposes 36 standard interfaces, and comprises 21 independent functional modules. In the spatial dimension, meteorological data cascade through six levels: observing stations, grid data, cities, river basins, regional meteorological centers, and the national meteorological center. In the time dimension, linked analysis of minute, hourly, daily, ten-day, monthly, and annual values completes the full time chain. Integrated analysis of history, current conditions, and forecasts across China is thus realized on the basis of station climate backgrounds and a relatively well-developed spatio-temporal meteorological data mining and analysis system.
Specifically, the basic meteorological algorithms in the system backend include regional averaging, counting precipitation days by magnitude, historical extremes of single meteorological elements, spatial interpolation, rain-area analysis, etc. The web visualization functions include online rendering of weather maps, integrated spatio-temporal display of multiple meteorological elements, and color-scale classification and filtering. To address the problem of dense station distribution in daily operations, station-thinning and hierarchical rendering strategies are used for optimization. In conclusion, the system uses China's standardized tile-based electronic map as the carrier, displays and supports interaction with massive meteorological data of various dimensions and types, and delivers a complete real-time sequence analysis product that will play an important role in the practical application of forecasting and early warning in Chinese meteorology.
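As an illustration of two of the basic backend algorithms named above (regional averaging and counting precipitation days by magnitude), a minimal numpy sketch with synthetic station data and assumed magnitude thresholds might look like:

```python
import numpy as np

# daily precipitation (mm) at 5 stations over 10 days (synthetic data)
rng = np.random.default_rng(1)
precip = rng.gamma(shape=0.5, scale=8.0, size=(10, 5))

# regional average: mean over stations for each day
regional_mean = precip.mean(axis=1)

# precipitation station-days by magnitude class (thresholds are assumptions)
light_days = int(np.sum((precip >= 0.1) & (precip < 10.0)))   # light rain
heavy_days = int(np.sum(precip >= 25.0))                      # heavy rain
```

A production backend would apply the same reductions over gridded or station data selected by the spatial level (station, city, basin, region, nation) chosen in the interface.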
How to cite: Song, W. and Zhengguang, H.: Meteorology Open Application Platform (MOAP3.0) -- the Application and Implementation of Brand-new Intelligent Analysis Meteorological Platform, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22386, https://doi.org/10.5194/egusphere-egu2020-22386, 2020.
Currently, a myriad of geospatial technologies, geovisual techniques, and data sources are available on the market both for data collection and geovisualization: from drones, LiDAR, multispectral satellite imagery, “big data”, 360-degree cameras, smartphones, smartwatches, and web-based mobile maps to virtual reality and augmented reality. These technologies are becoming progressively easier to use due to improved computing power and accessible application programming interfaces. These advances, combined with dropping prices, mean that there are increasing opportunities to collect more data from heterogeneous populations as well as to communicate ideas to them. This offers seemingly limitless opportunities for anyone collecting and disseminating geospatial data. When data are aggregated and processed, they become information. To communicate this information effectively and efficiently, geovisualizations can be utilized. The aim of geovisualization is to interactively reveal spatial patterns that may otherwise go unnoticed. Much excitement surrounds each of these geospatial technologies, which offer increased opportunities to communicate geospatial phenomena in a stimulating manner through various geovisualization techniques and interfaces. The challenge is that it also takes very little effort to make geovisualizations that may be visually attractive but do not communicate anything. With so many accessible geospatial technologies available, a common and important question persists: What geospatial technologies and geovisualization techniques are best suited to collect and communicate geospatial data?
The answer to this question will vary based on the phenomena being examined, the geospatial data available and the communication goals. Here I present a taxonomy of geospatial technologies and geovisualization techniques, identifying their strengths and weaknesses for data collection and geospatial information communication. The aim of this taxonomy is to act as a decision support tool, to help researchers make informed decisions about what technologies to incorporate into a research project. With so many different technologies available, what should a researcher consider before they pick which platform to use to communicate important findings? More explicitly, how can specific geospatial technologies help transform scientific data into information and subsequent knowledge?
Included in this taxonomy are data collection tools and cartographic interface tools. The taxonomy is informed by literature from a cross-section of disciplines ranging across cartography, spatial media, communication, geographic scale, spatial cognition, human-computer interaction, and user experience research. These bodies of literature are presented and woven together to synthesize the strengths and weaknesses of different geospatial technologies for data collection/entry and spatial information communication. Additionally, key considerations for achieving effective communication are presented, namely identifying the intended use and intended users in order to best meet communication goals. To illustrate key points, indicator data from the United Nations Sustainable Development Goals are used. The aim here is to offer recommendations on how to best identify and apply appropriate technology for data collection and geovisualization, in an effort to reduce the number of frivolous, confusing, and ugly maps available online.
How to cite: Ricker, B.: Geospatial technologies and tools for data collection and communication: A Taxonomy , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9008, https://doi.org/10.5194/egusphere-egu2020-9008, 2020.
The Sun acts constantly on Earth, influencing life on our planet in different ways. The physical, chemical, and biological processes that occur on Earth are directly influenced by variations in solar irradiance, which is a function of activity in the Sun’s different atmospheric layers and its rapid variation. Studying this relationship may require a large amount of collected data, without the significant gaps that can be caused by many kinds of issues. In this work, we present a Recurrent Neural Network as an option for estimating the variability of the Total Solar Irradiance (TSI) and the Spectral Solar Irradiance (SSI). Solar images collected at different wavelengths were preprocessed and used as input parameters, and TSI and SSI data collected by instruments onboard SORCE were used as the reference against which the results were evaluated. Complementary to this approach, we opted to develop a reproducible procedure, choosing a free programming language so that future studies can reproduce our procedure and obtain the same kind of results with the same accuracy. To this end, reproducible notebooks will be generated to provide transparency in the data analysis process and to allow the process and results to be validated, modified, and optimized by others. This approach aims at good accuracy in estimating the TSI and SSI, allowing their reconstruction across gaps and also the forecasting of their values six hours ahead.
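To make the recurrent approach concrete, here is a minimal numpy sketch of an Elman-style recurrent cell mapping a sequence of image-derived features to an irradiance estimate. The layer sizes, feature counts, output dimensions, and random (untrained) weights are all illustrative assumptions, not the authors' trained network.

```python
import numpy as np

rng = np.random.default_rng(2)

def rnn_forward(x_seq, Wx, Wh, Wo, h0):
    """Elman cell: h_t = tanh(Wx x_t + Wh h_{t-1}); output y = Wo h_T."""
    h = h0
    for x_t in x_seq:
        h = np.tanh(Wx @ x_t + Wh @ h)
    return Wo @ h

n_in, n_hid = 4, 8                       # e.g. 4 image-derived features/step
Wx = rng.normal(0, 0.3, (n_hid, n_in))   # input weights (untrained)
Wh = rng.normal(0, 0.3, (n_hid, n_hid))  # recurrent weights (untrained)
Wo = rng.normal(0, 0.3, (2, n_hid))      # outputs: TSI + one SSI band (assumed)
x_seq = rng.normal(size=(24, n_in))      # 24 time steps of input features
y = rnn_forward(x_seq, Wx, Wh, Wo, np.zeros(n_hid))
```

Training would fit the weight matrices against SORCE TSI/SSI references; the same forward pass then fills gaps or forecasts six hours ahead by feeding in the preceding feature sequence.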
How to cite: Muralikrishna, A., Santos, R., and Vieira, L. E.: A reproducible solar irradiance estimation process using Recurrent Neural Network, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1130, https://doi.org/10.5194/egusphere-egu2020-1130, 2020.
Solar flares are often associated with high-intensity radio emission known as 'solar radio bursts' (SRBs). SRBs are generally observed in dynamic spectra and fall into five major spectral classes, labelled type I to type V depending on their shape and extent in frequency and time. Due to their morphological complexity, a challenge in solar radio physics is the automatic detection and classification of such radio bursts. Classification of SRBs has become necessary in recent years due to the large data rates (3 Gb/s) generated by advanced radio telescopes such as the Low Frequency Array (LOFAR). Here we test the ability of several supervised machine learning algorithms to automatically classify type II and type III solar radio bursts. We test the detection accuracy of support vector machines (SVM) and random forests (RF), as well as implementations of transfer learning with the Inception and YOLO convolutional neural networks (CNNs). The training data were assembled from type II and III bursts observed by the Radio Solar Telescope Network (RSTN) from 1996 to 2018, supplemented by simulated type II and III radio bursts. The CNNs were the best performers, often exceeding 90% accuracy on the validation set, with YOLO also able to localise radio bursts within dynamic spectra. This shows that machine learning algorithms (in particular CNNs) are capable of SRB classification, and we conclude by discussing future plans for the implementation of a CNN in the LOFAR for Space Weather (LOFAR4SW) data-stream pipelines.
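The supervised classifiers above learn discriminative features from data; as a toy illustration of the morphology they exploit (type III bursts drift rapidly in frequency, type II bursts slowly), the hand-crafted drift-rate rule below separates two synthetic dynamic spectra. The burst models, grid sizes, and threshold are assumptions for illustration, not the method of this work.

```python
import numpy as np

def drift_rate(spec):
    """Frequency-drift feature: slope of the per-time-step peak frequency bin."""
    peaks = spec.argmax(axis=0)            # frequency bin of max power per step
    t = np.arange(len(peaks))
    return np.polyfit(t, peaks, 1)[0]      # bins per time step

def classify(spec, threshold=1.0):
    """Fast drift -> type III, slow drift -> type II (toy decision rule)."""
    return "III" if abs(drift_rate(spec)) > threshold else "II"

# synthetic dynamic spectra: 50 frequency bins x 20 time steps
freq, time = np.mgrid[0:50, 0:20]
type3 = np.exp(-((freq - 2.4 * time) ** 2) / 8.0)         # fast-drifting burst
type2 = np.exp(-((freq - 0.3 * time - 10) ** 2) / 8.0)    # slow-drifting burst
```

An SVM or CNN replaces such a single hand-picked feature and threshold with many learned ones, which is what allows it to cope with real, noisy RSTN and LOFAR spectra.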
How to cite: Carley, E.: Using supervised machine learning to automatically detect type II and III solar radio bursts, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5109, https://doi.org/10.5194/egusphere-egu2020-5109, 2020.
NASA’s Solar System Treks program produces a suite of interactive visualization and AI/data science analysis tools. These tools enable mission planners, planetary scientists, and engineers to access geospatial data products derived from big data returned from a wide range of instruments aboard a variety of past and current missions, for a growing number of planetary bodies.
The portals provide easy-to-use tools for browsing and searching, and the ability to overlay a growing range of value-added data products. Data products can be viewed in 2D, 3D, and VR, and can easily be stacked and blended together for optimal visualization. Datasets can be plotted and compared against each other. Standard gaming and 3D mouse controllers allow users to maneuver first-person visualizations, flying across planetary surfaces.
The portals provide a set of advanced analysis tools that employ AI and data science methods. The tools facilitate measurement and study of terrain, including the distance, height, and depth of surface features. They allow users to perform analyses such as lighting and local hazard assessments, including slope, surface roughness, crater/boulder distribution, rockfall distribution, and surface electrostatic potential. These tools facilitate a wide range of activities, including the planning, design, development, test, and operations associated with lunar sortie missions; robotic (and potentially crewed) operations on the surface; planning tasks in the areas of landing site evaluation and selection; design and placement of landers and other stationary assets; design of rovers and other mobile assets; developing terrain-relative navigation (TRN) capabilities; deorbit/impact site visualization; and assessment and planning of science traverses. Additional tools useful for scientific research, such as line-of-sight calculation, are under development.
Seven portals are publicly available to explore the Moon, Mars, Vesta, Ceres, Titan, IcyMoons, and Mercury with more portals in development and planning stages.
This presentation will provide an overview of the Solar System Treks and highlight its innovative visualization and analysis capabilities that advance scientific discovery. The information system and science communities are invited to provide suggestions and requests as the development team continues to expand the portals’ tool suite to maximize scientific research.
Lastly, the authors would like to thank the Planetary Science Division of NASA’s Science Mission Directorate, NASA’s SMD Science Engagement and Partnerships, the Advanced Explorations Systems Program of NASA’s Human Exploration Operations Directorate, and the Moons to Mars Mission Directorate for their support and guidance in the development of the Solar System Treks.
How to cite: Law, E. and Day, B. and the Solar System Treks Project Team: Innovative visualization and analysis capabilities to advance scientific discovery, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-127, https://doi.org/10.5194/egusphere-egu2020-127, 2020.
Several recent papers have investigated different challenges in applying machine learning (ML) techniques to Earth science problems. The challenges listed range from interpretability of the results to computational demand to data issues. In this paper, we focus on specific challenges listed in the review papers that are centered around training data, as the size of training data is important in applying deep learning (DL) techniques. We are in the process of conducting a literature survey to better understand these challenges as well as to understand any trends. As part of this survey, our review has encompassed Earth science papers from AGU, AMS, IEEE and SPIE journals covering the last ten years and focused on papers that utilize supervised ML techniques.
Our initial survey results show some interesting findings. The use of supervised machine learning techniques in Earth science research has increased significantly in the last decade. The number of atmospheric science papers (i.e., from AMS journals) using ML approaches has increased by over 40%. Across all of Earth science even larger changes have occurred, including a >90% increase in AGU papers and a >10-fold increase in IEEE papers using ML.
We also conducted a deep dive into all the papers from AGU journals and uncovered interesting findings. There is a prevalence of the use of supervised ML in certain sub-disciplines within Earth science. The biogeoscience and land surface research communities lead in this area: over 20% of papers published in Global Biogeochemical Cycles, JGR Biogeosciences, JGR Earth Surface, and Water Resources Research use supervised ML techniques, including over 35% of the papers in JGR Biogeosciences. The limited availability of labeled training data in Earth science is reflected in the number of training samples used in supervised analysis. In the papers we surveyed, most ML algorithms were trained using small samples (i.e., hundreds of labeled examples). However, for some applications using model output or large, established datasets, the number of training samples was several orders of magnitude greater.
In this presentation, we will describe our findings from the literature survey. We will also list recommendations for the science community to address the existing challenges around training data.
How to cite: Ramasubramanian, M., Virts, K., Shirey, A., Kumar, A., Hassan, M., Acharya, A., Ramachandran, R., and Manil, M.: Surveying the Machine Learning Landscape in Earth Sciences, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6077, https://doi.org/10.5194/egusphere-egu2020-6077, 2020.
Earth science researchers typically use event (an instance of an Earth science phenomenon) data for case study analysis. However, Earth science data search systems are currently limited to specifying a query parameter that includes the space and time of an event. Such an approach results in researchers spending a considerable amount of time sorting through data to conduct research studies on events. With the growing data volumes, it is imperative to investigate data-driven approaches to address this limitation in the data search system.
We describe several contributions towards alternative ways to accelerate event-based studies from large data archives.
The first contribution is the use of a machine learning-based approach, an enabling data-driven technology, to detect Earth science events in image archives. Specifically, the development of deep learning models to detect various Earth science phenomena is discussed. Deep learning comprises machine learning algorithms that consist of multiple layers, where each layer performs feature detection. We leverage recent advancements in deep learning techniques, mostly convolutional neural networks (CNNs), that have produced state-of-the-art image classification results in many domains.
The second contribution is the development of an event database and a phenomena portal. The phenomena portal utilizes the deep learning detected events cataloged in an event database. The portal provides a user interface with several features including display of events of the day, spatio-temporal characteristics of events, and integration of user feedback.
The third contribution is the development of a cloud-native framework to automate and scale the deep learning models in a production environment.
The paper also discusses the challenges in developing an end-to-end Earth science machine learning project and possible approaches to address those challenges.
How to cite: Maskey, M., Ramachandran, R., Gurung, I., Ramasubramanian, M., Kaulfus, A., Priftis, G., Freitag, B., Bollinger, D., Mestre, R., and da Silva, D.: Earth science phenomena portal: from deep learning-based event detection to visual exploration, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10435, https://doi.org/10.5194/egusphere-egu2020-10435, 2020.
An estimated 70% of the world’s poorest people live in rural spaces. There is a consistent differentiation between rural and urban contexts, where the former are typically characterised by weak infrastructure, limited services and social marginalisation. At the same time, the world’s poorest people are most vulnerable to global change impacts. Historic pathways to measuring and achieving poverty reduction must be adapted for an era of increasingly dynamic change, where spatio-temporal blind spots preclude a comprehensive understanding of poverty and its manifestation in rural developing contexts. To catalyse an effective poverty eradication narrative, we require a characterisation of the spatio-temporal anatomy of poverty metrics. To achieve this, researchers and practitioners must develop tools and mobilise data sources that enable the detection and visualisation of economic and social dimensions of rural spaces at finer temporal and spatial scales than is currently practised. This can only be realised by integrating new technologies and non-traditional sources of data alongside conventional data to engender a novel policy landscape.
Cue Earth Observation: the only medium through which data can be gathered that is global in its coverage but also available across multiple temporal and spatial scales. Earth Observation (EO) data (collected from satellite, airborne and in-situ remote sensors) have a demonstrable capacity to inform, update, situate and provide the necessary context to design evidence-based policy for sustainable development. This is particularly important for the Sustainable Development Goals (SDGs) because the nested indicators are based on data that can be visualised, and many have a definitive geospatial component, which can improve national statistics reporting.
In this review, we present a rubric for integrating EO and geospatial data into rural poverty analysis. This aims to provide a foundation from which researchers at the interface of social-ecological systems can unlock new capabilities for measuring economic, environmental and social conditions at the requisite scales and frequency for poverty reporting and also for broader livelihoods and development research. We review satellite applications and explore the development of EO methodologies for investigating social-ecological conditions as indirect proxies of rural wellbeing. This is nested within the broader sustainable development agenda (in particular the SDGs) and aims to set out what our capabilities are and where research should be focused in the near-term. In short, elucidating to a broad audience what the integration of EO can achieve and how developing social-ecological metrics from EO data can improve evidence-based policymaking.
Key words: Earth Observation; Poverty; Livelihoods; Sustainable Development Goals; Remote Sensing
How to cite: Hargreaves, P. and Watmough, G.: Using Earth Observations to engender a social-ecological systems perspective on rural livelihoods and wellbeing., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11222, https://doi.org/10.5194/egusphere-egu2020-11222, 2020.
The Europlanet-2020 programme, which ended in August 2019, included an activity called VESPA (Virtual European Solar and Planetary Access), which focused on adapting Virtual Observatory (VO) techniques to handle Planetary Science data. We will present some aspects of VESPA at the end of this 4-year development phase and at the onset of the newly selected Europlanet-2024 programme, which starts in February 2020. VESPA currently distributes 54 data services which are searchable according to observing conditions and encompass a wide scope including surfaces, atmospheres, magnetospheres and planetary plasmas, small bodies, heliophysics, exoplanets, and lab spectroscopy. Versatile online visualization tools have been adapted for Planetary Science, and efforts were made to connect the Astronomy VO with related environments, e.g., GIS for planetary surfaces. The new programme will broaden and secure the former “data stewardship” concept, providing a handy solution to Open Science challenges in our community. It will also move towards a new concept of “enabling data analysis”: a run-on-demand platform will be adapted from another H2020 programme in Astronomy (ESCAPE); VESPA services will be made ready to use for Machine Learning and geological mapping activities, and will also host selected results from such analyses. More tutorials and practical use cases will be made available to facilitate access to the VESPA infrastructure.
VESPA portal: http://vespa.obspm.fr
The Europlanet 2020/2024 Research Infrastructure projects have received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements No 654208 and No 871149.
How to cite: Erard, S., Cecconi, B., Le Sidaner, P., Rossi, A. P., Rothkaehl, H., and Capria, T.: Planetary Science Virtual Observatory: VESPA/Europlanet outcome and prospects, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17705, https://doi.org/10.5194/egusphere-egu2020-17705, 2020.
GOCI-II (Geostationary Ocean Color Imager II), the successor of GOCI, will be launched in February 2020, and a ground system for it has been under development since 2015. New tools are also needed for the scientific analysis and exploitation of GOCI-II data. GDPS (GOCI Data Processing System), the data analysis tool for the existing GOCI, has some limitations: it only works on Windows, and a great deal of effort is required to develop and improve its functions for analyzing and processing GOCI data. To address these problems, we are developing a GOCI-II Toolbox (GTBX) based on SNAP (SeNtinel Application Platform), a widely used software platform that evolved from the ESA BEAM/NEST architecture and inherits all current NEST functionality. The GOCI Level-1B and Level-2 file formats are binary and HDF-EOS5, respectively, while the GOCI-II Level-1B and Level-2 file format is NetCDF. The GTBX provides visualization and analysis of GOCI/GOCI-II data, as well as a GOCI-II Level-2 processor for ocean color products, including atmospheric correction and application products for ocean, atmosphere and land. Furthermore, the GTBX extends the SNAP product library to display Thematic Realtime Environmental Distributed Data Services (THREDDS) catalogs of GOCI/GOCI-II data and provides remote access to partial data using the Open-source Project for a Network Data Access Protocol (OPeNDAP).
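As an illustration of the partial remote access that OPeNDAP enables, the sketch below builds an OPeNDAP constraint-expression URL that requests only an index window of one variable rather than the whole granule. The host, dataset path, and variable name are hypothetical, not actual GTBX endpoints:

```python
from urllib.parse import quote

def opendap_subset_url(base_url, variable, slices):
    """Build an OPeNDAP URL with a constraint expression requesting only
    part of a variable, e.g. a spatial window of a GOCI-II granule.

    slices: list of (start, stride, stop) index triples, one per dimension
    (stop is inclusive, following DAP convention).
    """
    constraint = variable + "".join(
        f"[{start}:{stride}:{stop}]" for start, stride, stop in slices
    )
    return f"{base_url}.dods?{quote(constraint, safe='[]:')}"

# Hypothetical GOCI-II dataset and variable names, for illustration only.
url = opendap_subset_url(
    "https://example.org/thredds/dodsC/GOCI2/L2_chl.nc",
    "chlor_a",
    [(0, 1, 499), (1000, 1, 1499)],
)
```

A client that speaks DAP would then download only the requested index window, which is what makes remote access to partial data practical for large geostationary scenes.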
In the GOCI-II Level-2 processor, algorithms are implemented in Python and C/C++, and each algorithm application is distributed as a Docker image, so it can run in any environment that supports Docker (e.g., Windows, Linux and macOS). In addition, we introduce parallel processing methods suitable for each application. In computing environments that support the Open Multi-Processing (OpenMP), Open Computing Language (OpenCL) and Compute Unified Device Architecture (CUDA) libraries, users of GOCI-II data can take advantage of the powerful computing resources of multi-core CPUs and GPUs, making it possible to process large-scale data at very high speed.
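The chunked-parallel pattern underlying such per-application parallelism can be sketched with the Python standard library. This is an illustration only; the actual processor relies on OpenMP/OpenCL/CUDA for heavy numeric work, and the per-pixel operation here is a placeholder:

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Placeholder per-pixel operation standing in for a real Level-2
    # algorithm step (e.g. a band-ratio calculation).
    return [x * 2.0 for x in chunk]

def process_scene(pixels, n_workers=4):
    """Split a flat pixel list into chunks and process them concurrently.

    pool.map preserves chunk order, so the flattened result lines up
    with the input scene.
    """
    size = max(1, len(pixels) // n_workers)
    chunks = [pixels[i:i + size] for i in range(0, len(pixels), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(process_chunk, chunks))
    return [v for chunk in results for v in chunk]
```

For CPU-bound numeric kernels, native-code parallelism (as in the OpenMP/OpenCL/CUDA paths above) is what actually delivers the speedup; the sketch only shows how a scene is partitioned and reassembled.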
The GTBX works seamlessly with the generic functions of SNAP. By combining the various visualization and analysis functions of SNAP with easy access to and powerful processing of GOCI/GOCI-II data, we expect the GTBX to enable rich utilization of GOCI-II data.
How to cite: Heo, J.-M., Han, H.-J., Yang, H., Kwak, S., and Lee, T.: Development of GOCI-II Toolbox for SNAP, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18334, https://doi.org/10.5194/egusphere-egu2020-18334, 2020.
The shape and location of density anomalies inside the Moon provide insights into the processes that produced them and their subsequent evolution. Gravity measurements provide the most complete data set for inferring these anomalies on the Moon. However, gravity inversions suffer from inherent non-uniqueness. To circumvent this issue, it is often assumed that the Bouguer gravity anomalies are produced by the relief of the crust-mantle or another internal interface. This approach limits the recovery of 3D density anomalies or anomalies at different depths. In this work, we develop an algorithm that provides a set of likely three-dimensional models consistent with the observed gravity data, with no need to constrain the depth of anomalies a priori.
The volume of a sphere is divided into 6480 tesseroids and n Voronoi regions. The algorithm first assigns a density value to each Voronoi region, which can encompass one or more tesseroids. At each iteration, it can add or delete a region, or change its location [2, 3]. The optimal density of each region is then obtained by linear inversion of the gravity field, and the likelihood of the solution is calculated using Bayes’ theorem. After convergence, the algorithm outputs an ensemble of models with good fit to the observed data and high posterior probability. The ensemble might contain essentially similar interior density distribution models or many different ones, providing a view of the non-uniqueness of the inversion results.
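The grouping of tesseroids into Voronoi regions can be illustrated with a simplified two-dimensional sketch: each cell (standing in for a tesseroid) is labeled with the index of its nearest Voronoi seed. This is an illustration of the partitioning concept only, not the authors' implementation:

```python
def assign_voronoi(cells, seeds):
    """Assign each cell (a coordinate tuple) to the index of its nearest
    seed, giving the Voronoi partition that groups cells into regions.
    All cells sharing a label would share one inverted density value."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(seeds)), key=lambda i: dist2(c, seeds[i]))
            for c in cells]

# Toy 2-D example: four cells, two Voronoi seeds.
cells = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
seeds = [(0.0, 0.0), (5.0, 5.0)]
labels = assign_voronoi(cells, seeds)  # one region index per cell
```

Adding, deleting, or moving a seed changes the partition, which is what the trans-dimensional steps of the algorithm explore at each iteration.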
We use the lunar radial gravity acceleration obtained by the GRAIL mission up to spherical harmonic degree 400 as input data in the algorithm. The gravity acceleration of the resulting models matches the input gravity very well, missing only the gravity signature of smaller craters. A group of models shows a deep positive density anomaly in the general area of the Clavius basin. The anomaly is centered at approximately 50°S and 10°E, at about 800 km depth. Density anomalies in this group of models remain relatively small and could be explained by mineralogical differences in the mantle. Major variations in crustal structure, such as the nearside/farside dichotomy and the South Pole–Aitken basin, are also apparent, lending geological credence to these models. A different group of models points towards two high-density regions with a much higher mass than the one described by the first group; this group may be regarded as unrealistic. Our method embraces the non-uniqueness of gravity inversions and does not impose a single view of the interior, although geological knowledge and geodynamic analyses are of course important for evaluating the realism of each solution.
References: [1] Wieczorek, M. A. (2006), Treatise on Geophysics, 153–193, doi:10.1016/B978-0-444-53802-4.00169-X. [2] Izquierdo, K., et al. (2019), Geophys. J. Int., 220, 1687–1699, doi:10.1093/gji/ggz544. [3] Izquierdo, K., et al. (2019), LPSC 50, abstr. 2157. [4] Lemoine, F. G., et al. (2013), J. Geophys. Res., 118, 1676–1698, doi:10.1002/jgre.20118.
How to cite: Izquierdo, K., Montesi, L., and Lekic, V.: Inferences of the lunar interior from a probabilistic gravity inversion approach, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22202, https://doi.org/10.5194/egusphere-egu2020-22202, 2020.
The Waves of HIgh frequency and Sounder for Probing Electron density by Relaxation (WHISPER) instrument is part of the Wave Experiment Consortium (WEC) of the CLUSTER mission. The instrument consists of a receiver, a transmitter, and a wave spectrum analyzer. It delivers active (sounding) and natural electric field spectra. The characteristic signature of waves indicates the nature of the ambient plasma regime and, combined with the spacecraft position, reveals the different magnetospheric boundaries and regions. The electron density can be deduced from the characteristics of natural waves in natural mode and from the resonances triggered in the sounding mode. The electron density is a parameter of major scientific interest and is also commonly used for the calibration of the particle instruments.
Until recently, determining the electron density required manual intervention: visualizing input parameters from the experiments, such as the WHISPER active/passive spectrograms, combined with datasets from the other instruments onboard CLUSTER.
Work is being carried out to automate the detection of the electron density using Machine Learning and Deep Learning methods.
To automate this process, knowledge of the region (plasma regime) is highly desirable. To determine the different plasma regions, a Multi-Layer Perceptron has been implemented. This model consists of three dense neural network layers, with dropout added to prevent overfitting. For each detected region, a second Multi-Layer Perceptron was implemented to determine the plasma frequency. This model has been trained with 100k spectra using manually identified plasma frequency values. The accuracy reaches up to 98% in some plasma regions.
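A Multi-Layer Perceptron of this kind can be illustrated with a bare-bones forward pass in standard-library Python. The layer sizes, weights, and dropout rate below are illustrative stand-ins, not the trained operational model:

```python
import math
import random

def dense(x, weights, biases, activation=None):
    """One fully connected layer: y = activation(W x + b)."""
    y = [sum(w * xi for w, xi in zip(row, x)) + b
         for row, b in zip(weights, biases)]
    if activation == "relu":
        y = [max(0.0, v) for v in y]
    return y

def softmax(y):
    """Convert raw scores to class probabilities (numerically stable)."""
    m = max(y)
    e = [math.exp(v - m) for v in y]
    s = sum(e)
    return [v / s for v in e]

def dropout(x, rate, training):
    """Inverted dropout: randomly zero units during training only."""
    if not training or rate == 0.0:
        return x
    return [0.0 if random.random() < rate else v / (1.0 - rate) for v in x]

# Forward pass through a toy network (illustrative weights): two hidden
# ReLU layers with dropout, then a softmax over two plasma-region classes.
x = [0.5, -1.2]  # stand-in for spectral input features
h1 = dropout(dense(x, [[0.3, -0.1], [0.2, 0.4]], [0.0, 0.1], "relu"),
             rate=0.2, training=False)  # dropout inactive at inference
h2 = dense(h1, [[1.0, -0.5], [-0.3, 0.8]], [0.0, 0.0], "relu")
probs = softmax(dense(h2, [[0.6, -0.2], [-0.4, 0.9]], [0.0, 0.0]))
```

In practice such a model would be built with a deep learning framework and trained by backpropagation on the labeled spectra; the sketch only shows how dense layers, dropout, and the final class probabilities fit together.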
These models for automated determination of the electron density are also currently being applied to the dataset of the mutual impedance instrument (RPC-MIP) onboard ROSETTA and will be useful for other space missions such as BepiColombo (especially for the PWI/AM2P experiment) or JUICE (RPWI/MIME experiment).
How to cite: Gilet, N., De Leon, E., Jegou, K., Bucciantini, L., Vallières, X., Rauch, J.-L., and Décréau, P.: Automatic detection of the electron density from the WHISPER instrument onboard CLUSTER, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18883, https://doi.org/10.5194/egusphere-egu2020-18883, 2020.
With the ever-growing interest from the general public in understanding climate science, it is becoming increasingly important that we present this information in ways accessible to non-experts. In this pilot study, we use time series data from the first United Kingdom Earth System model (UKESM1) to create six procedurally generated musical pieces and use them to explain the process of modelling the Earth system and to engage with the wider community.
Scientific data is almost always represented graphically, either in figures or in videos. By adding audio to the visualisation of model data, the combination of music and imagery provides additional contextual clues that aid interpretation. Furthermore, the audiolisation of model data can be employed to generate interesting and captivating music, which can not only reach a wider audience but also hold listeners' attention for extended periods of time.
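One simple data-to-pitch mapping of the kind used in audiolisation is a linear scaling of a time series onto a MIDI note range. This is a generic sketch, not necessarily the authors' exact method:

```python
def pitch_map(values, low_note=48, high_note=84):
    """Linearly map a time series onto a MIDI note range, so that rising
    values (e.g. sea surface temperature) become rising pitch.

    MIDI note 48 is C3 and 84 is C6, a three-octave span."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for a flat series
    return [round(low_note + (v - lo) / span * (high_note - low_note))
            for v in values]

# Toy series of annual-mean SST values (degrees C), for illustration only.
notes = pitch_map([14.2, 14.5, 15.1, 15.8])  # MIDI note numbers
```

The resulting note numbers can then be rendered by any MIDI-capable synthesizer; mapping several model variables to separate voices is one way to build up a multi-part piece.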
Each of the six pieces presented in this work was themed around either a scientific principle or a practical aspect of earth system modelling. These pieces demonstrate the concepts of a spin up, a pre-industrial control run, multiple historical experiments, and the use of several future climate scenarios to a wider audience. They also show the ocean acidification over the historical period, the changes in circulation, the natural variability of the pre-industrial simulations, and the expected rise in sea surface temperature over the 20th century.
Each of these pieces was arranged using a different musical progression, style and tempo. All six pieces were performed by the digital piano synthesizer TiMidity++ and published on the lead author's YouTube channel. The videos all show the progression of the data in time with the music, and a brief description of the methodology is posted alongside each video.
To disseminate these works, links to each piece were published on the lead author's personal and professional social media accounts. The reach of these works was also analysed using YouTube's channel monitoring toolkit for content creators, YouTube studio.
How to cite: de Mora, L., Sellar, A., Yool, A., Palmieri, J., Smith, R. S., Kuhlbrodt, T., Parker, R. J., Walton, J., Blackford, J. C., and Jones, C. G.: Earth System Music: music generated from the first United Kingdom Earth System model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7267, https://doi.org/10.5194/egusphere-egu2020-7267, 2020.
EUMETSAT offers a vast and growing collection of Earth observation data produced by over 35 years of operational meteorological satellites. New data products are produced 24/7, 365 days a year, and consistency with previous satellites and other missions is ensured by intercalibration and reprocessing campaigns. The benefits for the geosciences community are readily apparent: a recent survey showed that EUMETSAT and its Satellite Application Facilities produce 26% of the Essential Climate Variable records identified by the Global Climate Observing System that can be observed from space.
With the advent of new core satellite programmes and many narrowly focused missions, the volume and complexity of the generated data products will increase significantly, making it unfeasible for traditional workflows, relying on accessing data holdings present on the user's premises, to fully exploit these observations.
Users can access EUMETSAT data via two service categories: “push” services, currently provided by EUMETCast Satellite and delivering data to users via satellite systems in near real-time, and “pull” services, currently provided by the Long Term Archive and by the EUMETSAT Visualisation Service (EUMETView). EUMETSAT is in the process of reshaping its data services portfolio by leveraging big data and cloud computing technologies. The new Data Services are being phased into operations during 2020 and address several challenges with using EUMETSAT's data: near real-time data access, accessing time series, viewing data, transforming it to make it compatible with downstream workflows, and processing data on the premises where they are stored.
EUMETSAT has established an on-premises hybrid cloud, in which new Data Services for online data access (Data Store), web map visualisations (View Service) and product format customisations (Data Tailor) are hosted. Additionally, our “push” services are extended, with the introduction of the EUMETCAST Terrestrial service.
The Data Store provides online access for directly downloading satellite data via a web-based user interface and APIs usable in processing chains. Users can download the data in its original format or customise it before download by invoking the Data Tailor Service. The View Service provides access via standard OGC Web Map, Web Coverage and Web Feature Services (WMS, WCS, WFS) which visualise data available in the Data Store. It is accessible via a web-based interface and APIs allowing the integration of visualisations in end-user applications. EUMETCast Terrestrial is an evolution of the EUMETCast Satellite system that relies on the network infrastructure provided by GEANT and its partners plus the Internet to deliver high volumes of data worldwide. EUMETCast Terrestrial is able to deliver data outside the EUMETCast Satellite footprint and to user communities large enough to benefit from a multicast service, but not large enough to justify a full satellite-based broadcast.
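For illustration, a GetMap request against a WMS endpoint of this kind can be assembled as below. The endpoint URL and layer name are hypothetical, not actual View Service identifiers:

```python
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, size=(800, 600),
                   crs="EPSG:4326", fmt="image/png"):
    """Build a standard OGC WMS 1.3.0 GetMap request URL.

    bbox follows the axis order of the chosen CRS
    (min, min, max, max along the two axes)."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": fmt,
    }
    return f"{endpoint}?{urlencode(params)}"

# Hypothetical endpoint and layer name, for illustration only.
url = wms_getmap_url("https://view.example.eumetsat.int/wms",
                     "msg_ir108", (30.0, -30.0, 70.0, 40.0))
```

Because WMS, WCS and WFS are OGC standards, the same request pattern works from GIS desktop clients, web map libraries, and scripted processing chains alike, which is what makes the View Service easy to integrate into end-user applications.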
This presentation will showcase these new Data Services, which enable users to transition from traditional local data processing regimes to cloud-native research workflows. With the new Data Services, users can easily discover, explore, and tailor data products to their needs and thus shift the effort from data and infrastructure handling to domain-specific and scientific topics.
How to cite: Stoicescu, M., Aubert, G., Borgia, F., Espanyol, O., Higgins, M., Horny, M., Lee, D., Miu, P., Parodi, I., Patchett, A., Renner, K.-P., Rodriguez Guerra, J., Romero, R., Rothfuss, H., Saalmueller, J., Schick, M., Wannop, S., and Wolf, L.: Reducing Time to Results with EUMETSAT's New Data Services, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10267, https://doi.org/10.5194/egusphere-egu2020-10267, 2020.
The Aviation Weather Center (AWC) is part of the US National Weather Service, providing domestic and global aviation weather forecasts and warnings. Forecasters and automated systems disseminate up-to-date information, including through our website, aviationweather.gov. Recent years have seen AWC transition from primarily text-based products to increasingly interactive and graphical tools. While nearly all information is presented in two dimensions, recent work has focused on adding three- and four-dimensional visualizations to increase understanding in the user community. Web-based technologies allow large datasets to be interrogated on the fly without specific software installed on the client device, while providing a more complete picture and supporting the development of conceptual models beyond fixed horizontal slices of airspace.
How to cite: Cross, A.: Visualization Techniques at the Aviation Weather Testbed, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10988, https://doi.org/10.5194/egusphere-egu2020-10988, 2020.
The StraboSpot digital data system is designed to allow researchers to digitally collect, store, and share both field and laboratory data. Originally designed for structural geology field data, StraboSpot has been extended to field-based petrology and sedimentology. Current efforts will integrate micrographs and data related to microscale and experimental rock deformation. The StraboSpot data system uses a graph database, rather than a relational database approach. This approach increases its flexibility and allows the system to track geologically complex relationships. StraboSpot currently operates on two different platform types: (1) a field-based application that functions with or without internet access; and (2) a web interface (Internet-connected settings only).
The data system uses two main concepts - spots and tags - to organize data. A spot consists of a specific area at any spatial scale of observation. Spots are related in a purely spatial manner, and consequently, one spot can enclose multiple other spots that themselves contain spots. Spatial data can thus be tracked from regional to microscopic scale. Tags provide conceptual grouping of spots, allowing linkages between spots that are independent of their spatial position. A simple example of a tag is a geologic unit or formation. Multiple tags can be assigned to any spot, and tags can be assigned throughout a field study. The advantage of tags is their flexibility, in that they can be completely defined by individual scientists. Critically, tags are independent of the spatial scale of the observation. Tags may also be used to accommodate complex and complete descriptions.
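The spot-and-tag model can be sketched minimally as follows. The class structure and the formation name are hypothetical illustrations; StraboSpot itself uses a graph database rather than in-memory objects:

```python
class Spot:
    """An observation area at any spatial scale; spots nest spatially
    and can carry any number of scale-independent, user-defined tags."""
    def __init__(self, name):
        self.name = name
        self.children = []   # spots spatially enclosed by this one
        self.tags = set()    # conceptual groupings, independent of scale

    def enclose(self, other):
        self.children.append(other)

def spots_with_tag(root, tag):
    """Collect root and all nested spots carrying a given tag."""
    found = [root] if tag in root.tags else []
    for child in root.children:
        found += spots_with_tag(child, tag)
    return found

# Regional -> hand-sample -> microscopic nesting (hypothetical names).
outcrop = Spot("outcrop-1")
sample = Spot("sample-1a")
thin_section = Spot("ts-1a-01")
outcrop.enclose(sample)
sample.enclose(thin_section)
for s in (outcrop, sample, thin_section):
    s.tags.add("Formation A")   # one tag spanning all spatial scales
```

Because tags are just labels attached to spots, the same tag can group a map-scale spot and a thin-section-scale spot, which is the scale independence described above.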
The strength of the StraboSpot platform is its flexibility, and that it can be linked to other existing and future databases in order to integrate with digital efforts across the geological sciences. The StraboSpot data system – in coordination with other digital data efforts – will allow researchers to conduct types of science that were previously not possible and allows geologists to join big data initiatives.
How to cite: Newman, J., Walker, J. D., Tikoff, B., and Williams, R.: The StraboSpot data system for geological research, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12692, https://doi.org/10.5194/egusphere-egu2020-12692, 2020.
The European Plate Observing System (EPOS) has established a pan-European infrastructure for solid Earth science data, governed by EPOS ERIC (European Research Infrastructure Consortium). The EPOS-Norway project is funded by the Research Council of Norway (Project no. 245763). The aim of the Norwegian EPOS e‑infrastructure is to integrate data from the Norwegian seismological and geodetic networks, as well as the data from the geological and geophysical data repositories.
We present the EPOS-Norway Portal as an online, open access, interactive tool allowing visual analysis of multidimensional data. Currently, it provides access to more than 150 datasets (and growing) from four subdomains of Earth science in Norway. These can be combined with users' own data.
The EPOS-N Portal is implemented using Enlighten-web, a web program developed by NORCE. Enlighten-web facilitates interactive visual analysis of large multidimensional data sets. The Enlighten-web client runs inside a web browser. The user can create layouts consisting of one or more plots or views. Supported plot types are table views, scatter plots, vector plots, line plots and map views. For the map views the CESIUM framework is applied. Multiple scatter plots can be mapped on top of these map views.
An important element in the Enlighten-web functionality is brushing and linking, which is useful for exploring complex data sets to discover correlations and interesting properties hidden in the data. Brushing refers to interactively selecting a subset of the data. Linking involves two or more views on the same data sets, showing different attributes. The views are linked to each other, so that selecting a subset in one view automatically leads to the corresponding subsets being highlighted in all other linked views. If the updates in the linked plots are close to real-time while brushing, the user can perceive complex trends in the data by seeing how the selections in the linked plots vary depending on changes in the brushed subset. This interactivity requires GPU acceleration of the graphics rendering. In Enlighten-web, this is realized by using WebGL.
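A minimal sketch of brushing and linking, independent of Enlighten-web's actual WebGL implementation: a brush in one view selects a data subset, and every registered view is notified of the shared selection so it can re-highlight itself:

```python
class LinkedViews:
    """Minimal brushing-and-linking: one shared selection over a data set,
    propagated to every registered view whenever a brush changes it."""
    def __init__(self, data):
        self.data = data
        self.selection = set()   # indices of currently brushed records
        self.listeners = []      # one callback per linked view

    def register(self, callback):
        self.listeners.append(callback)

    def brush(self, predicate):
        """Select indices of records satisfying the brush predicate,
        then notify all linked views of the new selection."""
        self.selection = {i for i, rec in enumerate(self.data)
                          if predicate(rec)}
        for notify in self.listeners:
            notify(self.selection)

# Toy example: brushing large-magnitude earthquakes in one view
# highlights the corresponding records in every linked view.
records = [{"mag": 2.1}, {"mag": 4.8}, {"mag": 5.3}]
views = LinkedViews(records)
highlighted = []
views.register(lambda sel: highlighted.append(sorted(sel)))
views.brush(lambda rec: rec["mag"] >= 4.0)
```

In a real client each callback would redraw a scatter plot or map layer; keeping those redraws near real-time while the brush moves is why Enlighten-web renders with GPU-accelerated WebGL.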
The EPOS-N Portal accesses the external Granularity Database (GRDB) for metadata handling. Metadata can specify, for example, data sources, services, ownership, license information and data policy. Bar charts can be used for faceted search in the metadata, e.g. search by categories. The EPOS-N Portal can access remote datasets via web services. Relevant web services include FDSNWS for seismological data and OGC services (e.g. WMS, Web Map Service) for geological and geophysical data. Standalone datasets are available through preloaded data files. Users can also simply add another WMS server or upload their own dataset for visualization.
Enlighten-web will also be adapted as a pilot ICS-D (Integrated Core Services - Distributed) for visualization in the European infrastructure. The EPOS ICS-C (Integrated Core Services - Central) is the entry point for users accessing the e-Infrastructure under EPOS ERIC. ICS-C will let users create and manage workflows that usually include accessing data and services located in the EPOS Thematic Core Services (TCS). The ICS-C and TCSs will be extended with additional computing facilities through the ICS-D concept.
How to cite: Langeland, T., Daae Lampe, O., Fonnes, G., Atakan, K., Michalek, J., Rønnevik, C., Utheim, T., and Tellefsen, K.: EPOS-Norway Portal, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18998, https://doi.org/10.5194/egusphere-egu2020-18998, 2020.