The increasing amount of data from a growing number of spacecraft in our solar system calls for new data analysis strategies. There is a need for frameworks that can rapidly and intelligently extract information from these data sets in a manner useful for scientific analysis. The community is starting to respond to this need. Machine learning, with all of its different facets, provides a viable playground for tackling a wide range of research questions. Algorithms that automatically detect and classify features in time series of the solar wind, or in 2D images of planetary surfaces, are examples of where machine learning approaches can support and improve existing models. Further, modern learning methods can encode properties of interest in a lower-dimensional space, making them more searchable.
We encourage submissions dealing with machine learning approaches of all levels in planetary sciences and heliophysics. The aim of this session is to provide an overview of current efforts to integrate machine learning technologies into data-driven space research, to highlight state-of-the-art developments, and to generate a wider discussion on further possible applications of machine learning.
vPICO presentations: Fri, 30 Apr
The surface of Mars is riddled with dunes that form by accumulating sand particles that are carried by the wind. Since dune geometry and orientation adjust in response to prevailing wind conditions, the morphometrics of dunes can reveal information about the winds that formed them.
Previous studies inferred the prevailing local wind direction from the orientation of dunes by manually analyzing spacecraft imagery. However, building a global map remained challenging, as manual detection of individual dunes over the entire Martian surface is impractical. Here, we employ Mask R-CNN, a state-of-the-art instance segmentation neural network, to detect and analyze isolated barchan dunes on a global scale.
We prepared a training dataset by extracting Mars Context Camera (CTX) scenes of dune fields from a global CTX mosaic, as identified in the global dune-fields catalog. Images were cropped and standardized to a resolution of 832x832 pixels, and labeled using Labelbox’s online instance segmentation platform. Image augmentation and weight decay were employed to prevent overfitting during training. By inspecting 100 sample images from the validation dataset, we find that the network correctly identified ~86% of the isolated dunes, falsely identifying a feature as a barchan dune in only a single image.
After the dune outlines are detected, they are automatically analyzed to extract the dominant-wind and net sand-flux directions using traditional computer vision techniques. We expect our future surface-wind dataset to serve as a constraint for atmospheric global circulation models, to help predict weather events for upcoming in situ missions, and to shed new light on the recent climate history of Mars.
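As an illustration of the kind of traditional computer-vision analysis mentioned above, the orientation of a detected dune can be estimated from its binary mask via second-order image moments. This is a generic sketch on a synthetic mask, not the authors' pipeline:

```python
import numpy as np

def mask_orientation(mask):
    """Estimate the long-axis orientation (radians, CCW from the x-axis)
    of a binary mask from its second-order central image moments."""
    ys, xs = np.nonzero(mask)
    x0, y0 = xs.mean(), ys.mean()
    mu20 = ((xs - x0) ** 2).mean()
    mu02 = ((ys - y0) ** 2).mean()
    mu11 = ((xs - x0) * (ys - y0)).mean()
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

# Synthetic elongated "dune" mask aligned with the x-axis
yy, xx = np.mgrid[0:100, 0:200]
mask = ((xx - 100) / 80) ** 2 + ((yy - 50) / 20) ** 2 <= 1.0
theta = mask_orientation(mask)  # ~0 rad for an x-aligned ellipse
```

In a real pipeline the mask would come from the Mask R-CNN detections, and the moment-based axis would be combined with the dune's asymmetry (horn direction) to resolve the wind direction.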
How to cite: Rubanenko, L., Lapotre, M. G. A., Schull, J., Perez-Lopez, S., Fenton, L. K., and Ewing, R. C.: Mapping Surface Winds on Mars from the Global Distribution of Barchan Dunes Employing an Instance Segmentation Neural Network, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12960, https://doi.org/10.5194/egusphere-egu21-12960, 2021.
Mounds are positive relief features that can be ascribed to a variety of phenomena; they can be related to monogenetic edifices due to spring or mud volcanism, rootless cones on top of lava flows, pingos, and so on. In the case of sedimentary or mud extrusion from springs, these mounds can be widespread regionally and/or contained in large complex craters, often in populations of several hundreds or thousands. Previous work on the detection of such mounds in Arabia Terra, Mars, involved exploiting morphometric parameters and mapping them onto Digital Terrain Models. In this work, we take a step further and develop more general methods to detect them automatically without explicitly defining the topographical features. We achieve this by using a generative framework trained in an adversarial fashion to produce realistic mappings with only a small number of training samples. Further, we introduce a terrain simulator based on this framework that learns the terrain-simulation parameters and allows us to induce domain-specific knowledge automatically into the network. Our key results indicate that learning latent representations based on simulations can improve detection accuracy while making the detector more robust to changing terrain scenarios.
How to cite: Julka, S., Granitzer, M., De Toffoli, B., Penasa, L., Pozzobon, R., and Amerstorfer, U.: Generative Adversarial Networks for automatic detection of mounds in Digital Terrain Models (Mars Arabia Terra), EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9188, https://doi.org/10.5194/egusphere-egu21-9188, 2021.
In the development of Mars climate models, modeling clouds is an important challenge, especially for CO2 clouds. This is due to the complexity of the atmospheric processes involved, which may require rethinking microphysical theories, but also to the scarcity of observations. In the late 1990s, the Mars Orbiter Laser Altimeter (MOLA) was one of the instruments aboard the Mars Global Surveyor spacecraft. Its primary goal was to build a precise map of Mars’ topography through laser altimetry, but its sensitivity allowed for cloud observations as well. Previous studies (Neumann et al. 2003; Ivanov & Muhleman 2001) have thus shown that some laser returns were cloud signatures coming from the atmosphere. At that time, however, the huge amount of data was analysed using simple distinction criteria.
We use the K-means clustering algorithm to computationally analyse MOLA data. In order to optimise the method, we first determine the observed parameters that best distinguish the different kinds of returns (surface, noise and clouds). The best number of clusters is determined using three independent methods: elbow, silhouette score and gap statistics. The method is tested on a restricted sample (10% of the dataset) and then applied to the full raw dataset. Once the cloud cluster is identified, we can plot spatial and temporal distributions of the cloud returns and compare them with previous results.
As mentioned by Neumann et al. (2003), the product of surface reflectivity and two-way transmissivity of the atmosphere appears to be the best parameter for discriminating between surface and cloud returns. A unique number of clusters (6) is identified by all three optimisation methods. Among those clusters, one clearly identifies cloud returns, while the others represent noise and surface returns. Our method allows us to identify more clouds than previous studies. Our cloud distribution remains consistent with those given in previous studies, showing the viability of our method. We will present a catalog of cloud returns from MOLA data. We are now working to separate different kinds of clouds within these returns (absorptive and reflective clouds, CO2/water clouds, dust, etc.) using machine learning algorithms and a recent MOLA surface reflectivity map (Heavens et al. 2016).
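The cluster-number selection via silhouette score can be sketched with scikit-learn on synthetic data standing in for the MOLA return parameters (the elbow and gap-statistic checks are analogous; this is an illustration, not the authors' code):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Synthetic stand-in for the MOLA feature vectors: three separated groups
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2))
               for c in ([0, 0], [5, 5], [0, 5])])

# Score candidate cluster counts and keep the one with the best silhouette
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)  # 3 for this synthetic example
```

On the real dataset, each point would be a MOLA return described by the discriminating parameters, and the cluster whose members share cloud-like properties would then be mapped in space and season.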
How to cite: Caillé, V., Määttänen, A., Spiga, A., Falletti, L., and Neumann, G. A.: Cloud Catalog from Mars Orbiter Laser Altimeter / Mars Global Surveyor Data Using Machine Learning Algorithms, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-14672, https://doi.org/10.5194/egusphere-egu21-14672, 2021.
Being the source region of fast solar wind streams, coronal holes are one of the key components that impact space weather. The precise detection of the coronal hole boundary is an important criterion for forecasting and solar wind modeling, but it also challenges our current understanding of the magnetic structure of the Sun. We use deep learning to provide new methods for the detection of coronal holes, based on multi-band EUV filtergrams and the LOS magnetogram from the AIA and HMI instruments onboard the Solar Dynamics Observatory. The proposed neural network is capable of simultaneously identifying full-disk correlations as well as small-scale structures, and efficiently combines the multi-channel information into a single detection. From the comparison with an independent manually curated test set, the model provides a more stable extraction of coronal holes than the samples considered for training. Our method operates in real time and provides reliable coronal hole extractions throughout the solar cycle, without any additional adjustments. We further investigate the importance of the individual channels and show that our neural network can identify coronal holes solely from magnetic field data.
How to cite: Jarolim, R., Veronig, A., Hofmeister, S., Heinemann, S., Temmer, M., Podladchikova, T., and Dissauer, K.: Multi-Channel Coronal Hole Detection with Convolutional Neural Networks, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-1490, https://doi.org/10.5194/egusphere-egu21-1490, 2021.
The perplexing mystery of what maintains the solar coronal temperature at about a million K, while the visible disc of the Sun is only at 5800 K, has been a long-standing problem in solar physics. A recent study by Mondal et al. (2020, ApJ, 895, L39) provided the first evidence for the presence of numerous ubiquitous impulsive emissions at low radio frequencies from quiet Sun regions, which could hold the key to solving this mystery. These Weak Impulsive Narrowband Quiet Sun Emissions (WINQSEs) occur at rates of about five hundred events per minute, and their strength is only a few percent of the background steady emission. Based on earlier work with events of larger flux densities and on theoretical considerations, WINQSEs are expected to be compact in the image plane. To characterise the spatial structure of WINQSEs, we have developed a pipeline based on an unsupervised machine learning approach. We first identify the boundary of the radio Sun using edge detection techniques, and detect peaks within the solar boundary. Density-Based Spatial Clustering of Applications with Noise (DBSCAN), an unsupervised machine learning algorithm, is used to classify the peaks as isolated or clustered. It is also used to find the optimal hyper-parameters for peak fitting. The peaks are then fit with Gaussian models, and statistical and heuristic filtering criteria are used to obtain robust fits for a subset of these WINQSEs. We find that the vast majority of WINQSEs can be described by well-behaved compact Gaussians. By its very design, this approach is focused on the morphological characterisation of these weak features and is better suited for identifying them than earlier attempts. We present here our first results on the observed distributions of intensities, sizes and axial ratios of the Gaussian models for WINQSEs, derived from the analysis of multiple independent datasets.
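The isolated-versus-clustered peak classification can be illustrated with scikit-learn's DBSCAN on synthetic peak coordinates (a sketch; the eps and min_samples values are assumptions, not the pipeline's actual hyper-parameters):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Synthetic peak positions (pixels): one tight cluster plus two isolated peaks
clustered = rng.normal(loc=[50, 50], scale=1.0, size=(10, 2))
isolated = np.array([[10.0, 90.0], [90.0, 10.0]])
peaks = np.vstack([clustered, isolated])

# DBSCAN groups density-connected peaks; low-density points get label -1
labels = DBSCAN(eps=5.0, min_samples=3).fit_predict(peaks)
is_isolated = labels == -1
```

Isolated peaks (label -1) can then be fit with single Gaussians, while clustered peaks require joint multi-component fits.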
How to cite: Alam, U., Bawaji, S., Mondal, S., and Oberoi, D.: An Unsupervised Machine Learning Pipeline to study the shape of solar WINQSEs, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10954, https://doi.org/10.5194/egusphere-egu21-10954, 2021.
Last year we published a method for the automatic classification of the solar wind [1]. We showed that data transformation and unsupervised clustering can be used to classify observations made by the ACE spacecraft. Two data transformation techniques were used: Kernel Principal Component Analysis (KPCA) and auto-encoder neural networks. After data transformation, three clustering techniques were tested: k-means, Bayesian Gaussian Mixtures (BGM), and Self-Organizing Maps (SOM). Although the results were very positive, we ran into a few difficulties: a) the data from the ACE mission contain a very small population of observations originating from high-latitude coronal holes, b) the measured features contain a high degree of intercorrelation, c) the data distribution is compact in the feature space, and d) the final algorithm produces a single categorical class for a single point in time.
In this work we present an improvement of the model that redresses some of the limitations above. We still make use of the two main features of our previous work, i.e. the data transformation using auto-encoders and the unsupervised classification using SOM. But in the present work: a) we include the analysis of Ulysses data with observations of the solar wind originating at high latitudes; b) we perform a factor analysis to reduce the number of features used as inputs; c) we transform windows of time of the multi-variate time series (instead of instantaneous observations) into scalograms using wavelet transformations; d) we apply the variational version of the auto-encoder [2] to parametrize the scalograms; e) we finally use the SOM to automatically classify the windows of time into different categories.
This method can be adapted to the classification of observations from the Parker Solar Probe and Solar Orbiter missions.
The work presented in this abstract has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 754304 (DEEP-EST, www.deep-projects.eu), and from the European Union's Horizon 2020 research and innovation programme under grant agreement No 776262 (AIDA, www.aida-space.eu).
[1] Amaya, Jorge, Romain Dupuis, Maria Elena Innocenti, and Giovanni Lapenta. "Visualizing and Interpreting Unsupervised Solar Wind Classifications." arXiv preprint arXiv:2004.13430 (2020).
[2] Kingma, Diederik P., and Max Welling. "Auto-Encoding Variational Bayes." arXiv preprint arXiv:1312.6114 (2013).
How to cite: Amaya, J., Jamal, S., and Lapenta, G.: Unsupervised Solar Wind Classification Using Wavelet Variational Autoencoders and Self-Organizing Maps, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12716, https://doi.org/10.5194/egusphere-egu21-12716, 2021.
Interplanetary coronal mass ejections (ICMEs) are one of the main drivers of space weather disturbances. In the past, different machine learning approaches have been used to automatically detect events in time series of solar wind in situ data. However, classification, early detection and, ultimately, forecasting still remain challenging when facing the large amount of data from different instruments. We attempt to further enhance existing convolutional neural network (CNN) models by extending them to process data from multiple spacecraft and by including a post-processing step commonly used in the area of computer vision. Additionally, we extend the previously binary classification problem to a multiclass classification, to also include corotating interaction regions (CIRs) in the range of detectable phenomena. Ultimately, we aspire to explore the suitability of several other methods used in time series forecasting, in order to pave the way for the elaboration of an early warning system.
How to cite: Ruedisser, H., Windisch, A., Amerstorfer, U. V., Amerstorfer, T., Moestl, C., and Bailey, R. L.: Automatic Detection and Classification of ICMEs in Solar Wind Data, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-1601, https://doi.org/10.5194/egusphere-egu21-1601, 2021.
GNSS positioning errors, spacecraft operation failures and power outages can originate from space weather in general, and from the interaction of the solar wind with the geomagnetic field in particular. Depending on the solar wind speed, information from solar wind monitor spacecraft at L1 gives a lead time of only 20 to 90 minutes to take safety measures. This very short lead time requires end users to have the most reliable warnings of when potential impacts will actually occur. In this study we present a machine learning algorithm that is suitable for predicting the solar wind propagation delay between the Lagrangian point L1 and the Earth. This work introduces the proposed algorithm and investigates its operational applicability in a real-time scenario.
The propagation delay is measured between interplanetary shocks passing the Advanced Composition Explorer (ACE) and the corresponding sudden commencements recorded later within the magnetosphere by ground-based magnetometers. Overall, 380 interplanetary shocks with data ranging from 1998 to 2018 build up the database used to train the machine learning model. We investigate two different feature sets: one model is trained on data resembling the DSCOVR real-time solar wind (RTSW) product, which contains all three components of the solar wind velocity, while the other is trained on data resembling the ACE RTSW product, which provides only the bulk solar wind speed. Both feature sets also contain the positions of the spacecraft. The performance of the machine learning models is assessed on the basis of a 10-fold cross-validation. The major advantage of the machine learning approach is its simplicity in application: after training, only the feature values have to be fed into the algorithm, and the propagation delay can be evaluated continuously.
Both machine learning models will be validated against a simple convective solar wind propagation delay model as it is also used in operational space weather centers. For this purpose time periods will be investigated where L1 spacecraft and Earth satellites just outside the magnetosphere probe the same features of the interplanetary magnetic field. This method allows a detailed validation of the solar wind propagation delay apart from the technique that relies on interplanetary shocks.
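The baseline convective model used for validation amounts to dividing the monitor's upstream distance by the measured radial speed. A minimal sketch, with an illustrative L1 distance rather than an actual ephemeris value:

```python
def convective_delay(x_sc_km, vx_km_s):
    """Flat (convective) propagation delay: time for solar wind observed at
    an upstream monitor to convect along the Sun-Earth line to Earth.
    x_sc_km: upstream distance of the spacecraft from Earth (km)
    vx_km_s: radial solar wind speed (km/s)."""
    return x_sc_km / vx_km_s

# L1 sits roughly 1.5 million km sunward of Earth (illustrative value)
delay_s = convective_delay(1.5e6, 500.0)  # 3000 s, i.e. 50 min at 500 km/s
```

The machine learning models are expected to improve on this baseline by implicitly capturing non-radial propagation and spacecraft-position effects that the flat model ignores.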
How to cite: Baumann, C. and McCloskey, A. E.: Solar wind propagation delay predictions between L1 and Earth based on machine learning, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9624, https://doi.org/10.5194/egusphere-egu21-9624, 2021.
During its 2011-2015 orbital lifetime, the MESSENGER spacecraft completed more than 4000 orbits around Mercury, producing vast amounts of information about the planetary magnetic field and magnetospheric processes. During each orbit the spacecraft left and re-entered the Hermean magnetosphere, giving us information about more than 8000 crossings of the bow shock and the magnetopause of Mercury's magnetosphere. The magnetometer data offer the possibility to study in depth the structures and shapes of the bow shock and magnetopause current sheets. In this work, we take a step in this direction by automatically detecting the bow-shock and magnetopause crossings. To this end, we pose a five-class classification problem and train a Convolutional Neural Network based classifier on the magnetometer data. Our key experimental results indicate that an average precision and recall of at least 87% and 96%, respectively, can be achieved on the bow shock and magnetopause crossings using only a small subset of the data. We also model the average three-dimensional shape of these boundaries depending on the external interplanetary magnetic field. Furthermore, we attempt to clarify the dependence of the two boundary locations on the heliocentric distance of Mercury and on the phase of the solar activity cycle. This work may be of particular interest for future Mercury research related to the BepiColombo mission, which will enter orbit around Mercury around December 2025.
How to cite: Lavrukhin, A., Parunakian, D., Nevskiy, D., Julka, S., Granitzer, M., Windisch, A., Möstl, C., Reiss, M. A., Bailey, R. L., and Amerstorfer, U.: Automatic detection of magnetopause and bow shock crossing signatures in MESSENGER magnetometer data using Convolutional Neural Networks., EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12703, https://doi.org/10.5194/egusphere-egu21-12703, 2021.
The surface of Mercury has been mapped in the 400–1145 nm wavelength range by the Mercury Atmospheric and Surface Composition Spectrometer (MASCS) instrument during orbital observations by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft.
Under the hypothesis that surface compositional information can be efficiently derived from spectral reflectance measurements with the use of machine learning techniques, we have conducted unsupervised hierarchical clustering analyses to identify and characterize spectral units from MASCS observations.
We apply our analysis to the latest MESSENGER data delivery to the PDS, which includes the new spectral photometric correction, finding results consistent with our previous analysis based on our custom photometric-effect removal.
The input is a global hyperspectral data cube image of normalized MASCS visible (VIS) detector spectra, from the first Earth year of the orbital mission. Data coverage varies from region to region, but global maps at 1 degree/pixel can be obtained with a high signal-to-noise ratio (SNR). The resultant hyperspectral map was then visually inspected to search for anomalies that originated mainly in regions of low coverage or from high levels of spectral variation within a single pixel.
Our approach consists of several steps:
1. Data cleaning: removal of data artifacts.
2. Independent Component Analysis (ICA): feature compression and demixing of the underlying signals.
3. Manifold learning: embedding of the data in a low-dimensional space via UMAP.
4. Hierarchical clustering: creation of spectrally similar partitions, projected onto the surface and compared with existing human-generated classifications.
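Steps 2 and 4 can be sketched with scikit-learn on random data standing in for the MASCS cube (an illustration under assumed dimensions, not the authors' code; the UMAP embedding of step 3 is omitted to keep the sketch self-contained):

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# Stand-in for the hyperspectral cube: 500 "pixels" x 128 spectral channels,
# built from two latent spectral sources plus noise
sources = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 128))
spectra = sources @ mixing + 0.05 * rng.normal(size=(500, 128))

# Step 2: ICA for feature compression / demixing of underlying signals
features = FastICA(n_components=2, random_state=0).fit_transform(spectra)

# Step 4: hierarchical clustering into spectrally similar units
units = AgglomerativeClustering(n_clusters=2).fit_predict(features)
```

In the actual analysis, the cluster labels are projected back onto the 1 degree/pixel global map to delineate the spectral units discussed below.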
We found the existence of two large and spectrally distinct regions, which we call the polar spectral unit (PSU) and the equatorial spectral unit (ESU).
The spatial extent of the polar unit in the northern hemisphere generally correlates well with that of the northern volcanic plains.
Further analysis indicates the presence of smaller sub-units that lie near the boundaries of these large regions and may be transitional areas of intermediate spectral characters.
How to cite: D'Amore, M., Helbert, J., Alessandro, M., and Indhu, V.: Unsupervised classification of Mercury’S Visible–Near-Infrared MASCS/MESSENGER reflectance spectra for automated surface mapping., EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-2661, https://doi.org/10.5194/egusphere-egu21-2661, 2021.
Ultra-low frequency (ULF) magnetospheric plasma waves play a key role in the dynamics of the Earth’s magnetosphere and, therefore, their importance in Space Weather studies is indisputable. Magnetic field measurements from recent multi-satellite missions are currently advancing our knowledge of the physics of ULF waves. In particular, Swarm satellites have contributed to the expansion of data availability in the topside ionosphere, stimulating much recent progress in this area. Coupled with the new successful developments in artificial intelligence, we are now able to use more robust approaches for automated ULF wave identification and classification. The goal of this effort is to use a machine learning technique to classify ULF wave events using magnetic field data from Swarm. We construct a Convolutional Neural Network that takes as input the wavelet power spectra of the Earth’s magnetic field variations per track, as measured by each of the three Swarm satellites, aiming to classify ULF wave events into four categories: Pc3 wave events, background noise, false positives, and plasma instabilities. Our preliminary experiments show promising results, yielding a classification accuracy of 90%. We are currently working on producing larger datasets by analyzing Swarm data from mid-2014 onwards, when the final constellation was formed.
How to cite: Antonopoulou, A., Balasis, G., Papadimitriou, C., Boutsi, Z., Giannakis, O., Koutroumbas, K., and Rontogiannis, A.: A Machine Learning technique for ULF wave classification in Swarm magnetic field measurements, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8404, https://doi.org/10.5194/egusphere-egu21-8404, 2021.
Lightning whistler waves, an important tool for geospace exploration, can be found in the vast amounts of data from electromagnetic satellites. In recent years, with the development of computer vision and deep learning technologies, advanced algorithms have been developed to automatically identify lightning whistler waves in the massive archived data of electromagnetic satellites. However, these algorithms fail to automatically extract the dispersion coefficients of the lightning whistlers (DCW). Since the DCW depend on the propagation path of the lightning and on the geospace environment, they are extremely important for further geospace exploration.
We propose an algorithm that automatically extracts the dispersion coefficients of lightning whistlers: (1) using a two-second time window on the SCM VLF data from the ZH-1 satellite to obtain segmented data; (2) generating the time-frequency profile (TFP) of the segmented waveform by applying a band-pass filter and the short-time Fourier transform with 94% overlap; (3) annotating the ground truth of the whistlers with rectangular boxes on each time-frequency image to construct the training dataset; (4) building the YOLOv3 deep neural network and setting the training parameters; (5) feeding the training dataset to YOLOv3 to train the whistler recognition model; (6) detecting whistlers in unknown time-frequency images and extracting each whistler area within its rectangular box as a sub-image; (7) applying the BM3D algorithm to denoise the sub-image; (8) employing an adaptive threshold segmentation algorithm on the denoised sub-image to obtain a binary image that represents the whistler trace with black pixels and the remaining area with white pixels; (9) removing isolated points in the binary image with the morphological opening operation; (10) extracting the lightning whistler trajectory region using connected-domain analysis; (11) converting the trajectory coordinates from (t, f) to (t, f^(-1/2)); (12) following the Eckersley formula, which relates the dispersion coefficient to the time-frequency trace, using the least-squares method on the converted trajectory coordinates to fit a straight line, whose slope gives the dispersion coefficient.
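Step (12) reduces to a linear least-squares fit in the transformed coordinate, since the Eckersley law is t(f) = t0 + D·f^(-1/2) with D the dispersion coefficient. A sketch on synthetic data (the values of D and the frequency range below are made up, not ZH-1 measurements):

```python
import numpy as np

# Synthetic whistler trace following the Eckersley dispersion law
D_true, t0_true = 70.0, 0.2           # dispersion coefficient (s*Hz^0.5), onset
f = np.linspace(2e3, 10e3, 200)       # frequency samples (Hz)
t = t0_true + D_true / np.sqrt(f)
t += np.random.default_rng(0).normal(0.0, 1e-3, f.size)  # timing noise

# Linear least squares in the transformed coordinate f^(-1/2):
# the slope of t versus f^(-1/2) is the dispersion coefficient
slope, intercept = np.polyfit(1.0 / np.sqrt(f), t, 1)
D_est = slope
```

In the full pipeline, the (t, f) pairs come from the connected-domain trajectory extracted in steps (6)-(10) rather than from a synthetic curve.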
To evaluate the effectiveness of the proposed algorithm, we construct two datasets: a simulation set and an observational set. The simulation set is composed of 1000 lightning whistler trajectories generated according to the Eckersley formula. The observational dataset, containing 1000 actual single-trace lightning whistlers, is generated by collecting SCM VLF data from the ZH-1 satellite. The experimental results show that the mean-square error on the simulation set is below 2.8×10^-4, and the mean-square error on the observational dataset is below 2.1054×10^-3.
Keywords: ZH-1 satellite, SCM, lightning whistler, YOLOv3, dispersion coefficients
How to cite: Yuan, J., Zhou, L., Wang, Q., Yang, D., Zima, Z., Han, Y., Wang, Z., Shen, X., and Hu, L.: Automatic Extraction of the Dispersion Coefficients of Lightning Whistler Waves Observed By SCM Boarded On ZH-1 Satellite, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10477, https://doi.org/10.5194/egusphere-egu21-10477, 2021.
Lightning whistlers, found frequently in electromagnetic satellite observations, are an important tool for studying the electromagnetic environment of near-Earth space. With the increasing data volume from electromagnetic satellites, a considerable amount of time and human effort is needed to detect lightning whistlers in these data. In recent years, algorithms for the automatic detection of lightning whistlers have been developed. However, these methods only work on the time-frequency profile (image) of the electromagnetic satellite data and suffer from two major limitations: the vast storage memory required for the time-frequency profiles, and the expensive computation required to detect whistlers in them automatically. These limitations hinder the methods from working efficiently on the ZH-1 satellite. To overcome them and achieve automatic real-time whistler detection on board the satellite, we propose a novel algorithm that detects lightning whistlers in the original observed data, without transforming them into a time-frequency profile (image).
The motivation is that the frequency of lightning whistlers lies in the audio range, which encourages us to utilize speech recognition techniques to recognize whistlers in the original data of the SCM VLF instrument on board ZH-1. Firstly, we slide a 0.16-second window over the original data to obtain data patches, treated as audio clips. Secondly, we extract the Mel-frequency cepstral coefficients (MFCCs) of each patch as a cepstral representation of the audio clip. Thirdly, the MFCCs are fed into a Long Short-Term Memory (LSTM) recurrent neural network for classification. To evaluate the proposed method, we construct a dataset composed of 10000 segments of SCM wave data observed by the ZH-1 satellite (5000 segments containing whistlers and 5000 segments without any whistler). The proposed method achieves 84% accuracy, 87% recall and an 85.6% F1-score. Furthermore, it saves more than 126.7 MB of storage and 0.82 seconds of computation per time-frequency profile compared to the method employing the YOLOv3 neural network for whistler detection.
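The windowing in the first step can be sketched in NumPy (the sampling rate below is an assumed placeholder, not the documented ZH-1 value; the MFCC extraction and LSTM classification would then run on each clip):

```python
import numpy as np

def segment(waveform, fs, win_s=0.16):
    """Split a waveform into consecutive non-overlapping windows of win_s
    seconds, dropping any incomplete tail (illustrative sketch only)."""
    n = int(win_s * fs)
    n_win = len(waveform) // n
    return waveform[: n_win * n].reshape(n_win, n)

fs = 51200.0  # assumed VLF sampling rate for illustration
x = np.random.default_rng(0).normal(size=int(1.6 * fs))
clips = segment(x, fs)  # 10 clips of 0.16 s each
```

Each row of `clips` plays the role of one "audio clip" whose MFCC sequence is classified as whistler or non-whistler.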
Key words: ZH-1 satellite, SCM, lightning whistler, MFCC, LSTM
How to cite: Yuan, J., Wang, Z., Yang, D., Wang, Q., Zima, Z., Han, Y., Zhou, L., Shen, X., and Guo, Q.: Automatic Recognition of the Lighting Whistler waves from the Wave Data of SCM Boarded on ZH-1 satellite, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-11024, https://doi.org/10.5194/egusphere-egu21-11024, 2021.
The NASA InSight lander is a geophysical and meteorological observatory that has been operating on Mars for over a Martian year (two Earth years). Continuous records of seismic, pressure, wind and temperature data over this period have led to significant breakthroughs in determining the planet's structure and climate. With such a wealth of data now received, machine learning offers a nascent tool to extract further information.
The seismic data are strongly correlated with the atmospheric conditions. Discerning the coupling between the atmosphere and ground motion is of significant interest, and this work aims to predict the ground motion generated by wind and pressure forcing using machine learning techniques. From this prediction we can untangle the various contributions to ground motion, determine atmospheric and ground properties, analyse and discriminate marsquakes, and potentially decorrelate waveforms to remove the atmospheric contribution. While a physical model for this atmospheric forcing is desirable, machine learning approaches the problem from an alternative viewpoint, where mathematical and algorithmic tools add the necessary complexity for fitting the data. In this way, we may be able to capture detailed variation and inform further modelling efforts.
We will detail the initial application of machine learning to predicting the ground motion from the atmospheric data inputs of wind speed, wind direction, pressure and temperature. First, though, we will describe the issues that need to be tackled to obtain a good prediction using the InSight data. To illustrate some of these problems, consider that glitches are known to occur in the seismic data. They offer a way to detect overfitting, as they should not in general be predictable from atmospheric forcing. However, a subset of the glitches is correlated with temperature, on top of the fact that they are only visible during sufficiently quiet periods, as are marsquakes. Therefore, they are neither a normally distributed source of noise nor uncorrelated with the input atmospheric data, breaking typical assumptions used for regression. A similar issue is presented by the changing weather conditions throughout a Martian sol, where the distribution of the time series varies. As a result, prior information on the instrumentation and data qualities is essential for applying the machine learning methods and interpreting the results.
We demonstrate the specifics of the InSight data with respect to 1) how a curve-fitting problem can be constructed, 2) the necessary degrees of freedom of the problem, 3) the treatment of non-stationary/heteroscedastic errors, and 4) the optimisation and machine learning methods applied. Current results will be presented from the implementation of random forests and Gaussian processes. These results so far show good performance in capturing the global variation, and we will offer perspectives on how they can be used and improved.
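A minimal sketch of the random-forest regression set-up described above, on synthetic data (the forcing law, feature construction and all names are assumptions, not the authors' pipeline). Wind direction is encoded as sine/cosine so the model does not see an artificial discontinuity at 0/2π, and the train/test split is time-ordered, which matters for time series whose distribution drifts over a sol:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Hypothetical stand-in for the four atmospheric inputs sampled in time.
n = 4000
t = np.linspace(0.0, 10.0, n)
wind_speed = 3.0 + np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=n)
wind_dir = 2 * np.pi * rng.random(n)
pressure = 700.0 + 5.0 * np.cos(2 * np.pi * t) + rng.normal(size=n)
temp = 220.0 + 30.0 * np.sin(2 * np.pi * t / 10) + rng.normal(size=n)

# Assumed nonlinear forcing: motion grows with wind speed squared and
# with pressure fluctuations, plus Gaussian noise.
motion = wind_speed**2 + 0.5 * (pressure - 700.0) + 0.2 * rng.normal(size=n)

# Encode direction periodically; stack all inputs into a feature matrix.
X = np.column_stack([wind_speed, np.sin(wind_dir), np.cos(wind_dir),
                     pressure, temp])

# Time-ordered split: train on the first 75%, evaluate on later data.
split = int(0.75 * n)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:split], motion[:split])
score = r2_score(motion[split:], model.predict(X[split:]))
print(score)
```

A Gaussian process would fit into the same train/predict structure and additionally return predictive variances, which is one route to handling the heteroscedastic errors mentioned above.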
How to cite: Stott, A. E., Garcia, R. F., Pinot, B., Murdoch, N., Mimoun, D., Spiga, A., Banfield, D., Navarro, S., Mora-Sotomayor, L., Charalambous, C., Pike, W. T., Lognonné, P., and Horleston, A.: Atmospherically driven ground motion at InSight: a machine learning perspective, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12344, https://doi.org/10.5194/egusphere-egu21-12344, 2021.
Orbital laser altimeters deliver a wealth of data that is used to map planetary surfaces [1] and to understand the interiors of solar system bodies [2]. The accuracy and precision of laser altimetry measurements depend on knowledge of the spacecraft position and pointing, and on the instrument itself. Both are important for the retrieval of tidal parameters. In order to assess the quality of the altimeter retrievals, we are training and implementing an artificial neural network (ANN) to identify, and exclude from analysis, scans which yield erroneous data. The implementation is based on the PyTorch framework [3]. We present our results for the MESSENGER Mercury Laser Altimeter (MLA) data set [4], but also in view of the future analysis of data from the BepiColombo Laser Altimeter (BELA), which will arrive in orbit around Mercury in 2025 on board the Mercury Planetary Orbiter [5,6]. We further explore conventional methods of error identification and compare them with the machine learning results. Short periods of large residuals, or of large variation in the residuals, are identified and used to detect erroneous measurements. Furthermore, long-period systematics, such as those caused by slow variations in instrument pointing, can be modelled by including additional parameters.
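As an illustration of the kind of classifier involved, the sketch below trains a small PyTorch network to flag scans as erroneous from per-scan summary features (here, residual RMS and residual spread). The features, labels and architecture are assumptions for demonstration only; this is not the authors' network or the MLA data.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic per-scan features (assumed, illustrative): residual RMS and
# a rolling spread of the residuals, both in metres.
n = 2000
rms = torch.rand(n) * 10.0
spread = torch.rand(n) * 5.0
X = torch.stack([rms, spread], dim=1)

# Synthetic labels: scans with large residuals or large residual
# variation are marked "erroneous", mimicking the conventional criterion.
y = ((rms > 6.0) | (spread > 4.0)).float().unsqueeze(1)

# A small feed-forward classifier with a single hidden layer.
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

# Full-batch training; real scan data would use mini-batches and a
# held-out validation set.
for epoch in range(300):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    acc = ((torch.sigmoid(model(X)) > 0.5).float() == y).float().mean()
print(acc.item())
```

Because the synthetic labels here literally encode the residual-threshold rule, this sketch also shows how the ANN and the conventional method can be compared on the same footing: any scans where the two disagree are the interesting cases.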
[1] Zuber, Maria T., David E. Smith, Roger J. Phillips, Sean C. Solomon, Gregory A. Neumann, Steven A. Hauck, Stanton J. Peale, et al. ‘Topography of the Northern Hemisphere of Mercury from MESSENGER Laser Altimetry’. Science 336, no. 6078 (13 April 2012): 217–20. https://doi.org/10.1126/science.1218805.
[2] Thor, Robin N., Reinald Kallenbach, Ulrich R. Christensen, Philipp Gläser, Alexander Stark, Gregor Steinbrügge, and Jürgen Oberst. ‘Determination of the Lunar Body Tide from Global Laser Altimetry Data’. Journal of Geodesy 95, no. 1 (23 December 2020): 4. https://doi.org/10.1007/s00190-020-01455-8.
[3] Paszke, Adam, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, et al. ‘PyTorch: An Imperative Style, High-Performance Deep Learning Library’. Advances in Neural Information Processing Systems 32 (2019): 8026–37.
[4] Cavanaugh, John F., James C. Smith, Xiaoli Sun, Arlin E. Bartels, Luis Ramos-Izquierdo, Danny J. Krebs, Jan F. McGarry, et al. ‘The Mercury Laser Altimeter Instrument for the MESSENGER Mission’. Space Science Reviews 131, no. 1 (1 August 2007): 451–79. https://doi.org/10.1007/s11214-007-9273-4.
[5] Thomas, N., T. Spohn, J.-P. Barriot, W. Benz, G. Beutler, U. Christensen, V. Dehant, et al. ‘The BepiColombo Laser Altimeter (BELA): Concept and Baseline Design’. Planetary and Space Science 55, no. 10 (1 July 2007): 1398–1413. https://doi.org/10.1016/j.pss.2007.03.003.
[6] Benkhoff, Johannes, Jan van Casteren, Hajime Hayakawa, Masaki Fujimoto, Harri Laakso, Mauro Novara, Paolo Ferri, Helen R. Middleton, and Ruth Ziethe. ‘BepiColombo—Comprehensive Exploration of Mercury: Mission Overview and Science Goals’. Planetary and Space Science 58, no. 1 (1 January 2010): 2–20. https://doi.org/10.1016/j.pss.2009.09.020.
How to cite: Stenzel, O., Thor, R., and Hilchenbach, M.: Error identification in orbital laser altimeter data by machine learning, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-14749, https://doi.org/10.5194/egusphere-egu21-14749, 2021.