Acquisition and processing of gravity and magnetic field data and their integrative interpretation
Gravity and magnetic field data contribute to a wide range of geo-scientific research, from imaging the structure of the Earth and geodynamic processes (e.g. mass transport phenomena or deformation processes) to near-surface investigations. The session is dedicated to contributions related to spatial and temporal variations of the Earth's gravity and magnetic fields at all scales. Contributions to modern potential field research are welcome, including instrumental issues, data processing techniques, interpretation methods, innovative applications of the results and data collected by modern satellite missions (e.g. GOCE, GRACE, Swarm), potential theory, as well as case histories.
The GOCE satellite mission of the European Space Agency has delivered an unprecedented view of the gravity field of the Earth. In this data set, the strongest gravity gradient signals are observed at subduction zones in the form of a dipole. Despite numerous studies on subduction zones, it is still unclear what is causing this strong signal. Is the source of the observed dipole situated in the crust, mantle, or a combination of these?
We have constructed a 3D geometry of the Sumatra slab using the global SLAB1.0 model. This geometry is substituted into WINTERC5.4, a global upper mantle model produced in the ESA Support to Science Element: 3DEarth. The density in the subducting crust, the mantle, or a combination of both is fitted to the gravity gradients at satellite height. Laterally varying Green's functions are used to compute the gravity gradients from the densities. In the case of a combined crust/mantle model, spectral information on the sensitivity of the satellite gradients is used to construct a weighted inversion.
Preliminary results show that crustal mass transport (mostly from the overriding plate) in the direction of the subducting plate is mostly responsible for the negative anomaly observed in between the trench and the volcanic arc. This signal is, however, not visible along the complete subduction zone. Most crustal transport is seen where normal subduction takes place. Oblique subduction shows less crustal transport and more intra-crustal faulting. The satellite gravity gradients show high sensitivity to this particular crustal signature and therefore can be used to analyze subduction zones globally.
How to cite:
Root, B., van der Wal, W., and Fullea, J.: Decoupling the crustal and mantle gravity signature at subduction zones with satellite gravity gradients: case study Sumatra, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6625, https://doi.org/10.5194/egusphere-egu2020-6625, 2020
Arcangela Bollino, Anna Maria Marotta, Federica Restelli, Alessandro Regorda, and Roberto Sabadini
Subduction is responsible for surface displacements and deep mass redistribution. This rearrangement generates density anomalies in a wide spectrum of wavelengths which, in turn, cause important anomalies in the Earth's gravity field that are visible as lineaments parallel to the arc-trench systems. In these areas, when the traditional analysis of the deformation and stress fields is combined with the analysis of the perturbation of the gravity field and its slow time variation, new information on the background environment controlling the tectonic loading phase can be disclosed.
Here we present the results of a comparative analysis between the geodetically retrieved gravitational anomalies, based on the EIGEN-6C4 model, and those predicted by a 2D thermo-chemical mechanical modeling of the Sumatra and Mariana complexes.
The 2D model accounts for a wide range of parameters, such as the convergence velocity, the shallow dip angle, and the degree of coupling between the facing plates. The marker-in-cell technique is used to compositionally differentiate the system. Phase changes in the crust and in the mantle, as well as mantle hydration, are also allowed. To be compliant with the geodetic EIGEN-6C4 gravity data, we define a model normal Earth considering the vertical density distribution at the margins of the model domain, where the masses are not perturbed by the subduction process.
Model predictions are in good agreement with the data, in terms of both the wavelengths and the magnitudes of the gravity anomalies measured in the surroundings of the Sumatra and Mariana subduction zones. Furthermore, our modeling supports that the differences in the style of the gravity anomaly observed in the two areas are attributable to the different environments – ocean-ocean or ocean-continent subduction – which drive significantly different dynamics in the wedge area.
How to cite:
Bollino, A., Marotta, A. M., Restelli, F., Regorda, A., and Sabadini, R.: New insights on the dynamics of the Sumatra and Mariana complexes inferred from the comparative analysis of gravity data and model predictions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7399, https://doi.org/10.5194/egusphere-egu2020-7399, 2020
Paolo Mancinelli, Cristina Pauselli, Dominique Fournier, Maurizio Fedi, Giorgio Minelli, and Massimiliano Barchi
In this work, the crustal volume struck by the 2016-2017 seismic sequence in the Central and Northern Apennines is investigated using constrained 3D inversion of the Bouguer anomaly. After a preliminary regional field removal, the residual dataset is inverted into a 3D density contrast model. We test different geological scenarios and software settings with increasing complexity in the reference geometries. Geometries used in the reference models were retrieved from the available geological and geophysical information in the area. Starting with a reference model encompassing turbidites, carbonates, evaporites, and basement, we finally test the effects of a low-density layer at the top of the basement. The retrieved density distribution with depth is compatible with previous models. Moreover, the results support the hypothesis, based on borehole evidence, of a low-density upper basement across the entire area, possibly phyllitic in composition. Comparison of the resulting models with the spatial distribution at depth of M>3 seismic events between August and November 2016 allows us to locate the volumes with the highest concentration of seismic events. At both shallow and deep locations the majority of the events nucleated in relatively denser volumes, while the deepest events occur in a region of major density change corresponding to the top of the basement.
How to cite:
Mancinelli, P., Pauselli, C., Fournier, D., Fedi, M., Minelli, G., and Barchi, M.: 3D gravity inversion across the area struck by the 2016-2017 seismic events in Central and Northern Apennines, Italy, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1035, https://doi.org/10.5194/egusphere-egu2020-1035, 2019
The North China Craton (NCC) is one of the oldest cratons in the world, yet its destruction mechanism and geodynamics remain controversial. All of the proposed views on these issues involve the internal density structure of the NCC lithosphere. Gravity field data are among the most important data for investigating the lithospheric density structure, and gravity gradient data and gravity data each possess their own advantages. Given the inconsistent observation heights of the on-orbit GOCE satellite gravity gradients and the surface gravity data, as well as the effect of the initial density model on the inversion results, the joint inversion of gravity gradients and gravity is divided into two integrated processes. First, using a preconditioned conjugate gradient (PCG) inversion algorithm, a density model is calculated from the preprocessed residual gravity anomaly data. The newly obtained high-resolution density model is then used as the initial model, serving as a constraint for the subsequent gravity gradient inversion. Downward continuation, terrain correction, interface undulation correction and long-wavelength correction are performed on the four gravity gradient tensor components (Txx, Txz, Tyy, Tzz) of the Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite, after which the residual gravity gradient anomaly data (T'xx, T'xz, T'yy, T'zz) are used as the new observations. Finally, the lithospheric density distribution within the depth range of 0–180 km in the NCC is obtained using the same PCG algorithm.
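The PCG step described above can be illustrated with a minimal sketch. The kernel matrix, regularization, and Jacobi (diagonal) preconditioner below are illustrative assumptions, not the authors' actual operators, which act on gravity and gradient sensitivities:

```python
import numpy as np

def pcg_solve(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for an SPD system A x = b.
    M_inv is a callable applying the inverse of the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy damped least-squares problem: (G^T G + lam I) m = G^T d
rng = np.random.default_rng(0)
G = rng.normal(size=(60, 30))      # hypothetical sensitivity matrix
m_true = rng.normal(size=30)       # "true" density model
d = G @ m_true                     # noise-free data
lam = 1e-3
A = G.T @ G + lam * np.eye(30)
b = G.T @ d
diag_inv = 1.0 / np.diag(A)        # Jacobi preconditioner
m_est = pcg_solve(A, b, lambda r: diag_inv * r)
misfit = np.linalg.norm(G @ m_est - d) / np.linalg.norm(d)
```

In the two-stage scheme above, the model recovered from the gravity data would then seed a second PCG run on the gradient-tensor system.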
How to cite:
Tian, Y. and Wang, Y.: Joint inversion of the lithospheric density structure in the North China Craton based on GOCE satellite gravity gradient data and surface gravity data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3977, https://doi.org/10.5194/egusphere-egu2020-3977, 2020
Multidisciplinary research shows that the present-day movement and deformation of the Red River fault zone (RRFZ) have a complex segmented character. To further reveal its deep deformation mode, we first extract the gravity change signal of tectonic movement from repeated gravity measurements by removing the effects of water storage variation and vertical movement; second, combining a model of crustal density interfaces with this gravity change signal, we derive the deep-crustal deformation mode of the northern and middle RRFZ that causes the observed gravity variation.
The average effect within a 50 km radius is calculated for the recent gravity change rate in the Sichuan-Yunnan region, yielding a background rate field and a residual gravity change rate field. The regional trend of -0.66 μGal/yr of slow gravity decrease indicates that the gravity field inherits the uplifting background of the southeastern Tibetan Plateau. Crustal uplift is an important cause of the negative surface gravity changes, but these are mainly related to the deep tectonic environment. There are local zones of positive change in the block boundary areas, with obvious lateral extrusion and deep mass accumulation. This reflects that, under the dynamic environment of the eastward flow of the Tibetan Plateau, the crust of the northern and middle-southern sections of the RRFZ is extruded and the underlying mass densifies, which raises the surface gravity. The positive gravity changes in the upper-middle crust are more pronounced than those in the lower crust and at the Moho in the Sichuan-Yunnan area. The RRFZ also exhibits a strong demarcation feature as a plate boundary: the northern segment is the dividing line between areas of positive and negative gravity change, while the middle-southern segment and its two sides show a wide range of positive change trends, with continuing deep mass accumulation.
The results for deep crustal deformation show that both the upper and the lower crust are clearly demarcated along 101.5°E, with the southwest Yunnan side to the west subsiding (Moho: -0.05 m/yr; upper-middle crust: -0.03 m/yr) and the Sichuan-Yunnan block to the east rising (Moho: 0.05 m/yr; upper-middle crust: 0.02 m/yr), which demonstrates the control exerted at depth by the Kangdian crustal axis. The deformation rate of the deep crust is largest in the northern RRFZ, intermediate in the middle-southern segment, and smallest in the south. The gradient zone between the middle-southern segment of the RRFZ and the Chuxiong-Jianshui fault zone shows strong activity and differential motion in the upper-middle crust.
How to cite:
Wang, J.: Study on present deep crust deformation in northern and middle of the Red river fault zone by gravity method, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7917, https://doi.org/10.5194/egusphere-egu2020-7917, 2020
Guangdong Zhao, Jianxin Liu, Bo Chen, and Mikhail K. Kaban
The Tibetan Plateau, known as the roof of the Earth, is considered the "Golden Key" for understanding plate tectonics, continental collisions and continental orogenic formation. A reliable Moho structure is also vital for understanding the deformation mechanism of the Tibetan Plateau.
In this study, we use improved Parker−Oldenburg’s formulas that include a reference depth into the exponential term and employ a Gauss-FFT method to determine Moho depths beneath the Tibetan Plateau. The synthetic models demonstrate that the improved Parker’s formula has higher accuracy with the maximum absolute error less than 0.25 mGal.
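A minimal 1D sketch of the forward step underlying such schemes, the classical Parker (1973) wavenumber-domain series for the gravity effect of an undulating interface, is given below. The grid, relief, and density contrast are illustrative values, and this plain-FFT version does not include the reference-depth reformulation or the Gauss-FFT refinement described in the abstract:

```python
import math
import numpy as np

G_CONST = 6.674e-11  # gravitational constant (SI)

def parker_forward(relief, drho, z0, dx, nterms=4):
    """Gravity effect (m/s^2) of an interface at mean depth z0 with
    relief h (positive up) and density contrast drho, using the first
    nterms of Parker's series. 1D profile version for illustration."""
    n = relief.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    absk = np.abs(k)
    total = np.zeros(n, dtype=complex)
    for m in range(1, nterms + 1):
        total += absk ** (m - 1) / math.factorial(m) * np.fft.fft(relief ** m)
    g = 2 * np.pi * G_CONST * drho * np.fft.ifft(np.exp(-absk * z0) * total)
    return g.real

x = np.linspace(0, 100e3, 256, endpoint=False)          # 100 km profile
relief = 2e3 * np.exp(-((x - 50e3) / 10e3) ** 2)        # 2 km Moho uplift
g = parker_forward(relief, drho=400.0, z0=35e3, dx=x[1] - x[0])
```

A 2 km upwarp of a denser-below interface produces, as expected, a positive anomaly centred over the relief; the improved formulas in the abstract aim to reduce the truncation error of exactly this kind of series evaluation.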
Two inversion parameters, namely the reference depth and the density contrast are essential for the Moho estimation based on the gravity field, and they need to be determined in advance to obtain correct results. Therefore, the Moho estimates derived from existing seismic studies (Stolk et al., 2013) are used to reduce the non-uniqueness of the gravity inversion and to determine these parameters by searching for the maximum correlation between the gravity-inverted and seismic-derived Moho depths.
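The parameter search by correlation can be sketched as follows. This toy uses a one-term downward-continuation inversion with a low-pass cutoff (`kcut`) to stabilise the exponential factor; the cutoff, grid, and candidate depths are assumptions for illustration, not the authors' procedure:

```python
import numpy as np

G_CONST = 6.674e-11

# Synthetic "true" Moho relief and its gravity effect (first-order Parker term)
n, dx = 256, 500.0
x = np.arange(n) * dx
z0_true, drho = 35e3, 400.0
relief = 3e3 * np.exp(-((x - 64e3) / 12e3) ** 2)
k = np.abs(2 * np.pi * np.fft.fftfreq(n, d=dx))
grav = np.fft.ifft(2 * np.pi * G_CONST * drho
                   * np.exp(-k * z0_true) * np.fft.fft(relief)).real
seismic_moho = z0_true - relief            # stand-in for seismic Moho depths

def invert_relief(grav, z0, drho, kcut=4e-4):
    """First-order interface inversion, band-limited to |k| < kcut."""
    spec = np.fft.fft(grav)
    spec[k >= kcut] = 0.0                  # low-pass before downward continuation
    spec = spec * np.exp(np.minimum(k, kcut) * z0) / (2 * np.pi * G_CONST * drho)
    return np.fft.ifft(spec).real

def corr(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# Grid search: pick the reference depth maximising the correlation between
# gravity-inverted and seismic-derived Moho depths
candidates = np.arange(20e3, 50e3 + 1, 5e3)
scores = [corr(z0 - invert_relief(grav, z0, drho), seismic_moho)
          for z0 in candidates]
best_z0 = candidates[int(np.argmax(scores))]
```

Because the downward-continuation filter is nonlinear in the reference depth, the correlation peaks near the depth that generated the data, which is the property the abstract exploits.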
Another critical issue is to remove beforehand the gravity effects of other factors, which affect the observed gravity field. In addition to the topography, the gravity effects of the sedimentary layer and crystalline crust are removed based on existing crustal models, while the upper mantle impact is determined based on the seismic tomography model.
The inversion results show that the Moho structure under the Tibetan Plateau is very complex, with depths varying from about 30–40 km in the surrounding basins (e.g., the Ganges basin, the Sichuan basin, and the Tarim basin) to 60–80 km within the plateau. This considerable difference of up to 40 km in Moho depth reveals the substantial uplift and thickening of the crust in the Tibetan Plateau.
Furthermore, two visible "Moho depression belts" are observed within the plateau, with the maximum Moho deepening along the Indus-Tsangpo Suture and along the northern margin of Tibet bounding the Tarim basin, and a relatively shallow Moho in central Tibet between them. The southern "belt" is likely formed in a compressional environment, where the Indian plate underthrusts northwards beneath the Tibetan Plateau, while the northern one could be formed by the southward underthrust of the Asian lithosphere beneath Tibet.
Stolk, W., Kaban, M., Beekman, F., Tesauro, M., Mooney, W. D., & Cloetingh, S. (2013). High resolution regional crustal models from irregularly distributed data: Application to Asia and adjacent areas. Tectonophysics, 602, 55-68. https://doi.org/10.1016/j.tecto.2013.01.022
How to cite:
Zhao, G., Liu, J., Chen, B., and K. Kaban, M.: Moho beneath Tibet based on a joint analysis of gravity and seismic data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7340, https://doi.org/10.5194/egusphere-egu2020-7340, 2020
Nils-Peter Finger, Mikhail Kaban, Magdala Tesauro, Carina Haeger, Walter Mooney, and Maik Thomas
We present an integrated model of the cratonic lithosphere of South America. Gravity and seismic data were jointly analyzed using mineral physics constraints to assess the state and evolution of the cratonic roots in South America in terms of temperature, density and composition. At the cratons, our model enables the separation of two counteracting effects: increased density due to cooling with age and decreased density due to depletion of iron. The depletion of iron can be described by the Mg#, which gives the fraction of Mg2+ among the divalent cations. A new crustal model (including depth to the Moho) based on existing seismic data was used to correct the gravity field for crustal effects and to uncover the gravity signal of the mantle. In addition, residual topography was calculated as a measure of the part of topography not balanced by the crustal density variations and depth to the Moho. Temperatures within the lithospheric mantle were estimated based on seismic velocities and mineral physics equations, initially assuming a juvenile mantle composition (Mg# of 89). The residual fields were corrected for the respective effects. In the subsequent inversion of residual gravity and topography, we determined additional density variations, which can be interpreted as compositional. Furthermore, these results were employed to recompute the upper mantle temperatures, taking into account possible compositional changes in the cratonic roots. Through this iterative procedure, a consistent thermo-compositional model of the upper mantle has been obtained. Negative compositional density variations imply depletion of iron, leading to higher Mg#s. The highest depletion occurs in the Amazonas and São Francisco Cratons, reaching Mg# values of up to 90 in the cratons' centers. At the same time, their centers show very low temperatures, down to 600 °C at a depth of 100 km, and they stay below 1300 °C even at a depth of 200 km, indicating deep lithospheric roots.
Higher temperatures are found in the Andean forelands and along the Trans-Brasiliano Lineament (TBL), which divides the Amazonas and São Francisco Cratons. Compositional density variations indicate little to no depletion in the Amazonas Craton below a depth of 100 km, while the São Francisco Craton still shows depletion at 200 km depth (Mg# up to 89.5). Slightly negative compositional density variations southwest of the São Francisco Craton also persist to depths of 200 km, marking the Paranapanema cratonic fragment.
How to cite:
Finger, N.-P., Kaban, M., Tesauro, M., Haeger, C., Mooney, W., and Thomas, M.: A Consistent Thermo-Compositional Model of the South American Cratonic Lithosphere from Integrated Inversion of Gravity and Seismic Data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22408, https://doi.org/10.5194/egusphere-egu2020-22408, 2020
Inverting satellite and airborne magnetic data with a common model is challenging due to the spectral gap between the data sets, but is needed to provide meaningful models of lithospheric magnetisation.
Here, we present a step-wise approach in which spherical prisms (tesseroids) are first used for a global magnetic inversion of satellite-acquired lithospheric field models, and airborne data are then inverted in their suitable spectral range to add detail. For the synthetic test, the susceptibility model of Hemant (2003) was used as a starting point to calculate the spherical harmonic model of each tesseroid in the model. The resulting spherical harmonic coefficients were inverted for magnetic susceptibility in the global model, whose geometry is based on seismic or gravity observations. The projected gradient method is used to avoid negative susceptibilities in the result. After the global inversion, a high-resolution, tile-wise local inversion of synthetic airborne data in a different wavelength range is performed to further increase resolution.
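The projected gradient idea, gradient descent on the data misfit followed by projection onto the nonnegative set, can be sketched as below. The kernel matrix and step-size rule are illustrative assumptions standing in for the tesseroid sensitivities:

```python
import numpy as np

def projected_gradient(G, d, step=None, n_iter=5000):
    """Minimize ||G m - d||^2 subject to m >= 0 by projected gradient descent."""
    if step is None:
        step = 1.0 / np.linalg.norm(G, 2) ** 2   # 1/L, L = Lipschitz constant
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        grad = G.T @ (G @ m - d)
        m = np.maximum(m - step * grad, 0.0)     # project onto m >= 0
    return m

rng = np.random.default_rng(42)
G = np.abs(rng.normal(size=(80, 40)))            # hypothetical positive kernel
m_true = np.maximum(rng.normal(size=40), 0.0)    # nonnegative "susceptibilities"
d = G @ m_true
m_est = projected_gradient(G, d)
rel_misfit = np.linalg.norm(G @ m_est - d) / np.linalg.norm(d)
```

The projection step guarantees every iterate, and hence the result, is nonnegative, which is the constraint the abstract imposes on susceptibility.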
The approach is applied to the Swarm-derived LCS-1 field model and to selected areas with high-resolution aeromagnetic coverage.
How to cite:
Baykiev, E. and Ebbing, J.: Global and local high-resolution magnetic field inversion using spherical harmonic models of individual sources, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-170, https://doi.org/10.5194/egusphere-egu2020-170, 2019
Ludovic Jeanniot, Cedric Thieulot, Bart Root, John Naliboff, and Wim Spakman
The mass-density distribution of the Earth drives mantle convection and plate tectonics but is poorly known. We aim to predict gravity fields as a constraint for geodynamical modelling. In order to compute synthetic Earth gravity, one must define a spherical geometry filled with a density model. Density models for the whole mantle down to the CMB come from tomographic models, which therefore require converting seismic wave velocities to density using a scaling factor.
We use a discretised integration method to compute gravity acceleration, gravity anomalies, potential and gradients globally in the state-of-the-art finite element code ASPECT.
Three density models are tested separately: density fields obtained from the SL2013 and S40RTS tomographic models for the deep mantle, and the density model CRUST1.0 for the thin upper lithospheric layer. We combine these three datasets into a composite model, which is compared to the global seismic model LLNL-G3D-JPS of Simmons et al. (2015). We test the sensitivity of the gravity predictions to the choice of scaling factor for converting shear wave velocity to density, and find that the scaling factor profile has a major impact on the predicted gravity.
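The velocity-to-density conversion can be sketched in a few lines. The depth-dependent scaling profile R(z) below is a placeholder assumption (weaker coupling in the shallow mantle, where compositional effects matter), not the profiles tested in the study:

```python
import numpy as np

def vs_to_density(dlnvs, depth_km, ref_density):
    """Convert relative shear-velocity anomalies dlnVs to density anomalies
    (kg/m^3) using a depth-dependent scaling factor R = dln(rho)/dln(Vs)."""
    # Hypothetical profile: reduced scaling above 250 km depth
    R = np.where(depth_km < 250, 0.1, 0.3)
    return ref_density * R * dlnvs

depth = np.array([100.0, 300.0, 600.0])       # km
dlnvs = np.array([0.02, -0.01, 0.005])        # +2%, -1%, +0.5% dVs/Vs
rho_ref = np.array([3300.0, 3400.0, 3500.0])  # reference density, kg/m^3
drho = vs_to_density(dlnvs, depth, rho_ref)
```

Because the predicted gravity is linear in density, any change to the R(z) profile propagates directly into the synthetic gravity field, which is why the profile choice matters so much.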
Finally, we present early results of the gravity field prediction for two local areas, the Indian-Tibet plate boundary and the Mediterranean Sea. Gravity predictions are compared to satellite gravity.
How to cite:
Jeanniot, L., Thieulot, C., Root, B., Naliboff, J., and Spakman, W.: On the accuracy of gravity fields obtained with Newton integrals on a hollow sphere, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17868, https://doi.org/10.5194/egusphere-egu2020-17868, 2020
We investigate the use of cross-validation (CV) techniques to estimate the accuracy of equivalent-source (also known as equivalent-layer) models for interpolation and processing of potential-field data. Our preliminary results indicate that some common CV algorithms (e.g., random permutations and k-folds) tend to overestimate the accuracy. We have found that blocked CV methods, where the data are split along spatial blocks instead of randomly, provide more conservative and realistic accuracy estimates. Beyond evaluating an equivalent-source model's performance, cross-validation can be used to automatically determine configuration parameters, like source depth and amount of regularization, that maximize prediction accuracy and avoid over-fitting.
Widely used in gravity and magnetic data processing, the equivalent-source technique consists of a linear model (usually point sources) used to predict the observed field at arbitrary locations. Upward-continuation, interpolation, gradient calculations, leveling, and reduction-to-the-pole can be performed simultaneously by using the model to make predictions (i.e., forward modelling). Likewise, the use of linear models to make predictions is the backbone of many machine learning (ML) applications. The predictive performance of ML models is usually evaluated through cross-validation, in which the data are split (usually randomly) into a training set and a validation set. Models are fit on the training set and their predictions are evaluated using the validation set using a goodness-of-fit metric, like the mean square error or the R² coefficient of determination. Many cross-validation methods exist in the literature, varying in how the data are split and how this process is repeated. Prior research from the statistical modelling of ecological data suggests that prediction accuracy is usually overestimated by traditional CV methods when the data are spatially auto-correlated. This issue can be mitigated by splitting the data along spatial blocks rather than randomly. We conducted experiments on synthetic gravity data to investigate the use of traditional and blocked CV methods in equivalent-source interpolation. We found that the overestimation problem also occurs and that more conservative accuracy estimates are obtained when applying blocked versions of random permutations and k-fold. Further studies need to be conducted to generalize these findings to upward-continuation, reduction-to-the-pole, and derivative calculation.
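The blocked splitting described above can be sketched with plain numpy. This is a generic illustration (the block size, fold count, and hash of block indices are assumptions), not the implementation in Verde:

```python
import numpy as np

def blocked_kfold(coords, spacing, n_folds=5, seed=0):
    """Yield (train, test) index arrays where whole spatial blocks, not
    individual points, are assigned to folds."""
    # Label each point with the block it falls in
    block_x = np.floor(coords[:, 0] / spacing).astype(int)
    block_y = np.floor(coords[:, 1] / spacing).astype(int)
    labels = block_x * 100000 + block_y   # combine indices into one label
    blocks = np.unique(labels)
    rng = np.random.default_rng(seed)
    rng.shuffle(blocks)
    for fold in np.array_split(blocks, n_folds):
        test = np.isin(labels, fold)
        yield np.where(~test)[0], np.where(test)[0]

coords = np.random.default_rng(1).uniform(0, 10_000, size=(500, 2))  # meters
folds = list(blocked_kfold(coords, spacing=2_000))
```

Because neighbouring (spatially auto-correlated) points end up on the same side of the split, the validation points are genuinely "new" locations, which is what yields the more conservative accuracy estimates.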
Open-source software implementations of the equivalent-source and blocked cross-validation (in progress) methods are available in the Python libraries Harmonica and Verde, which are part of the Fatiando a Terra project (www.fatiando.org).
How to cite:
Uieda, L. and Soler, S.: Evaluating the accuracy of equivalent-source predictions using cross-validation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15729, https://doi.org/10.5194/egusphere-egu2020-15729, 2020
We present a new strategy for gravity and magnetic data interpolation and processing. Our method is based on the equivalent layer technique (EQL) and produces more accurate interpolations when compared with similar EQL methods. It also reduces the computation time and memory requirements, both of which have been severe limiting factors.
The equivalent layer technique (also known as equivalent source, radial basis functions, or Green’s functions interpolation) is used to predict the value of gravity and magnetic fields (or transformations thereof) at any point based on the data gathered on some observation points. It consists in estimating a source distribution that produces the same field as the one measured and using this estimate to predict new values. It generally outperforms other general-purpose 2D interpolators, like the minimum curvature or bi-harmonic splines, because it takes into account the height of measurements and the fact that these fields are harmonic functions. Nevertheless, defining a layout for the source distribution used by the EQL is not trivial and plays an important role in the quality of the predictions.
The most widely used source distributions are: (a) a regular grid of point sources and (b) one point source beneath each observation point. We propose a new source distribution: (c) divide the area into blocks, calculate the average location of observation points inside each block, and place one point source beneath each average location. This produces a smaller number of point sources in comparison with the other source distributions, effectively reducing the computational load. Traditionally, the source points are located: (i) all at the same depth or (ii) each source point at a constant relative depth beneath its corresponding observation point. Besides these two, we also considered (iii) a variable relative depth for each source point proportional to the median distance to its nearest neighbours. The combination of source distributions and depth configurations leads to seven different source layouts (the regular grid is only compatible with the constant depth configuration).
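A minimal numpy sketch of the block-averaged layout (c) with variable relative depth (iii) follows. Using only the single nearest neighbour (rather than a median over several neighbours) and the `depth_factor` value are simplifications for illustration:

```python
import numpy as np

def block_averaged_sources(x, y, z, spacing, depth_factor=1.0):
    """One source beneath the average location of the observation points in
    each block; source depth set relative to neighbouring sources."""
    bx = np.floor(x / spacing).astype(int)
    by = np.floor(y / spacing).astype(int)
    labels = bx * 100000 + by
    sx, sy, sz = [], [], []
    for lbl in np.unique(labels):
        inside = labels == lbl
        sx.append(x[inside].mean())
        sy.append(y[inside].mean())
        sz.append(z[inside].mean())
    sx, sy, sz = map(np.asarray, (sx, sy, sz))
    # Variable relative depth: proportional to nearest-neighbour distance
    dists = np.hypot(sx[:, None] - sx[None, :], sy[:, None] - sy[None, :])
    np.fill_diagonal(dists, np.inf)
    nearest = dists.min(axis=1)
    return sx, sy, sz - depth_factor * nearest

rng = np.random.default_rng(2)
x = rng.uniform(0, 10_000, 300)
y = rng.uniform(0, 10_000, 300)
z = rng.uniform(100, 200, 300)   # observation heights (positive upward)
sx, sy, sz = block_averaged_sources(x, y, z, spacing=1_000)
```

With one source per occupied block, the linear system shrinks from one row-and-column per datum to one per block, which is where the reported memory and speed gains come from.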
We have scored the performance of each configuration by interpolating synthetic ground and airborne gravity data, and comparing the interpolation against the true values of the model. The block-averaged source layout (c) with variable relative depth (iii) produces more accurate interpolation results (R² of 0.97 versus R² of 0.63 for the traditional grid layout) in less time than the alternatives (from 2 to 10 times faster on our test cases). These results are consistent between ground and airborne survey layouts. Our conclusions can be extrapolated to other applications of equivalent layers, such as upward continuation, reduction-to-the-pole, and derivative calculation. What is more, we expect that these optimizations can benefit similar spatial prediction problems beyond gravity and magnetic data.
The source code developed for this study is based on the EQL implementation available in Harmonica (fatiando.org/harmonica), an open-source Python library for modelling and processing gravity and magnetic data.
How to cite:
Soler, S. R. and Uieda, L.: A better strategy for interpolating gravity and magnetic data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-549, https://doi.org/10.5194/egusphere-egu2020-549, 2019
Studies of the Earth's interior structure are among the most complex topics in modern science, and the integration of different geophysical methods plays a key role in tackling the problem effectively. In the last decade, the capabilities of potential field methods have grown with the development of advanced digital technologies: the improved resolution and accuracy of gravity and magnetic field measurements made by modern equipment make it possible to build more detailed geological models. The different tectonic and structural elements interpreted in such models produce potential field signals with different spectral characteristics. Like other geophysical signals, potential fields are spatially non-stationary: their frequency content may change from one signal sample to another, in particular with the spatial location of the sample. In this case, approaches to gravity and magnetic field analysis based on the Fourier transform, or on decomposition into harmonic functions, can lead to incorrect results. One way to address this challenge is to use wavelet-transform-based algorithms, since these transforms do not assume stationarity and each basis function is localized in the space domain.
In gravity and magnetic data analysis it is beneficial to use wavelets based on partial derivatives of the Poisson kernel, which correspond to derivatives of the gravity potential of a point source. The application of Poisson wavelets in potential field data analysis began in the 1990s and has predominantly been aimed at studying singularity points of gravity and magnetic fields during data interpretation.
Similar to Fourier-based potential field techniques, it is possible to construct a number of data filtering algorithms based on Poisson wavelets. The current work demonstrates that algorithms based on Poisson wavelets can be constructed for transforming profile and spatially gridded gravity and magnetic data, e.g. for the calculation of equivalent density and magnetization distributions, upward and downward continuation, reduction to the pole and many other filters that take the spatial distribution of the signal into account.
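One of the filters named above, upward continuation, is exactly convolution with the Poisson kernel, which in the wavenumber domain is the factor e^(-|k|h). A minimal 1D sketch (the two-frequency test profile is an assumption for illustration):

```python
import numpy as np

def upward_continue(data, dx, height):
    """Upward-continue an evenly sampled potential-field profile by `height`
    using the wavenumber-domain Poisson filter e^(-|k| h)."""
    k = np.abs(2 * np.pi * np.fft.fftfreq(data.size, d=dx))
    return np.fft.ifft(np.fft.fft(data) * np.exp(-k * height)).real

x = np.linspace(0, 50e3, 512, endpoint=False)   # 50 km profile
# long-wavelength (5 km) plus short-wavelength (500 m) components
profile = np.sin(2 * np.pi * x / 5e3) + 0.2 * np.sin(2 * np.pi * x / 500)
up = upward_continue(profile, dx=x[1] - x[0], height=1e3)
```

After continuation to 1 km height, the 500 m component is suppressed by roughly e^(-12.6) while the 5 km component survives attenuated by e^(-1.26), illustrating the scale-selective (wavelet-like) behaviour of the Poisson kernel.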
Wavelet transforms make it possible to account for the spatially non-stationary nature of geophysical signals, and wavelet-based techniques allow potential field data interpretation to be carried out effectively and consistently in a variety of geologic and tectonic settings.
How to cite:
Kuznetsov, K., Andrey, B., and Lygin, I.: Gravity and magnetic data analysis based on Poisson wavelet-transforms, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-872, https://doi.org/10.5194/egusphere-egu2020-872, 2019
Baoliang Lu, Tao Ma, Shengqing Xiong, and Wanyin Wang
Traditional gravity and magnetic correspondence analysis tends to yield high correlations outside the field source area. To overcome this disadvantage, we propose a new method for identifying source position and attributes based on the similarity and vertical derivative of the potential fields. In this method, we put forward a new gravity and magnetic correlation parameter (GMCP), which can effectively reduce the mapped extent of a potential field source and indicate the field intensity. The distribution of the non-zero areas of GMCP reflects the size of the source, and the sign of GMCP reflects the source attribute. When GMCP is greater than zero, the correlation is positive, indicating high-density and high-magnetization or low-density and low-magnetization homologous bodies in the region; when GMCP is less than zero, the correlation is negative, indicating high-density and low-magnetization or low-density and high-magnetization homologous bodies. Where GMCP goes to zero, there is no gravity-magnetic homologous geological body. Tests on complex models with different noise levels, and processing of real data from the South China Sea Basin, show the correctness and validity of the proposed identification method.
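The sign logic above can be illustrated with a generic sliding-window correlation of gravity and magnetic profiles. This is a stand-in for GMCP, whose exact definition (similarity combined with vertical derivatives) differs in detail; the window size and test anomalies are assumptions:

```python
import numpy as np

def windowed_correlation(a, b, half_width):
    """Pearson correlation of two profiles in a sliding window, a generic
    stand-in for a gravity/magnetic correlation parameter."""
    n = a.size
    out = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        wa = a[lo:hi] - a[lo:hi].mean()
        wb = b[lo:hi] - b[lo:hi].mean()
        denom = np.sqrt((wa @ wa) * (wb @ wb))
        out[i] = (wa @ wb) / denom if denom > 0 else 0.0
    return out

x = np.linspace(-1, 1, 201)
gravity = np.exp(-(x / 0.2) ** 2)      # high-density body
magnetic = -np.exp(-(x / 0.2) ** 2)    # same body, low magnetization
corr = windowed_correlation(gravity, magnetic, half_width=15)
```

Over a high-density, low-magnetization body the windowed correlation is strongly negative, matching the GMCP < 0 case in the abstract; a homologous high-density, high-magnetization body would flip the sign.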
How to cite:
Lu, B., Ma, T., Xiong, S., and Wang, W.: A new source location and attribute recognition method based on correlation analysis of gravity and magnetic anomaly, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1713, https://doi.org/10.5194/egusphere-egu2020-1713, 2019
It is vital to quickly and effectively determine the extent and depth of geological bodies from potential field data in gravity and magnetic surveys. In this study, three key techniques for determining the extent and depth of geological sources based on curvature attributes are examined: finding the optimal solutions to the objective function, delineating the edges of geological bodies, and picking out solutions. First, the optimal solution to the objective function, i.e., the key extraction algorithm for the curvature attribute, is studied. The Huber norm is introduced into the extraction algorithm, which detects the depth of the edges of geological bodies more accurately. Second, the normalized vertical derivative of the total horizontal derivative (NVDR-THDR) technique is introduced into the curvature attribute, which gives more continuous results for the edge positions and is more sensitive to small-scale tectonic structures. Finally, we study how to pick out the inversion solution, i.e., how to deal with the non-uniqueness of the inversion. Upward continuation to a certain height, which has a strict physical meaning, is introduced to suppress noise; the final inversion depth then equals the inverted depth minus the continuation height. A threshold on the average value of the potential field data is also introduced. Using these two technologies, solutions at non-source edge positions are eliminated, making the inversion solutions closer to the actual situation. Through these three key techniques, the accuracy, continuity, and recognition of small-scale structures in the inversion results are improved.
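A sketch of the NVDR-THDR edge detector used above is given below. Published definitions differ in detail; this version takes the total horizontal derivative, applies a wavenumber-domain |k| filter as the vertical derivative, and normalises row-wise, all of which are one plausible reading rather than the authors' exact formulation:

```python
import numpy as np

def nvdr_thdr(field, dx, dy, eps=1e-12):
    """Sketch of NVDR-THDR: total horizontal derivative, then its vertical
    derivative via the |k| filter, normalised along each row."""
    gy, gx = np.gradient(field, dy, dx)        # horizontal derivatives
    thdr = np.hypot(gx, gy)                    # total horizontal derivative
    ky = 2 * np.pi * np.fft.fftfreq(field.shape[0], dy)
    kx = 2 * np.pi * np.fft.fftfreq(field.shape[1], dx)
    kk = np.hypot(*np.meshgrid(ky, kx, indexing="ij"))
    vdr = np.fft.ifft2(np.fft.fft2(thdr) * kk).real   # vertical derivative
    return vdr / (np.abs(vdr).max(axis=1, keepdims=True) + eps)

# Toy anomaly: a smoothed step, i.e. a single body edge at x = 32
y, x = np.mgrid[0:64, 0:64]
field = np.tanh((x - 32) / 4.0)
out = nvdr_thdr(field, dx=1.0, dy=1.0)
```

The normalisation equalises the response of shallow and deep edges, which is why the technique picks up small-scale structure that the raw THDR would leave faint.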
Theoretical models are used to verify the effectiveness of the above key technologies, and the results show that all three perform well; combined models are used to verify the effectiveness of the optimized inversion method as a whole. Measured aeromagnetic data were then used to invert the edge depth of an intrusive rock body in a mining area, and the inversion results agree well with the rock depths revealed by boreholes.
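The abstract does not give implementation details, but the noise-suppression step it describes is standard potential-field processing: upward continuation in the wavenumber domain followed by subtracting the continuation height from the inverted depth. The sketch below illustrates that step only (the grid spacing, height, and function names are illustrative assumptions, not the authors' code):

```python
import numpy as np

def upward_continue(field, dx, height):
    """Upward-continue a gridded potential-field map by `height`.

    Continuation is done in the wavenumber domain: each Fourier
    component is attenuated by exp(-|k| * height), which damps
    short-wavelength noise before inversion.
    """
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    kk = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    spec = np.fft.fft2(field)
    return np.real(np.fft.ifft2(spec * np.exp(-kk * height)))

def corrected_depth(inverted_depth, continuation_height):
    """Depths estimated from the continued field refer to the raised
    observation level, so the actual source depth is the inverted
    depth minus the continuation height, as stated in the abstract."""
    return inverted_depth - continuation_height
```

A constant field is unchanged by continuation (only the zero wavenumber is present), while short-wavelength components are exponentially attenuated, which is exactly the noise-suppression property the method relies on.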
How to cite:
Liu, J., Wang, W., and Xiong, S.: The Method of Curvature Attribute applied in the Depth Inversion of the Geological Bodies Edge by Potential Field Data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7034, https://doi.org/10.5194/egusphere-egu2020-7034, 2020
Mohamed Abdrabou, Maha Abdelazeem, and Mohamed Gobashy
Geophysical data such as gravity data can be inverted to obtain a subsurface image that depicts the distribution of a physical property. Inversion of geophysical data therefore plays an effective role in interpreting measured geophysical anomalies in hydrocarbon and mineral applications. Interest in ore deposit exploration and sedimentary basin interpretation stems from their economic importance. Sedimentary basins produce lower-amplitude, negative gravity anomalies because their density is lower than that of the neighboring basement rocks, giving a negative density contrast. In prospecting for ore deposits, studying the spatial distribution of subsurface densities is of essential significance. A two-dimensional forward modelling strategy can be implemented by locating fixed-size rectangular cells directly beneath the observation points on a regular grid. The density vector of the subsurface rectangular cells is obtained by solving the 2D gravity inverse problem, optimizing an objective function (the difference between the observed and inverted residual gravity data). In this work, a hybrid algorithm merging the bat (BAT) algorithm with the preconditioned conjugate gradient (PCG) method is suggested as a means of inverting surface gravity anomalies for the subsurface density distribution. This hybrid minimization algorithm can exploit the advantages of both techniques: the BAT algorithm is used to construct an initial solution for the PCG technique, with the BAT optimizer providing a rapid build-up of the model and the PCG step refining the approximate solution. The algorithm was first applied to noise-free synthetic data and to data contaminated with three different levels of random noise, and good inversion results were obtained.
The validity and applicability of our algorithm are demonstrated on real residual gravity anomalies across the San Jacinto graben in southern California, USA, and the Sierra Mayor - Sierra Pinta graben, USA, and in prospecting for the Poshi Cu-Ni deposits, Xinjiang, northwest China. The obtained results are in excellent accordance with those produced by researchers in the published literature.
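To make the hybrid scheme concrete, here is a minimal sketch under stated assumptions: a deliberately simplified bat algorithm (no pulse-rate or loudness schedules) supplies a starting model, and a plain conjugate-gradient solve of the normal equations refines it. The 2D prism forward operator and the PCG preconditioner from the abstract are omitted; the random matrix `G` is a stand-in for the real sensitivity matrix, and all names are illustrative, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def misfit(G, m, d):
    """Sum-of-squares data misfit, the objective being minimized."""
    r = G @ m - d
    return float(r @ r)

def bat_initial_model(G, d, n_bats=30, n_iter=200,
                      fmin=0.0, fmax=2.0, loudness=0.9, bounds=(-2.0, 2.0)):
    """Simplified bat-algorithm search for a coarse starting model."""
    n = G.shape[1]
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_bats, n))      # bat positions (models)
    v = np.zeros_like(x)                      # bat velocities
    fit = np.array([misfit(G, xi, d) for xi in x])
    best = x[np.argmin(fit)].copy()
    for _ in range(n_iter):
        freq = fmin + (fmax - fmin) * rng.random(n_bats)
        v += (x - best) * freq[:, None]       # pull toward current best
        cand = np.clip(x + v, lo, hi)
        for i in range(n_bats):
            f_new = misfit(G, cand[i], d)
            if f_new < fit[i] and rng.random() < loudness:
                x[i], fit[i] = cand[i], f_new
        best = x[np.argmin(fit)].copy()
    return best

def cg_refine(G, d, m0, n_iter=100):
    """Conjugate-gradient solve of the normal equations G^T G m = G^T d,
    started from the bat-algorithm model (a preconditioner could be
    slotted in here to make this a true PCG step)."""
    A, b = G.T @ G, G.T @ d
    m = m0.copy()
    r = b - A @ m
    p = r.copy()
    for _ in range(n_iter):
        Ap = A @ p
        denom = p @ Ap
        if denom == 0.0:
            break
        alpha = (r @ r) / denom
        m += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < 1e-12:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return m

# toy linear problem: G maps cell densities to gravity observations
G = rng.normal(size=(20, 8))
m_true = rng.uniform(-1, 1, 8)
d = G @ m_true
m = cg_refine(G, d, bat_initial_model(G, d))
```

The division of labor mirrors the abstract: the global, derivative-free BAT stage explores the bounded model space cheaply, and the local CG stage then converges quickly because it starts near a good solution.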
How to cite:
Abdrabou, M., Abdelazeem, M., and Gobashy, M.: A Hybrid PCG- Bat Algorithm for 2D Gravity Inversion: Applications for Ore Deposits Exploration and Interpretation of Sedimentary Basins, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1116, https://doi.org/10.5194/egusphere-egu2020-1116, 2019