Understanding the Earth system's natural processes, especially in the context of global climate change, is widely recognised as an urgent and central research direction in need of further exploration. With the launch of new satellite platforms with high revisit times, combined with the increasing capability to collect repetitive ultra-high-resolution aerial images through unmanned aerial vehicles, the scientific community has new opportunities to develop and apply new image processing algorithms to solve old and new environmental issues.
The purpose of the proposed session is to bring together researchers working on this topic, aiming to highlight ongoing research and new applications in the field of satellite and aerial time-series imagery. The session focuses on studies aimed at the development or exploitation of novel satellite time-series processing algorithms, and on applications to different types of remote sensing data for investigating long-term processes in all branches of the Earth system (sea, ice, land, atmosphere).
The conveners encourage both applied and theoretical research contributions focusing on novel methods and applications of satellite and aerial time-series imagery across all disciplines of the geosciences, including both aerial and satellite platforms and data acquired in all regions of the electromagnetic spectrum.
vPICO presentations: Mon, 26 Apr
A breadboard for end-to-end (E2E) Marine Litter Optical Performance Simulations (ML-OPSI) is being designed in the frame of the ESA Open Space Innovation Platform (OSIP) Campaign to support Earth Observation (EO) scientists with the design of computational experiments for Operations Research. The ML-OPSI breadboard will estimate the Marine Litter signal at Top-Of-Atmosphere (TOA) from a set of Bottom-Of-Atmosphere (BOA) scenarios representing the various case studies considered by the community (e.g., windrows, frontal areas, river mouths, sub-tropical gyres), coming from synthetic (computer-simulated) data or from real observations. It is a modular, pluggable and extensible framework, promoting re-use, and can be adapted to different missions, sensors and scenarios.
The breadboard consists of (a) the OPSI components for the simulation, i.e. the process of using a model to study the characteristics of a system by manipulating variables and studying the properties of the model, allowing an evaluation to optimise performance and make predictions about the real system; and (b) the Marine Litter model components for the detection of marine litter. It shall consider the changes caused in the water reflectance and properties by marine litter, exploiting gathered information on plastic polymers, different viewing geometries, and naturally occurring atmospheric conditions. The modules of the breadboard include: a Scenario Builder Module (SB) with the maximum spatial resolution and best possible modelling of the relevant physical properties, which for spectral sensors could include high-spatial-resolution and high-spectral-density/resolution BOA radiance simulations in the optical to SWIR bands; a Radiative Transfer Module (RTM) transforming water-leaving reflectance to TOA reflectance for varying atmospheric conditions and observational geometries; a Scene Generator Module (SGM), which could use Sentinel-2, Landsat, PRISMA or any other pertinent instrument's data as reference; and a Performance Assessment Module (PAM) for ML detection that takes into account the variability of the atmosphere, the sunlight and skylight at BOA, the sea-surface roughness with trains of wind waves and swells, sea spray (whitecaps), air bubbles in the mixed layer, and marine litter dynamics, as well as instrumental noise, to assess marine litter detection feasibility.
Marine Litter scenarios of reference shall be built based on in-situ campaigns, to reflect the true littering conditions in each case, both in spatial distribution and in composition. The breadboard shall be validated over artificial targets at sea in field campaigns as relevant. This might include spectral measurements from ASD, on-field radiometers, and cameras on UAVs, concomitant with Copernicus Sentinel-2 acquisitions. Combined, they can be used to estimate the atmospheric contribution and assess the performance of the tested processing chain.
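The end-to-end chaining of the breadboard modules can be illustrated with a toy sketch. All function names, reflectance values and noise figures below are hypothetical stand-ins, not the actual ML-OPSI interfaces; the point is only the SB → RTM → PAM data flow.

```python
# Illustrative sketch of chaining breadboard-style modules.
# All names and numbers are assumptions for demonstration only.

def scenario_builder(litter_fraction):
    """SB: toy BOA reflectance for a pixel partially covered by litter."""
    water_refl, plastic_refl = 0.02, 0.15  # assumed BOA reflectances
    return (1 - litter_fraction) * water_refl + litter_fraction * plastic_refl

def radiative_transfer(boa_refl, atm_transmittance=0.8, path_refl=0.05):
    """RTM: simplistic BOA-to-TOA transformation (no adjacency effects)."""
    return path_refl + atm_transmittance * boa_refl

def performance_assessment(toa_signal, toa_background, noise_sigma=0.002):
    """PAM: signal-to-noise style detectability check."""
    return (toa_signal - toa_background) / noise_sigma

# Chain the modules for a 10% litter-covered pixel vs. clean water.
toa_litter = radiative_transfer(scenario_builder(0.10))
toa_water = radiative_transfer(scenario_builder(0.0))
snr = performance_assessment(toa_litter, toa_water)
```

A real RTM would of course resolve atmospheric state, geometry and sea-surface effects rather than a single transmittance factor.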
This activity contributes to the “Remote Sensing of Marine Litter and Debris” IOCCG task force.
How to cite: Emsley, S., Arias, M., Papadopoulou, T., and Martin-Lauzer, F.-R.: Simulating Marine Litter observations from space to support Operations Research, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-15661, https://doi.org/10.5194/egusphere-egu21-15661, 2021.
Earth Observation (EO) makes it possible to obtain information on key parameters characterizing interactions among Earth's system components, such as evaporative fraction (EF) and surface soil moisture (SSM). Notably, techniques utilizing EO data of land surface temperature (Ts) and vegetation index (VI) have shown promise in this regard. The present study presents an implementation of a downscaling method that combines the soil moisture product from SMOS with the Fractional Vegetation Cover provided by ESA's Sentinel-3 platform.
The applicability of the investigated technique is demonstrated for a period of two years (2017-2018) using in-situ data acquired from five CarboEurope sites and from all the sites available in the REMEDHUS soil moisture monitoring network, representing a variety of climatic, topographic and environmental conditions. Predicted parameters were compared against co-orbital ground measurements acquired from several European sites belonging to the CarboEurope ground observational network.
Results indicated a close agreement between all the inverted parameters and the corresponding in-situ data. SSM maps predicted from the “triangle” method showed a small bias, but a large scatter. The results of this study provide strong supportive evidence of the potential value of the methodology investigated herein for accurately deriving estimates of key parameters characterising land surface interactions that can meet the needs of fine-scale hydrological applications. Moreover, the applicability of the presented approach demonstrates the added value of the synergy between ESA operational products acquired from different satellite sensors, namely, in this case, SMOS and Sentinel-3. As the approach is not tied to any particular sensor, it can also be implemented with technologically advanced EO sensors launched recently or planned for launch.
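The core idea of Ts/VI "triangle" approaches can be sketched as a scaling of land surface temperature between a dry and a wet edge. This is a strong simplification of the full triangle inversion, and the edge temperatures and pixel values below are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch of the "triangle" concept: evaporative fraction (EF) rises
# as Ts approaches the wet (fully evaporating) edge. Edge values and pixel
# temperatures are hypothetical.

def evaporative_fraction(ts, t_dry, t_wet):
    ef = (t_dry - ts) / (t_dry - t_wet)
    return np.clip(ef, 0.0, 1.0)

ts = np.array([310.0, 300.0, 295.0])  # K, hypothetical pixels
ef = evaporative_fraction(ts, t_dry=315.0, t_wet=295.0)
```

In practice the dry and wet edges are derived per scene from the Ts/vegetation-cover scatter rather than fixed constants.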
Dr Petropoulos' participation in the present work has received funding from the European Union's Horizon 2020 research and innovation programme ENViSIoN under the Marie Skłodowska-Curie grant agreement No 752094.
How to cite: Piles, M., Pablos Hernandez, M., Vall-llossera, M., Portal, G., Sandric, I., Petropoulos, G. P., and Hristopulos, D.: Synergistic use of SMOS and Sentinel-3 for retrieving spatiotemporally estimates of surface soil moisture and evaporative fraction, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-16478, https://doi.org/10.5194/egusphere-egu21-16478, 2021.
Automatically extracting buildings from remote sensing imagery (RSI) plays an important role in urban planning, population estimation, disaster emergency response, etc. With the development of deep learning technology, convolutional neural networks (CNNs), which perform better than traditional methods, have been widely used in extracting buildings from RSI. But some problems remain. First, low-level features extracted by shallow layers and abstract features extracted by deep layers of the network cannot be fully fused, which often makes building extraction inaccurate, especially for buildings with complex structures, irregular shapes and small sizes. Second, many parameters need to be trained in a network, which occupies a lot of computing resources and consumes a lot of time in the training process. By analyzing the structure of the CNN, we found that abstract features extracted by deep layers with low spatial resolution contain more semantic information. These abstract features are conducive to determining the category of pixels but are not sensitive to the boundaries of the buildings. Since the stride of the convolution kernel and the pooling operation reduce the spatial resolution of feature maps, this paper proposes a simple and effective strategy: reduce the stride of the convolution kernel in one of the layers and reduce the number of convolution kernels to alleviate the above two bottlenecks. This strategy was applied to DeepLabv3+ and evaluated on both the WHU Building Dataset and the Massachusetts Buildings Dataset. Compared with the original DeepLabv3+, the results showed that this strategy yields better performance. On the WHU Building Dataset, the Intersection over Union (IoU) increased by 1.4% and the F1 score by 0.9%; on the Massachusetts Buildings Dataset, IoU increased by 3.31% and the F1 score by 2.3%.
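The effect of the stride on feature-map resolution follows directly from the convolution output-size formula. The sketch below uses illustrative sizes, not the actual DeepLabv3+ configuration.

```python
# Output spatial size of a convolution without padding is
# floor((n - k) / s) + 1; halving the stride roughly doubles the
# resolution of the resulting feature map. Sizes are illustrative.

def conv_output_size(n, kernel=3, stride=1):
    return (n - kernel) // stride + 1

# A 64x64 feature map through a 3x3 convolution:
size_stride2 = conv_output_size(64, kernel=3, stride=2)  # coarser map
size_stride1 = conv_output_size(64, kernel=3, stride=1)  # finer map
```

The finer map preserves more boundary detail, which is why the strategy helps small or irregular buildings, at the cost of larger intermediate tensors (mitigated in the paper by reducing the number of kernels).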
How to cite: Chen, M., Wu, J., and Tian, F.: Reducing the stride of the convolution kernel: a simple and effective strategy to increase the performance of CNN in building extraction from remote sensing image, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10783, https://doi.org/10.5194/egusphere-egu21-10783, 2021.
Deep learning has a good capacity for hierarchical feature learning from unlabeled remote sensing images. In this study, the simple linear iterative clustering (SLIC) method was improved to segment the image into good-quality superpixels. Then, a convolutional neural network (CNN) was used to extract water bodies from Sentinel-2 MSI data. In the proposed framework, the improved SLIC method obtained correct water-body boundaries by optimizing the initial clustering centers, designing a dynamic distance measure, and expanding the search space. In addition, unlike traditional water-body extraction methods, it can achieve multi-level water-body detection. Experimental results showed that this method had higher detection accuracy and robustness than other methods. This study was able to extract water bodies from remotely sensed images with deep learning and to conduct an accuracy assessment.
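For readers unfamiliar with SLIC, its distance measure combines spectral and spatial distance. The sketch below shows the standard SLIC form; the paper's "dynamic" distance measure and expanded search space are not reproduced here, and all values are hypothetical.

```python
import numpy as np

# Standard SLIC-style distance: spectral distance combined with spatial
# distance, weighted by compactness m and expected superpixel spacing s.
# Pixel/center tuples are (value, row, col); numbers are illustrative.

def slic_distance(pixel, center, m=10.0, s=20.0):
    d_color = abs(pixel[0] - center[0])
    d_space = np.hypot(pixel[1] - center[1], pixel[2] - center[2])
    return np.hypot(d_color, (m / s) * d_space)

d = slic_distance((0.30, 12, 8), (0.25, 10, 10), m=10.0, s=20.0)
```

A larger m favours compact, regular superpixels; a smaller m lets superpixels follow spectral boundaries such as water edges more closely.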
How to cite: Chou, S.: Deep learning for extracting water body from Sentinel-2 MSI imagery, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3740, https://doi.org/10.5194/egusphere-egu21-3740, 2021.
The Normalized Difference Vegetation Index (NDVI) data provided by the Landsat satellites offer a rich historical archive with a spatial resolution of 30 m. However, Landsat NDVI time-series data are quite discontinuous due to the 16-day revisit cycle, cloud contamination and other factors. Spatiotemporal data fusion technology has been proposed to reconstruct continuous Landsat NDVI time series by blending MODIS data with Landsat data. Although a number of spatiotemporal fusion algorithms have been developed during the past decade, most existing algorithms ignore the effective use of partially cloud-contaminated images. In this study, we present a new spatiotemporal fusion method that employs the cloud-free pixels in partially cloud-contaminated images to improve the performance of MODIS–Landsat data fusion by Correcting the inconsistency between MODIS and Landsat data in Spatiotemporal DAta Fusion (CSDAF). We tested the new method at three sites covered by different vegetation types: deciduous forests in the Shennongjia Forestry District of China (SNJ), evergreen forests in Southeast Asia (SEA), and irrigated farmland in the Coleambally irrigated area of Australia (CIA). Two experiments were designed. In experiment I, we first simulated different cloud coverages in cloud-free Landsat images and then used both CSDAF and the recently developed IFSDAF method to restore the “missing” pixels for quantitative assessment. Results showed that CSDAF performed better than IFSDAF, achieving smaller average Root Mean Square Error (RMSE) values (0.0767 vs. 0.1116) and larger average Structural SIMilarity index (SSIM) values (0.8169 vs. 0.7180). In experiment II, we simulated the scenario of “inconsistency” between MODIS and Landsat by adding different levels of noise to the MODIS and Landsat data.
Results showed that CSDAF was able to reduce the influence of the inconsistency between MODIS and Landsat data on their fusion to some extent. Moreover, CSDAF is simple and can be implemented on Google Earth Engine. We expect that CSDAF can potentially be used to reconstruct Landsat NDVI time-series data at regional and continental scales.
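The quantitative assessment in experiment I can be sketched with the RMSE metric on restored pixels. The NDVI values below are synthetic; SSIM, also reported above, requires local means and variances and is omitted for brevity.

```python
import numpy as np

# RMSE between restored and reference NDVI for simulated "missing" pixels.
# Values are synthetic, for illustration only.

def rmse(predicted, reference):
    return float(np.sqrt(np.mean((predicted - reference) ** 2)))

reference = np.array([0.62, 0.58, 0.71, 0.45])  # cloud-free "truth"
restored = np.array([0.60, 0.61, 0.69, 0.47])   # fusion output
error = rmse(restored, reference)
```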
How to cite: Ling, X. and Cao, R.: A new MODIS-Landsat fusion method to reconstruct Landsat NDVI time-series data, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-14869, https://doi.org/10.5194/egusphere-egu21-14869, 2021.
Urban green infrastructure provides various benefits known as ecosystem services, grouped into regulating, cultural, provisioning and supporting services. Among these benefits are the decrease of air temperature, increased humidity and mitigation of the urban heat island as regulating services; human–nature relations as cultural services; improved air quality and carbon sequestration as provisioning services; and photosynthesis, nutrient and water cycling as supporting services. The high intensity of the urbanization process across the last decades, coupled with weak legislative frameworks, resulted both in large areas affected by urban sprawl and in densification of the existing urban fabric. Both phenomena generated a loss of open spaces, especially green areas. In the context of the sustainable urbanization promoted by the HABITAT Agenda, knowledge of the distribution, size and quality of urban green areas represents a priority. The study aim is to identify small urban green areas at the local level at different moments in time for a dynamic evaluation. We focused on small urban green areas since they are scarcely analysed, even though their importance for urban quality of life is continuously increasing given the urbanization process. We used satellite imagery acquired by Planet satellite constellations, with a spatial resolution of 3.7 m and daily coverage, to extract green areas. The images were processed using Geographic Object-Based Image Analysis (GEOBIA) techniques implemented in Esri ArcGIS Pro. The spatial analysis we performed generated information about the distribution, surface, quality (based on NDVI) and dynamics of small urban green areas. The results are connected with the local-level development of the urban areas we analysed, but also with population consumption patterns for leisure services, housing, transport and other public utilities.
The analysis can represent a complementary method for extracting green areas at urban level and can support the data collection for calculating urban sustainability indicators.
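The NDVI-based quality layer mentioned above can be sketched in a few lines. The 0.3 threshold and the reflectance values are illustrative assumptions; the study's GEOBIA rule set in ArcGIS Pro is more elaborate than a single threshold.

```python
import numpy as np

# NDVI from red/NIR reflectance and a simple threshold for vegetated pixels.
# Threshold and values are illustrative, not the study's actual rule set.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

nir = np.array([0.40, 0.35, 0.10])   # hypothetical NIR reflectance
red = np.array([0.08, 0.30, 0.09])   # hypothetical red reflectance
green_mask = ndvi(nir, red) > 0.3    # candidate green-area pixels
```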
How to cite: Popa, A.-M., Onose, D. A., Sandric, I. C., Gradinaru, S. R., and Gavrilidis, A. A.: Dynamic evaluation of small urban green areas at local level using GEOBIA, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10374, https://doi.org/10.5194/egusphere-egu21-10374, 2021.
Long time series of essential climate variables (ECVs) derived from satellite data are key to climate research. SemantiX is a research project to establish, complement and expand Advanced Very High Resolution Radiometer (AVHRR) time series using Copernicus Sentinel-3 A/B imagery, making them and derived ECVs accessible using a semantic Earth observation (EO) data cube. The Remote Sensing Research Group at the University of Bern holds one of the longest European time series of AVHRR imagery (1981–present). Data cube technologies are a game changer for how EO imagery is stored, accessed, and processed. They also establish reproducible analytical environments for queries and information production and are able to better represent multi-dimensional systems. A semantic EO data cube is a concept recently coined by researchers at the University of Salzburg, referring to a spatio-temporal data cube containing EO data, where for each observation at least one nominal (i.e., categorical) interpretation is available and can be queried in the same instance (Augustin et al. 2019). Offering analysis-ready data (i.e., calibrated and orthorectified AVHRR Level 1c data) in a data cube along with semantic enrichment reduces barriers to conducting spatial analysis through time based on user-defined AOIs.
This contribution presents a semantic EO data cube containing selected ECV time series (i.e., snow cover extent, lake surface water temperature, vegetation dynamics) derived from AVHRR imagery (1981–2019), a temporal and spatial subset of AVHRR Level 1c imagery (updated after Hüsler et al. 2011) from 2016 until 2019, and, for the latter, semantic enrichment derived using the Satellite Image Automatic Mapper (SIAM). SIAM applies a fully automated, spectral rule-based routine based on a physical model to assign spectral profiles to colour names with known semantic associations; no user parameters are required, and the result is application-independent (Baraldi et al. 2010). Existing probabilistic cloud masks (Musial et al. 2014) generated by the Remote Sensing Research Group at the University of Bern are also included as additional data-derived information to support spatio-temporal semantic queries. This implementation is a foundational step towards the overall objective of combining climate-relevant AVHRR time series with Sentinel-3 imagery for the Austrian-Swiss alpine region, a European region that is currently experiencing serious changes due to climate change that will continue to create challenges well into the future.
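A semantic query against such a cube reduces to operations along the time axis of a labelled array. The tiny cube and three-label vocabulary below are invented for illustration; SIAM's actual colour-name vocabulary is much richer.

```python
import numpy as np

# Sketch of a semantic EO data cube query: each observation carries a
# categorical interpretation per pixel, so "how often was this pixel snow?"
# is a count along the time axis. Labels and the 4x2x2 cube are hypothetical.

SNOW, CLOUD, VEGETATION = 0, 1, 2
cube = np.array([[[SNOW, CLOUD], [VEGETATION, SNOW]],   # t=0
                 [[SNOW, SNOW], [VEGETATION, SNOW]],    # t=1
                 [[CLOUD, SNOW], [VEGETATION, CLOUD]],  # t=2
                 [[SNOW, SNOW], [SNOW, SNOW]]])         # t=3

snow_count = (cube == SNOW).sum(axis=0)  # per-pixel snow frequency
```

The same pattern extends to arbitrary AOIs and time windows by slicing the cube before aggregating.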
Going forward, this semantic EO data cube will be linked to a mobile citizen science smartphone application. For the first time, scientists in disciplines unrelated to remote sensing, students, as well as interested members of the public will have direct and location-based access to these long EO data time series and derived information. SemantiX runs from August 2020-2022 funded by the Austrian Research Promotion Agency (FFG) under the Austrian Space Applications Programme (ASAP 16) (project #878939) in collaboration with the Swiss Space Office (SSO).
How to cite: Augustin, H., Sudmanns, M., Weber, H., Baraldi, A., Wunderle, S., Neuhaus, C., Reichel, S., van der Meer, L., Hummer, P., and Tiede, D.: SemantiX: a cross-sensor semantic EO data cube to open and leverage AVHRR time-series and essential climate variables with scientists and the public , EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12722, https://doi.org/10.5194/egusphere-egu21-12722, 2021.
Nitrogen dioxide (NO2) is one of the main air quality pollutants of concern in many urban and industrial areas worldwide. Being emitted by fossil fuel burning activities, mainly road traffic, NO2 pollution is responsible for the degradation of population health and for the formation of secondary pollutants such as nitric acid and ozone. In the European region, almost 20 countries exceeded in 2017 the NO2 annual limit values imposed by European Commission Directive 2008/50/EC (EEA, 2019). Therefore, NO2 pollution monitoring and regulation is a necessary task to help decision makers search for a sustainable solution for environmental quality and the improvement of population health status. In this study, we propose a comparative analysis of the tropospheric NO2 column density spatial configuration over Europe between similar periods of 2019 and 2020, based on ESA Copernicus Sentinel-5P products. Our results highlight the NO2 pollution dynamics over the abrupt transition from normal conditions to the COVID-19 outbreak context, characterized by a short-time decrease of traffic intensities and industrial activities, a situation also reflected by national-level statistics on COVID-19 cases and economic indicators. The validation approach shows high correlation between TROPOMI-derived data and independent ground-based observations, with encouraging R2 values ranging between 0.5 and 0.75 at different locations.
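The validation statistic can be sketched as the squared Pearson correlation between collocated satellite and ground values. The NO2 values below are synthetic; a real comparison would first collocate the two datasets in space and time.

```python
import numpy as np

# R^2 as the squared Pearson correlation between satellite-derived NO2
# columns and ground-based observations. Values are synthetic.

def r_squared(satellite, ground):
    return float(np.corrcoef(satellite, ground)[0, 1] ** 2)

ground = np.array([10.0, 14.0, 18.0, 25.0, 30.0])     # hypothetical units
satellite = np.array([9.0, 15.0, 17.0, 23.0, 31.0])
r2 = r_squared(satellite, ground)
```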
How to cite: Vîrghileanu, M., Săvulescu, I., Mihai, B.-A., Nistor, C., and Dobre, R.: Using Sentinel-5P time-series products for Nitrogen Dioxide (NO2) Spatio-Temporal Analysis over Europe During the Coronavirus Pandemic Lockdown, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10882, https://doi.org/10.5194/egusphere-egu21-10882, 2021.
Cloud contamination is a serious obstacle to the application of Landsat data. Thick clouds can completely block land surface information and lead to missing values. The reconstruction of missing values in a Landsat cloud image requires a cloud and cloud shadow mask. In this study, we raise the issue that the quality of the quality assessment (QA) band in current Landsat products cannot meet the requirements of thick-cloud removal. To address this issue, we developed a new method (called Auto-PCP) to preprocess the original QA band, with the ultimate objective of improving the performance of cloud removal on Landsat cloud images. We tested the new method at four test sites and compared cloud-removed images generated using three different QA bands: the original QA band, the QA band modified by a dilation of two pixels around cloud and cloud shadow edges, and the QA band processed by Auto-PCP (“QA_Auto-PCP”). Experimental results, from both actual and simulated Landsat cloud images, show that QA_Auto-PCP achieved the best visual assessment of the cloud-removed images, the smallest RMSE values and the largest Structure SIMilarity index (SSIM) values. QA_Auto-PCP improves the performance of cloud removal because the new method substantially decreases omission errors of clouds and shadows in the original QA band while not increasing commission errors. Moreover, Auto-PCP is easy to implement and uses the same data as cloud removal, without additional image collections. We expect that Auto-PCP can further popularize cloud removal and advance the application of Landsat data.
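The "dilation of two pixels" baseline compared above can be sketched as two passes of a 4-neighbour binary dilation. The mask is synthetic, and plain NumPy is used here for self-containment (scipy.ndimage.binary_dilation would do this in one call).

```python
import numpy as np

# Two-pixel buffering of a cloud mask via repeated 4-neighbour dilation.
# The single-pixel "cloud" is synthetic, for illustration only.

def dilate_once(mask):
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]   # grow downward
    out[:-1, :] |= mask[1:, :]   # grow upward
    out[:, 1:] |= mask[:, :-1]   # grow right
    out[:, :-1] |= mask[:, 1:]   # grow left
    return out

cloud = np.zeros((7, 7), dtype=bool)
cloud[3, 3] = True                           # single cloudy pixel
buffered = dilate_once(dilate_once(cloud))   # two-pixel buffer
```

Buffering trades commission errors (flagging clear pixels) for fewer omission errors at cloud edges, which is the trade-off Auto-PCP aims to improve on.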
Keywords: Cloud detection, Cloud shadows, Cloud simulation, Cloud removal, MODTRAN
How to cite: Yang, B., Feng, Y., and Cao, R.: Improving the Quality Assessment band in Landsat cloud images for the application of cloud removal , EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-14320, https://doi.org/10.5194/egusphere-egu21-14320, 2021.
The climate is strongly affected by its interaction with clouds. Reducing major errors in climate predictions requires a much finer understanding of cloud physics than currently available. Current knowledge is based on empirical remote sensing data analyzed under the assumption that the atmosphere and clouds are made of very broad and uniform layers. To help overcome this problem, 3D scattering computed tomography (CT) has been suggested as a way to study clouds.
CT is a powerful way to recover the inner structure of three-dimensional (3D) volumetric heterogeneous objects. CT has extensive use in many research and operational domains. Aside from its common usage in medicine, CT is used for sensing geophysical terrestrial structures, atmospheric pollution and fluid dynamics. CT requires imaging from multiple directions, and in nearly all CT approaches the object is considered static during image acquisition. However, in many cases the object changes while the multi-view images are acquired sequentially. Thus, effort has been invested in expanding 3D CT to four-dimensional (4D) spatiotemporal CT. This effort has been directed at linear CT modalities, which are computationally easier to handle and have therefore been popular in medical imaging. However, these linear CT modalities do not apply to clouds: clouds constitute a scattering medium, and radiative transfer is therefore non-linear in the clouds' content.
This work focuses on the challenge of 4D scattering CT of clouds. Scattering CT of clouds requires high-resolution multi-view images from space. There are spaceborne and high-altitude systems that may provide such data, for example AirMSPI, MAIA, HARP and AirHARP. An additional planned system is the CloudCT formation, funded by the ERC. However, these systems are costly. Deploying them in large numbers to simultaneously acquire images of the same clouds from many angles can be impractical. Therefore, the platforms are planned to move above the clouds: a sequence of images is taken, in order to span and sample a wide angular breadth. However, the clouds evolve while the angular span is sampled.
We pose conditions under which this task can be performed. These regard temporal sampling and angular breadth, in relation to the correlation time of the evolving cloud. Then, we generalize scattering CT. The generalization seeks spatiotemporal recovery of the cloud extinction field in high resolution (10m), using data taken by a small number of moving cameras. We present an optimization-based method to reach this, and then demonstrate the method both in rigorous simulations and on real data.
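The inverse-problem structure of such a recovery can be illustrated with a linearized toy example. Real cloud tomography is non-linear (multiple scattering) and spatiotemporal, so the least-squares sketch below, with an invented extinction profile and random path geometry, only conveys the general idea of recovering an extinction field from multi-view measurements.

```python
import numpy as np

# Toy linearized CT: recover a 1D extinction profile from noisy path
# integrals by least squares. All values are synthetic; scattering CT of
# clouds requires a non-linear radiative-transfer forward model instead.

rng = np.random.default_rng(0)
x_true = np.array([0.0, 0.5, 1.2, 0.8, 0.1])     # extinction per cell
A = rng.random((12, 5))                           # path lengths per view
b = A @ x_true + rng.normal(0.0, 0.001, size=12)  # noisy measurements
x_est, *_ = np.linalg.lstsq(A, b, rcond=None)
```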
How to cite: Ronen, R., Schechner, Y. Y., and Eytan, E.: Spatiotemporal tomography based on scattered multiangular signals and its use for resolving evolving clouds using moving platforms, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10892, https://doi.org/10.5194/egusphere-egu21-10892, 2021.
Urbanization and the trend of people moving to cities often lead to problematic traffic conditions, which can be very challenging for traffic management. Poor traffic conditions can hamper the flow of people and goods, negatively affecting businesses through delays and the inability to estimate travel times and thus plan, as well as the environment and the health of the population due to increased fuel consumption and subsequent air pollution. Many cities have policies and rules to manage traffic, ranging from standard traffic lights to more dynamic and adaptable solutions involving in-road sensors or cameras to actively modify the duration of traffic lights, or even more sophisticated IoT solutions to monitor and manage conditions on a city-wide scale. Core to these technologies and to decision-making processes is the availability of reliable, and ideally real-time, data on traffic conditions. Yet many cities are still coping with a lack of good spatial and temporal data coverage, as many of these solutions require not only changes to the infrastructure but also large investments.
One approach is to exploit current and forthcoming advancements in Earth Observation (EO) satellite technologies. The biggest advantage is EO's great spatial coverage, ranging from a few km² to 100 km² per image at a spatial resolution down to 0.3 m, allowing for quick, city-spanning data collection. Furthermore, the availability of imaging sensors covering specific bands allows the constituent information within an image to be separated and leveraged.
In this respect, we present the findings of our work on multispectral image sets collected on three occasions in 2019 using the very high resolution WorldView-3 satellite. We apply a combination of machine learning and PCA methods to detect vehicles and derive their kinematic properties (e.g., movement, direction, speed), which is only possible with satellites whose design allows for short time lags between imaging in different spectral bands. As these data essentially constitute a time series, we will discuss how the results presented apply to the forthcoming WorldView Legion constellation of satellites, which will provide up to 15 revisits per day and thus near-real-time monitoring of traffic and its impact on the environment.
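The kinematic derivation exploits the short time lag between band acquisitions: a moving vehicle appears displaced between bands, and speed follows from displacement, ground sampling distance (GSD) and lag. The GSD and lag values below are illustrative assumptions, not WorldView-3 calibration values.

```python
# Speed from inter-band displacement: speed = displacement * GSD / lag.
# GSD and band lag are hypothetical, for illustration only.

def vehicle_speed_kmh(displacement_px, gsd_m=0.31, band_lag_s=0.2):
    return displacement_px * gsd_m / band_lag_s * 3.6  # m/s -> km/h

speed = vehicle_speed_kmh(displacement_px=5.0)
```

The direction of motion follows similarly from the displacement vector between the two band positions.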
How to cite: Duro, R., Neubauer, G., and Bojor, A.-I.: The potential of monitoring traffic conditions up to 15 times a day using sub-meter resolution EO images, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12831, https://doi.org/10.5194/egusphere-egu21-12831, 2021.
Processing, handling and visualising the large data volumes produced by satellite altimetry missions is a challenging task. A reference tool for the visualisation of satellite laser altimetry data is the OpenAltimetry platform, which provides altimetry-specific data from the Ice, Cloud, and land Elevation Satellite (ICESat) and ICESat-2 missions through a web-based interactive interface. However, by focusing only on altimetry data, that tool leaves out access to much other equally important information existing in the data products of both missions.
The main objective of the work reported here was the development of a new web-based tool, called ICEComb, that offers end users the ability to access all the available data from both satellite missions, visualise and interact with them on a geographic map, store the data records locally, and process and explore data in an efficient, detailed and meaningful way, thus providing an easy-to-use software environment for satellite laser altimetry data analysis and interpretation.
The proposed tool is intended to be used mainly by researchers and scientists to aid their work with ICESat and ICESat-2 data, offering users a ready-to-use system to rapidly access the raw collected data in a visually engaging way, without the need for prior understanding of the format, structure and parameters of the data products. In addition, the architecture of the ICEComb tool was developed with possible future expansion in mind, for which well-documented and standard languages were used in its implementation. This allows, e.g., extending its applicability to data from other satellite laser altimetry missions and integrating models that can be coupled with ICESat and ICESat-2 data, thus expanding and enriching the context of studies carried out with such data.
The use of the ICEComb tool is illustrated and demonstrated by its application to ICESat/GLAS measurements over Lake Mai-Ndombe, a large and shallow freshwater lake located within the Ngiri-Tumba-Maindombe area, one of the largest Ramsar wetlands of international importance, situated in the Cuvette Centrale region of the Congo Basin.
Keywords: Laser altimetry, ICESat/GLAS, software tool design, data visualization, Congo Basin.
Acknowledgement. This work was partially supported by the Portuguese Foundation for Science and Technology (FCT) through LARSyS − FCT Pluriannual funding 2020−2023.
How to cite: Silva, B., Guerreiro Lopes, L., and Campos, P.: ICEComb − A New Software Tool for Satellite Laser Altimetry Data Processing and Visualisation, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13727, https://doi.org/10.5194/egusphere-egu21-13727, 2021.
Satellite-based flood detection can enhance understanding of risk to humans and infrastructure, geomorphic processes, and ecological effects. Such application of optical satellite imagery has mostly been limited to the detection of water exposed to the sky, as plant canopies tend to obstruct the visibility of water at short electromagnetic wavelengths. This case study evaluates the utility of multi-temporal thermal infrared observations from Landsat 8 as a basis for detecting sub-canopy fluvial inundation that results in ambient temperature change.
We selected three flood events of 2016 and 2019 along sections of the Mississippi, Cedar, and Wapsipinicon Rivers located in Iowa, Minnesota, and Wisconsin, United States. Classification of sub-canopy water involved logical, threshold-exceedance criteria to capture thermal decline within channel-adjacent vegetated zones. Open water extent in the floods was mapped based on short-wave infrared thresholds determined parametrically from baseline (non-flooded) observations. Map accuracy was evaluated using higher-resolution (0.5–5.0 m) synchronic optical imagery.
Results demonstrate improved ability to detect sub-canopy inundation when thermal infrared change is incorporated: sub-canopy flood class accuracy was comparable to that of open water in previous studies. The multi-temporal open-water mapping technique yielded high accuracy as compared to similar studies. This research highlights the utility of Landsat thermal infrared data for monitoring riparian inundation and for validating other remotely sensed and simulated flood maps.
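The threshold-exceedance logic described above can be sketched as follows. This is a hypothetical illustration only: the array names, threshold values (`tir_drop`, `swir_max`) and the vegetated-zone mask are assumptions for the sketch, not values from the study.

```python
import numpy as np

def classify_flood(tir_baseline, tir_flood, swir_flood,
                   veg_mask, tir_drop=2.0, swir_max=0.05):
    """Label pixels as open water (1) or sub-canopy flood (2).

    Open water is taken where flood-scene SWIR reflectance falls below a
    threshold; sub-canopy flood where the thermal decline relative to the
    non-flooded baseline exceeds a threshold inside vegetated zones.
    """
    labels = np.zeros(tir_baseline.shape, dtype=np.uint8)
    # Open water: low SWIR reflectance during the flood scene.
    labels[swir_flood < swir_max] = 1
    # Sub-canopy flood: thermal decline within vegetated, non-open-water pixels.
    decline = tir_baseline - tir_flood
    labels[(decline > tir_drop) & veg_mask & (labels == 0)] = 2
    return labels

# Toy 2x2 scene (temperatures in kelvin, SWIR as reflectance)
base = np.full((2, 2), 300.0)
flood = np.array([[295.0, 300.0], [299.5, 296.0]])
swir = np.array([[0.10, 0.02], [0.10, 0.10]])
veg = np.array([[True, True], [True, False]])
print(classify_flood(base, flood, swir, veg))  # [[2 1] [0 0]]
```

In this toy scene the upper-left pixel shows a 5 K thermal decline under canopy (sub-canopy flood), the upper-right pixel is open water by the SWIR criterion, and the remaining pixels stay unflooded.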
How to cite: Storey, E., Krajewski, W., and Nikolopoulos, E.: Landsat thermal infrared to detect sub-canopy riparian flooding, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13762, https://doi.org/10.5194/egusphere-egu21-13762, 2021.
The use of remote sensing for mineral detection and lithological mapping has become a generally accepted augmentative tool in exploration. With the advent of multispectral sensors (e.g. ASTER, Landsat, Sentinel and PlanetScope) offering suitable wavelength coverage and bands in the Shortwave Infrared (SWIR) and Thermal Infrared (TIR) regions, such sensors have become increasingly efficient at routine lithological discrimination and mineral potential mapping. With this paradigm in mind, this project sought to evaluate the detection and mapping of vanadium-bearing magnetite, found in discordant bodies and magnetite layers, on the Eastern Limb of the Bushveld Complex. The Bushveld Complex hosts the world’s largest resource of high-grade primary vanadium in magnetitite layers, so the wide distribution of magnetite, its economic importance, and its potential as an indicator of many important geological processes warranted its delineation.
The detection and mapping of vanadium-bearing magnetite was evaluated using both traditional and advanced machine-learning classification algorithms. Prior to this study, few studies had examined the detection and exploration of magnetite using remote sensing, despite remote sensing tools being regularly applied to diverse aspects of the geosciences. Maximum Likelihood, Minimum Distance to Means, Artificial Neural Network, and Support Vector Machine classification algorithms were assessed for their respective abilities to detect and map magnetite using PlanetScope data in ENVI, QGIS, and Python. For each classification algorithm, a thematic landcover map was obtained and its accuracy assessed using an error matrix reporting user's and producer's accuracies, as well as kappa statistics.
The Maximum Likelihood classifier significantly outperformed the other techniques, achieving an overall classification accuracy of 84.58% and an overall kappa value of 0.79. Magnetite was accurately discriminated from the other thematic landcover classes, with a user's accuracy of 76.41% and a producer's accuracy of 88.66%. The erroneous classification of some mining-activity pixels as magnetite was common to all classification algorithms. Overall, the results of this study illustrate that remote sensing techniques are effective instruments for geological mapping and mineral investigation, especially for iron oxide mineralization in the Eastern Limb of the Bushveld Complex.
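The accuracy assessment described above (error matrix, user's and producer's accuracies, kappa) can be sketched generically as below. The class layout and counts in the example matrix are invented for illustration and are not the study's results.

```python
import numpy as np

def accuracy_report(cm):
    """Compute standard error-matrix statistics.

    Rows of cm are the mapped (classified) classes, columns the reference
    classes, so row sums give user's-accuracy denominators and column sums
    give producer's-accuracy denominators.
    """
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    overall = np.trace(cm) / total
    users = np.diag(cm) / cm.sum(axis=1)       # commission side
    producers = np.diag(cm) / cm.sum(axis=0)   # omission side
    # Cohen's kappa: agreement corrected for chance agreement.
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total ** 2
    kappa = (overall - pe) / (1 - pe)
    return overall, users, producers, kappa

# Hypothetical two-class error matrix (e.g. magnetite vs. other)
cm = [[50, 5],
      [10, 35]]
overall, users, producers, kappa = accuracy_report(cm)
print(round(overall, 3), round(kappa, 3))  # 0.85 0.694
```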
How to cite: Twala, M., Roberts, J., and Munghemezulu, C.: Use of multispectral remote sensing data to map magnetite bodies in the Bushveld Complex, South Africa: a case study of Roossenekal, Limpopo., EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-7932, https://doi.org/10.5194/egusphere-egu21-7932, 2021.
Lineament mapping is an important step in lithological and hydrothermal alteration mapping. It is an efficient research task that can form part of structural investigations and the identification of mineral ore deposits. The availability of optical as well as radar remote sensing data, such as Landsat 8 OLI, Terra ASTER and ALOS PALSAR data, allows lineament mapping at regional and national scales. The accuracy of the obtained results depends strongly on the spatial and spectral resolution of the data. The aim of this study was to compare Landsat 8 OLI, Terra ASTER, and radar ALOS PALSAR satellite data for automatic and manual lineament extraction. The LINE module of PCI Geomatica software was applied to the PC1 OLI, PC3 ASTER, and HH and HV polarization images to automatically extract geological lineaments. Manual extraction was achieved using an RGB color composite of the directionally filtered images N-S (0°), NE-SW (45°) and E-W (90°) of the OLI panchromatic band 8. The lineaments obtained from automatic and manual extraction were compared against the faults and photo-geological lineaments digitized from the existing geological map of the study area. The lineaments extracted from the PC1 OLI and ALOS PALSAR polarization images showed the best correlation with faults and photo-geological lineaments. The results indicate that the lineaments extracted from the HH and HV polarizations of ALOS PALSAR radar data, with 1499 and 1507 extracted lineaments respectively, were the most efficient for structural lineament mapping, along with the PC1 OLI image with 1057 lineaments.
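The directional filtering step used for manual extraction can be sketched as below: 3×3 kernels emphasizing N-S (0°), NE-SW (45°) and E-W (90°) edges, applied to a panchromatic band. The kernel weights are typical Prewitt-style examples chosen for illustration, not the exact filters used in the study.

```python
import numpy as np

# Directional edge-detection kernels (illustrative weights)
KERNELS = {
    "N-S":   np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]),   # vertical edges
    "NE-SW": np.array([[0, 1, 1], [-1, 0, 1], [-1, -1, 0]]),   # diagonal edges
    "E-W":   np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]]),   # horizontal edges
}

def directional_filter(band, kernel):
    """Valid-mode 2D correlation of a band with a 3x3 kernel."""
    h, w = band.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(band[i:i + 3, j:j + 3] * kernel)
    return out

# A vertical brightness step responds strongly to the N-S kernel.
band = np.tile([0, 0, 10, 10], (4, 1)).astype(float)
resp = directional_filter(band, KERNELS["N-S"])
print(resp)  # uniform strong response: [[30. 30.] [30. 30.]]
```

Stretching each directional response into one channel of an RGB composite then lets differently oriented structures appear in distinct colors for manual digitizing.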
Keywords: Remote Sensing, OLI, ALOS PALSAR, ASTER, Kerdous Inlier, Anti Atlas
How to cite: Jellouli, A., El Harti, A., Adiri, Z., Chakouri, M., El Hachimi, J., and Bachaoui, E. M.: Application of optical and radar data for lineaments mapping in Kerdous inlier of the Anti Atlas belt, Morocco, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-7895, https://doi.org/10.5194/egusphere-egu21-7895, 2021.
Fragmented cropland and marginal landholdings make it difficult to classify land use and to account for the different cropping and management practices adopted. In such settings, crop classification algorithms are hard to implement and tend to produce results with lower accuracy. Static imagery captured in the optical bands is often contaminated by cloud cover and fails to detect the phenological as well as structural changes occurring during crop growth. This is very common and most typical of Indian climatic conditions, where capturing temporal satellite images over the crop period during the monsoon is very challenging. The present study therefore applies a crop classification algorithm that utilizes the temporal patterns of synthetic aperture radar (SAR) data from Sentinel-1 to map the land use of an agricultural system that is fragmented, small and heterogeneous in nature. We used different polarizations of the Sentinel-1 data to develop temporal patterns of the different crops grown in a semi-arid region of India. An advanced classification algorithm, time-weighted dynamic time warping (TWDTW), was then employed to classify the cropland. Pixel-based image analysis was carried out and tested for its applicability to cropland mapping. In-situ datasets were collected from the study site to validate the classification outputs. The pixel-based TWDTW method achieved an overall accuracy of 63% and a Kappa coefficient of 0.58. The findings confirm that the pixel-based TWDTW algorithm has the potential to delineate croplands subjected to varying irrigation treatments and management practices using Sentinel-1 datasets.
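The core of TWDTW is ordinary dynamic time warping with a logistic penalty on the time lag between matched observations, which discourages aligning observations from different seasons. The sketch below is a minimal 1-D illustration; the logistic parameters (`alpha`, `beta`) and the toy backscatter series are illustrative assumptions, not values from the study.

```python
import numpy as np

def twdtw(x, tx, y, ty, alpha=0.1, beta=50.0):
    """TWDTW alignment cost between series x (at times tx, in days) and y (at ty).

    Local cost = |value difference| + logistic time weight, accumulated
    along the cheapest monotone warping path.
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            lag = abs(tx[i - 1] - ty[j - 1])
            w = 1.0 / (1.0 + np.exp(-alpha * (lag - beta)))  # logistic time weight
            cost = abs(x[i - 1] - y[j - 1]) + w
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy example: assign a pixel's temporal backscatter profile (dB) to the
# nearest reference crop pattern (crop names and values are made up).
t = np.array([0, 30, 60, 90])
pixel = np.array([-15.0, -12.0, -9.0, -11.0])
patterns = {"rice":   np.array([-15.0, -12.5, -9.5, -11.0]),
            "cotton": np.array([-9.0, -9.0, -14.0, -15.0])}
best = min(patterns, key=lambda k: twdtw(pixel, t, patterns[k], t))
print(best)  # rice
```

Classifying a full scene amounts to repeating this nearest-pattern assignment per pixel, typically with one reference pattern per crop and season.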
Keywords: crop classification, landuse, image analysis, Sentinel-1, TWDTW
How to cite: Moharana, Dr. S., Kambhammettu, Dr. B., Chintala, Mr. S., Sandhya Rani, Ms. A., and Avtar, Dr. R.: Improving the Classification Accuracy of Fragmented Cropland by using an Advanced Classification Algorithm, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-6438, https://doi.org/10.5194/egusphere-egu21-6438, 2021.