EGU24-10647, updated on 08 Mar 2024
https://doi.org/10.5194/egusphere-egu24-10647
EGU General Assembly 2024
© Author(s) 2024. This work is distributed under
the Creative Commons Attribution 4.0 License.

Enhancement of Land Use and Land Cover Mapping using Satellite Imagery through Machine Learning Techniques 

Achala Shakya
  • University of Petroleum and Energy Studies, School of Computer Science, Artificial Intelligence, Dehradun, India (shakyaachala@gmail.com)

With the large volume of satellite imagery now available for classification, many classification techniques have been developed over the years, each with its own limitations that restrict its use. Hence, a research method is proposed that combines hybrid techniques (Machine Learning and Deep Learning) with geospatial techniques to enhance land use and land cover mapping from satellite images. Furthermore, this study provides insights into integrated agricultural management in support of the UN's Sustainable Development Goals.

This study uses freely available satellite images, i.e., Sentinel-1 and Sentinel-2, to classify land use/land cover over an agricultural area in Hissar, India. The major crops grown in this area include paddy, maize, cotton, and pulses during the Kharif (summer) season, and wheat, sugarcane, mustard, gram, and peas during the Rabi (winter) season. The datasets for the study area were pre-processed using SNAP and ArcGIS software. After pre-processing, a comprehensive feature set was identified, consisting of polarimetric features, such as the elements of the covariance matrix and entropy/scattering angle (alpha), and traditional geometric features, such as shape, size, and area. Dimensionality reduction (e.g., PCA) was then applied to reduce the number of features to the most important ones. Using this feature set, a hybrid machine learning model was constructed and fine-tuned with ground-truth images, with the data split into training and test sets in a 70:30 ratio. The optical (Sentinel-2) and microwave (Sentinel-1) datasets were fused, and the quality of the fused image was evaluated using several fusion metrics, including Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS), Spectral Angle Mapper (SAM), Relative Average Spectral Error (RASE), Universal Image Quality Index (UIQI), Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Correlation Coefficient (CC). The resulting fused image was then classified using the hybrid technique. Classification accuracy is reported as overall accuracy and the kappa coefficient; a comparison of classification results shows that the hybrid technique performs best, with an overall accuracy of 92%.
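To illustrate how some of the listed fusion-quality metrics can be computed, the sketch below implements SAM, PSNR, and CC with NumPy. The function names and array shapes are assumptions for this example and not part of the study's actual workflow; ERGAS, RASE, UIQI, and SSIM would be computed analogously.

```python
import numpy as np

def spectral_angle_mapper(ref, fused):
    """Mean spectral angle (radians) between reference and fused pixel vectors.
    ref, fused: arrays of shape (H, W, bands). Hypothetical inputs for illustration."""
    r = ref.reshape(-1, ref.shape[-1]).astype(float)
    f = fused.reshape(-1, fused.shape[-1]).astype(float)
    num = np.sum(r * f, axis=1)
    den = np.linalg.norm(r, axis=1) * np.linalg.norm(f, axis=1) + 1e-12
    return float(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))

def psnr(ref, fused):
    """Peak signal-to-noise ratio in dB, using the reference image's peak value."""
    mse = np.mean((ref.astype(float) - fused.astype(float)) ** 2)
    return float(10.0 * np.log10(ref.max() ** 2 / mse))

def correlation_coefficient(ref, fused):
    """Pearson correlation coefficient between flattened reference and fused images."""
    return float(np.corrcoef(ref.ravel(), fused.ravel())[0, 1])
```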
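The following is a minimal sketch of the feature-reduction, 70:30 split, and accuracy/kappa evaluation steps described above, assuming per-pixel feature vectors and ground-truth labels have already been extracted. The random forest is only a stand-in, since the abstract does not specify the hybrid ML/DL architecture, and the data here are synthetic placeholders rather than the Hissar study-area data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Synthetic placeholders: X = per-pixel feature vectors (polarimetric + geometric),
# y = ground-truth class labels. Replace with the real extracted features.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = rng.integers(0, 5, size=5000)

# Reduce the feature set to its principal components (PCA), as in the abstract.
X_reduced = PCA(n_components=10).fit_transform(X)

# 70:30 train/test split, matching the ratio reported in the study.
X_train, X_test, y_train, y_test = train_test_split(
    X_reduced, y, test_size=0.3, stratify=y, random_state=42)

# Placeholder classifier standing in for the (unspecified) hybrid ML/DL model.
clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Report overall accuracy and the kappa coefficient, as in the abstract.
print("Overall accuracy:", accuracy_score(y_test, y_pred))
print("Kappa:", cohen_kappa_score(y_test, y_pred))
```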

How to cite: Shakya, A.: Enhancement of Land Use and Land Cover Mapping using Satellite Imagery through Machine Learning Techniques, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10647, https://doi.org/10.5194/egusphere-egu24-10647, 2024.