A Transformer-Based Model for Effective Representation of Geospatial Data and Context
- 1School of Geography and Earth Science, University of Glasgow, Glasgow, UK (2911670D@student.gla.ac.uk)
- 2Department of Geography, Florida State University, Tallahassee, USA (Ziqi.Li@fsu.edu)
Machine learning (ML) and artificial intelligence (AI) models have been increasingly adopted for geospatial tasks. However, geospatial data (such as points and raster cells) are often influenced by underlying spatial effects, and current model designs often lack adequate consideration of these effects. An efficient model structure for representing geospatial data and capturing the underlying complex spatial and contextual effects remains underexplored. To address this gap, we propose a Transformer-like encoder-decoder architecture that first represents geospatial data with respect to their corresponding geospatial context, and then decodes the representation for task-specific inference. The encoder consists of embedding layers that transform the input locations and attributes of geospatial data into meaningful embedding vectors. The decoder comprises task-specific neural network layers that map the encoder outputs to the final output. Spatial contextual effects are measured using explainable artificial intelligence (XAI) methods. We evaluate and compare the performance of our model against other model structures on both synthetic and real-world datasets for spatial regression and interpolation tasks. This work proposes a generalizable approach to better modeling and measuring complex spatial contextual effects, potentially contributing to efficient and reliable urban analytics applications that require geo-context information.
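A minimal NumPy sketch of the kind of pipeline the abstract describes — embedding layers for locations and attributes, a single attention step that mixes each point's embedding with its spatial context, and a task-specific regression head. All dimensions, layer shapes, and the single-head attention are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W, b):
    """Linear embedding layer: project raw inputs to d-dimensional vectors."""
    return x @ W + b

def attention(q, k, v):
    """Scaled dot-product attention over the geospatial context points."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Hypothetical sizes: n context points, 2-D coordinates, 3 attributes, 16-D embeddings
n, d_loc, d_attr, d = 8, 2, 3, 16

loc = rng.normal(size=(n, d_loc))    # point coordinates (e.g. projected x, y)
attr = rng.normal(size=(n, d_attr))  # point attributes

# Encoder: separate embeddings for location and attributes, summed per point
W_loc, b_loc = rng.normal(size=(d_loc, d)), np.zeros(d)
W_attr, b_attr = rng.normal(size=(d_attr, d)), np.zeros(d)
z = embed(loc, W_loc, b_loc) + embed(attr, W_attr, b_attr)

# Self-attention represents each point with respect to its geospatial context
h = attention(z, z, z)

# Decoder: task-specific head, here a linear layer for spatial regression
W_out = rng.normal(size=(d, 1))
y_hat = h @ W_out
print(y_hat.shape)
```

In practice such a model would be trained end to end (e.g. in a deep learning framework), with the attention weights offering one handle for the XAI-based measurement of spatial contextual effects mentioned above.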
How to cite: Deng, R., Li, Z., and Wang, M.: A Transformer-Based Model for Effective Representation of Geospatial Data and Context, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-1003, https://doi.org/10.5194/egusphere-egu24-1003, 2024.