A robust deep learning-based active fire detection model in diverse environments by fusion of satellite and numerical model data
- Department of Urban and Environmental Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
Due to the irregular and sporadic nature of wildfires, continuous monitoring of large areas is required. Since geostationary satellite sensors can observe large areas with high temporal resolution, they are suitable for monitoring wildfires in real time. However, the threshold algorithm currently employed for satellite-based active fire detection performs poorly for sensors with low spatial resolution. In addition, the algorithm does not account for environmental conditions that affect wildfire detection, resulting in poor generalization over large areas. This study examines the viability of an adaptive active fire detection model that combines satellite and numerical model data with deep learning. A model for active fire detection was developed using commonly employed brightness temperature-related variables (key variables) and local environmental variables (sub variables). The key variables are the cross-spectral and spatial differences between the MIR (central wavelength of 3.85 μm) and two TIR (central wavelengths of 9.63 and 11.20 μm) channels of the Advanced Himawari Imager (AHI). The sub variables are the solar zenith angle (SOZ) and satellite zenith angle (SAZ) of AHI, and the skin temperature (ST) and relative humidity (RH) from European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis v5 (ERA5)-Land data. Four processes (confidence, frequency, land cover, and continuity tests) were used to extract reference fire samples from Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) active fire products. To account for the different properties of the key and sub variables, a 2-way convolutional neural network (CNN) structure was developed. To evaluate the influence of the environmental variables, a CNN model without sub variables was adopted as a control model.
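The 2-way structure described above can be illustrated with a minimal sketch in PyTorch. This is a hypothetical reconstruction, not the authors' implementation: the layer sizes, patch size, and fusion-by-concatenation design are assumptions; only the overall idea (a convolutional branch for the spatially structured key variables, a dense branch for the per-pixel sub variables SOZ, SAZ, ST, and RH, fused before a fire/no-fire output) comes from the abstract.

```python
import torch
import torch.nn as nn

class TwoWayCNN(nn.Module):
    """Hypothetical 2-way CNN sketch: a convolutional branch for key
    variables (brightness-temperature difference patches) and a dense
    branch for sub variables (SOZ, SAZ, ST, RH), fused for classification."""

    def __init__(self, n_key: int = 4, n_sub: int = 4):
        super().__init__()
        # Branch 1: spatial patches of key variables.
        self.conv = nn.Sequential(
            nn.Conv2d(n_key, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, 32)
        )
        # Branch 2: scalar environmental sub variables.
        self.dense = nn.Sequential(nn.Linear(n_sub, 16), nn.ReLU())
        # Fusion head: concatenated features -> fire probability.
        self.head = nn.Sequential(
            nn.Linear(32 + 16, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),
        )

    def forward(self, key_patch: torch.Tensor, sub_vec: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.conv(key_patch), self.dense(sub_vec)], dim=1)
        return self.head(fused)

model = TwoWayCNN()
# Batch of 2 samples: 4 key-variable channels on a 7x7 patch, 4 sub variables.
prob = model(torch.randn(2, 4, 7, 7), torch.randn(2, 4))
print(prob.shape)  # torch.Size([2, 1])
```

The control model from the abstract would correspond to dropping the dense branch and feeding only `key_patch` to the head.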
The 2-way CNN (recall of 0.86, precision of 0.96, and standard deviation of recall of 0.13) was more robust across five focus sites than the control CNN (recall of 0.82, precision of 0.97, and standard deviation of recall of 0.163). Despite its lower spatial resolution than MODIS/VIIRS, the 2-way CNN outperformed other satellite-based active fire products (MODIS, VIIRS, AHI, and Advanced Meteorological Imager) in detection capacity. The control CNN performed poorly under certain environmental conditions (high RH, high SAZ, and the transition time between day and night), but the 2-way CNN mitigated this tendency. In particular, the use of RH improved detection sensitivity, and SAZ contributed to spatial robustness. This study demonstrated the significance of environmental conditions in active fire detection and proposed a CNN structure suited to this purpose. Based on these findings, higher-level adaptive active fire monitoring under diverse environmental conditions should become possible, together with explainable artificial intelligence.
How to cite: Sung, T., Kang, Y., and Im, J.: A robust deep learning-based active fire detection model in diverse environments by fusion of satellite and numerical model data, EGU General Assembly 2023, Vienna, Austria, 24–28 Apr 2023, EGU23-4969, https://doi.org/10.5194/egusphere-egu23-4969, 2023.