EGU2020-102
https://doi.org/10.5194/egusphere-egu2020-102
EGU General Assembly 2020
© Author(s) 2020. This work is distributed under
the Creative Commons Attribution 4.0 License.

Extracting field boundaries from satellite imagery with a convolutional neural network to enable smart farming at scale

Franz Waldner¹ and Foivos Diakogiannis²
  • ¹CSIRO Agriculture & Food (franz.waldner@csiro.au)
  • ²CSIRO Data61 (foivos.diakogiannis@data61.csiro.au)

Many of the promises of smart farming centre on assisting farmers to monitor their fields throughout the growing season. Precise field boundaries have thus become a prerequisite for field-level assessment. When farmers sign up with agricultural service providers, they are often asked for precise digital records of their boundaries. Unfortunately, capturing these records remains largely manual, time-consuming and error-prone, which creates disincentives. There is also a growing number of applications in which remote monitoring of crops with Earth observation is used to estimate planted area and forecast yield. Automating the extraction of field boundaries would make it easier to bring farmers on board, fostering wider adoption of these services, and would also improve the products and services built on remote sensing. Several methods to extract field boundaries from satellite imagery have been proposed, but the apparent lack of field boundary data sets suggests low uptake, presumably because of expensive image-preprocessing requirements and local, often arbitrary, tuning.

Here, we introduce a novel approach with low image-preprocessing requirements to extract field boundaries from satellite imagery. It poses the problem as a semantic segmentation problem with three tasks designed to answer the following questions: 1) Does a given pixel belong to a field? 2) Is that pixel part of a field boundary? 3) What is the distance from that pixel to the closest field boundary? Closed field boundaries and individual fields can then be extracted by combining the answers to these three questions. The tasks are performed with ResUNet-a, a deep convolutional neural network with a fully connected UNet backbone that features dilated convolutions and conditioned inference.

First, we characterise the model's performance at the local scale. Using a single monthly composite image from Sentinel-2 over South Africa, the model maps field extent, field boundaries and, consequently, individual fields with high accuracy. Replacing the monthly composite with a single-date image close to the compositing period decreases accuracy only marginally. We then show that, without recalibration, ResUNet-a generalises well across resolutions (10 m to 30 m), sensors (Sentinel-2 to Landsat-8), space and time. Averaging model predictions from at least four images well distributed across the season is key to coping with temporal variations in accuracy.

Finally, we apply the lessons learned from these experiments to extract field boundaries for the whole of the Australian cropping region. To that end, we compare three ResUNet-a models trained with different data sets: field boundaries from Australia, field boundaries from overseas, and field boundaries from both Australia and overseas (transfer learning). By minimising image-preprocessing requirements and replacing local, arbitrary decisions with data-driven ones, our approach is expected to facilitate the adoption of smart farming services and improve land management at scale.
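The three-task formulation can be pictured as a shared feature extractor feeding one small output head per question. Below is a minimal PyTorch sketch of such a multi-task head; it illustrates the three-output idea only and is not the authors' ResUNet-a implementation, and all names and shapes are our assumptions.

import torch
import torch.nn as nn

class ThreeTaskHead(nn.Module):
    """Shared features in, one prediction map per question out."""
    def __init__(self, in_channels):
        super().__init__()
        # One 1x1 convolution per task on top of the shared feature map.
        self.extent = nn.Conv2d(in_channels, 1, kernel_size=1)    # 1) field / not field
        self.boundary = nn.Conv2d(in_channels, 1, kernel_size=1)  # 2) boundary / not boundary
        self.distance = nn.Conv2d(in_channels, 1, kernel_size=1)  # 3) distance to nearest boundary

    def forward(self, features):
        return {
            "extent": torch.sigmoid(self.extent(features)),
            "boundary": torch.sigmoid(self.boundary(features)),
            "distance": torch.sigmoid(self.distance(features)),  # normalised to [0, 1]
        }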
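One common way to combine the three answers into individual fields is a seeded watershed: pixels far from any boundary seed the fields, the boundary probability acts as the relief, and the extent mask confines the growth. The sketch below uses scikit-image; the thresholds are illustrative assumptions, not values from this work.

import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def extract_fields(extent, boundary, distance, extent_thr=0.5, seed_thr=0.6):
    # extent, boundary, distance: 2-D probability maps in [0, 1].
    mask = extent > extent_thr            # question 1: which pixels are fields
    seeds = (distance > seed_thr) & mask  # field cores, far from any boundary
    markers, _ = ndimage.label(seeds)     # one integer label per core
    # Grow each core outwards; watershed lines settle where the
    # boundary probability (question 2) is high.
    return watershed(boundary, markers=markers, mask=mask)  # 0 = background, 1..N = fields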
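The multi-date strategy amounts to averaging per-pixel probabilities from several acquisitions before any thresholding, so that no single image dominates the result. A sketch under the assumption that predict wraps a trained model and the images are co-registered:

import numpy as np

def seasonal_average(images, predict):
    # images: four or more co-registered arrays spread across the season;
    # predict: function mapping an image to a per-pixel probability map.
    probs = np.stack([predict(img) for img in images], axis=0)
    return probs.mean(axis=0)  # consensus probability map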

How to cite: Waldner, F. and Diakogiannis, F.: Extracting field boundaries from satellite imagery with a convolutional neural network to enable smart farming at scale, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-102, https://doi.org/10.5194/egusphere-egu2020-102, 2020
