EGU23-17437, updated on 26 Feb 2023
https://doi.org/10.5194/egusphere-egu23-17437
EGU General Assembly 2023
© Author(s) 2023. This work is distributed under
the Creative Commons Attribution 4.0 License.

Deep Georeferencing of WWII Aerial Reconnaissance Images

Wilfried Karel
  • Department of Geodesy and Geoinformation, TU Wien, Vienna, Austria

This work addresses the automated, photogrammetric orientation of World War II aerial reconnaissance images. For these near-nadir images, the footprint centers are known beforehand with an accuracy of a few hundred meters, together with coarse image scales and nominal focal lengths, but without image rotations. Since their overlap is typically small or absent, the approach orients the images one after another. A novel rotation-invariant, end-to-end CNN image feature matcher finds homologous points both in an aerial image and in an iteratively refined detail of a present-day orthophoto map, initially extracted according to the given metadata. This yields automatically determined ground control points whose heights are interpolated from a likewise present-day terrain model, and which serve to estimate image orientations in a bundle adjustment. The quality of the image orientations is assessed by projecting manually observed ground control points into image space and comparing them to their likewise manually observed image positions. Hundreds of images of various scales are evaluated, featuring cloud and snow cover, long cast shadows, dust, and scratches. Despite the large gap in time between the aerial and reference data sets, and the correspondingly large changes on the ground and in appearance, the approach achieves an image-space RMSE at the manual ground control points of less than 1 mm for over 60% of the images.
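The quality measure described above can be sketched in a few lines: ground control points are projected into the image via the collinearity equations, and the root-mean-square of the residuals against the manually observed image positions is reported. This is a minimal illustration, not the author's implementation; all function names and parameters are hypothetical, and a simple pinhole model without distortion is assumed.

```python
import numpy as np

def project(points_gnd, X0, R, f):
    """Project ground points into image space via the collinearity equations.

    points_gnd: (N, 3) object coordinates.
    X0: (3,) projection center in object coordinates.
    R: (3, 3) rotation matrix from object to camera frame.
    f: focal length, in the same unit as the returned image coordinates.
    """
    cam = (points_gnd - X0) @ R.T           # coordinates in the camera frame
    return -f * cam[:, :2] / cam[:, 2:3]    # perspective division

def rmse(projected, observed):
    """Root-mean-square of the 2D reprojection residuals."""
    d = projected - observed
    return np.sqrt(np.mean(np.sum(d**2, axis=1)))
```

For a vertical image taken 1000 m above a ground point, with a hypothetical 153 mm focal length, a point 100 m off-nadir projects 15.3 mm from the principal point; comparing such projections to manual image observations gives the RMSE reported per image.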

How to cite: Karel, W.: Deep Georeferencing of WWII Aerial Reconnaissance Images, EGU General Assembly 2023, Vienna, Austria, 24–28 Apr 2023, EGU23-17437, https://doi.org/10.5194/egusphere-egu23-17437, 2023.