UAV-derived change detection without ground control points, an example from the cliff coast of Rügen
- GFZ Potsdam, GFZ Section 4.6, Potsdam, Germany
High-resolution topographic models generated from repeat unmanned aerial vehicle (UAV) surveys and structure from motion (SfM) are increasingly being used to investigate landscape changes and geomorphic processes. Traditionally, accurate UAV surveys require the use of independently measured ground control points or highly accurate camera position measurements. However, in addition to accuracy in an absolute sense (how well modeled topography matches real topography), model quality can be expressed as accuracy in a comparative sense (the degree to which two models match each other). We present a simple SfM workflow for calculating pairs or sets of models with high comparative accuracy, without the need for ground control points or a dGNSS-equipped UAV. The method is based on the automated detection of common tie points in stable portions of the survey area and, compared to a standard SfM approach without ground control, reduces the level of detection for change in our surveys from several meters to 10-15 cm. We apply this approach in a multi-year monitoring campaign of an 8 km stretch of coastal cliffs on the island of Rügen, Germany. We are able to detect numerous mass wasting events as well as bands of more diffuse erosion in chalk sections of the cliff. Both the cliff collapses and the diffuse erosion appear to be strongly influenced by antecedent precipitation over seasonal timescales, with much greater activity during the winter of 2017-2018, which followed an above-average wet summer, than during the subsequent two winters, which both followed relatively dry summers. This points to the influence of subsurface water storage in modulating cliff erosion on Rügen.
How to cite: Cook, K. and Dietze, M.: UAV-derived change detection without ground control points, an example from the cliff coast of Rügen, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11735, https://doi.org/10.5194/egusphere-egu2020-11735, 2020
Comments on the display
Have you tried aligning clouds from separate surveys using the ICP or manual alignment tools in CloudCompare before applying M3C2, as a way to minimize errors prior to differencing? How does that compare to the method you present here?
It seems that the method works best when areas of topographic change are small relative to the survey area. Are you able to comment on the upper limit of change that can occur in the survey area before these changes begin to affect the ability of photos from different surveys to align?
Did you have to be careful to make sure you conducted surveys under similar lighting conditions (time of day, cloud cover, etc.) each time?
Thanks for your questions.
I didn't mention it, but we did align the point clouds in CloudCompare prior to doing the differencing. I also tried aligning them using the multistation adjustment algorithm in Riegl's RiScan Pro software, with no better result. The fundamental problem is that each model contains different errors, so they are slightly distorted relative to each other and will never align well enough.
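This point can be illustrated with a minimal synthetic sketch (all values here are made up, not from the Rügen data): a best-fit rigid transform, which is what ICP-style registration converges toward, cannot remove a non-rigid "doming" distortion between two models, so residual apparent change remains no matter how carefully the clouds are registered.

```python
import numpy as np

# Synthetic illustration: a flat reference surface vs. a copy with a
# quadratic "doming" distortion, a typical non-rigid SfM error.
rng = np.random.default_rng(0)
pts = rng.uniform(-50, 50, size=(2000, 3))
pts[:, 2] = 0.0                                        # flat reference surface
domed = pts.copy()
domed[:, 2] += 1e-4 * (pts[:, 0]**2 + pts[:, 1]**2)    # up to ~0.5 m of doming

def kabsch(a, b):
    """Best-fit rigid transform (R, t) mapping points a onto b,
    with known correspondences (the optimum ICP could ever reach)."""
    ca, cb = a.mean(0), b.mean(0)
    H = (a - ca).T @ (b - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

R, t = kabsch(domed, pts)
aligned = domed @ R.T + t
rms = np.sqrt(((aligned - pts) ** 2).sum(1).mean())
print(f"residual RMS after best rigid alignment: {rms:.3f} m")
```

Even with perfect correspondences, roughly 10 cm of apparent change survives the best possible rigid alignment, because the distortion itself is not rigid.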
The upper limit of change is not clear, but I think that the distribution of stable vs. changed areas is more important. In the last example in the display, the river in Taiwan, there are relatively few common tie points detected between the two time periods (only 900, compared to 300k for the total sparse cloud), and none of the points are in the river. However, it seems that having the detected points distributed around the edges of the model makes a big difference. This could guide survey strategy - if you want to monitor an area with a lot of change, then enlarging the survey area to include some well-distributed stable features should help a lot.
As for similar conditions, for both examples we didn't have any choice - we had to plan trips ahead, generally had a very short time on site, and had to survey no matter the weather. I have been continually amazed at how little it appears to matter. For example, in the display, I show a comparison April 2018-May 2018, which aligned very well even though April was overcast with no direct light, while May was bright and sunny with strong shadows and sun glare. Overall, I haven't seen a relationship between consistency of conditions and success of alignment. I can also note that we have upgraded our UAV several times over the course of the project, and aligning surveys from different UAVs also works quite well.
Happy to answer any further questions!
Hi Kristen and Brett,
I also have similar questions about the co-alignment methodology, so I took the liberty of joining this discussion.
I wondered what the difference (detected change) is between stable areas when not using the co-alignment methodology and when applying co-alignment. Could you use this value as a measure for the level of detection, since you don’t have ground control points (in the case of Rügen)?
We applied the co-alignment method as well on an alpine talus slope, based on your 2019 paper (which was really helpful, thank you!!). Because of a poor distribution of ground control points, we experienced large deformations in the upper part of the slope (see my EGU display if you are interested). After applying your methodology, the RMSE in our check points went down from 12 to 7 cm (respectively 27 and 19 cm for LOD95%), and the deformation in the upper part of the slope (where most geomorphic changes occurred) got corrected. So I was very pleased with the results! For the co-alignment, we had 5% common tie points (so quite high), mostly at the edges (the stable headwall and the stable lower part of the talus slope).
Our 3 years of data collection also covered very different light and moisture conditions, with different UAVs and cameras, and even different flight planning. I can confirm Kristen’s observation that this didn’t seem to affect the success of the co-alignment methodology.
I am very happy to hear that you had some success with the method! I think I remember discussing it at your poster last year? We have not tried a wide range of different environments yet, so it’s great to hear about good results in more settings. Also nice that you didn’t encounter any problems with lighting, conditions, etc.
Yes, the detected change in stable areas is what I use to estimate the level of detection for Ruegen. This is pretty clear for things like discrete rockfalls – basically, a change was only trusted if the magnitude is well above the change measured in the surrounding stable area. I individually inspected each apparent area of change, and measured volumes only of confirmed mass wasting events. Features like the more diffuse erosion I show on one of the slides are harder to evaluate. This is only about 10-20 cm of measured change, and it took a fair amount of looking into it to convince myself that this was a real change. Fortunately, for a purely erosional setting like a vertical cliff face, you know that changes can only go one direction, so you can use comparisons of different survey pairs that overlap in time to have some measure of reproducibility. We have done a LOT of surveys (13 so far), so have a lot of potential combinations to work with.
I didn’t actually calculate an RMS for stable-area change over the whole models, partly because trying to automatically define the stable area becomes a little circular: vegetation really dominates total change and is difficult to completely remove, diffuse erosion may be going on, and the error is often not consistent across the models (for long linear features like the cliffs, variation in matching success between one end and the other is not uncommon). But it’s clear from the change maps that not using co-alignment gives high amounts of measured change in stable areas (up to meters). For the most successful co-alignments (like April-May 2018), this comes down to about 5 cm or less throughout most of the area.
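The idea of using apparent change over stable ground as an empirical level of detection can be sketched as follows. The stable-area values here are synthetic, and the 95% threshold is one common, but not the only, choice.

```python
import numpy as np

# Sketch: derive an empirical level of detection (LoD) from the apparent
# change measured over a user-identified stable area, in place of a
# GCP-based error estimate. All numbers below are illustrative.
rng = np.random.default_rng(1)
stable_change = rng.normal(0.0, 0.025, size=10_000)  # apparent change (m) over stable ground
lod95 = np.quantile(np.abs(stable_change), 0.95)     # empirical 95% level of detection

candidate = -0.15                                    # apparent erosion (m) at a cliff pixel
significant = abs(candidate) > lod95                 # trust it only if well above the LoD
print(f"LoD95 = {lod95:.3f} m; change of {candidate} m significant: {bool(significant)}")
```

With ~2.5 cm of noise over stable ground, the empirical LoD comes out near 5 cm, so a 15 cm erosion signal would be kept while sub-LoD fluctuations would be discarded.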
Happy to have any further questions!
Thank you for your answer. 5 cm or less without GCPs is a very nice result! I think a lot of people working in difficult areas with UAV data can benefit from this workflow.
Indeed, we met at my poster at EGU19! That’s also how I was quickly aware of your new paper that came out in October and could rerun my data using your methodology (which took several tries and several days, since co-aligning all surveys meant > 6000 images). If you have 13 surveys of 1000 to 2000 images, do you then always co-align to one ‘reference’ survey, or do you co-align them all together, with an increasing number of images to align every time? I guess aligning all the surveys together will give a better result, but this will also increase processing time considerably… Any strategies on this?
Another small question: in your paper you mention flight planning for the Daan River, but manual flying for Ruegen. Was this for practical reasons? Which flight planning software did you use? In my experience, I always find it easier to use flight planning software, especially in difficult terrain… (we use UGCS and I’m very happy with the flexibility it offers). I can see how it would be quite a challenge to fly manually, over sea, near vertical cliffs, with the occasional strong winds..!
I will typically just align survey pairs or groups as needed for whichever comparison I want to make. I think the maximum I’ve done together is 5. I do break the cliff into overlapping segments so that I have about 300-500 photos per survey per segment, which speeds things up considerably. It generally takes less than a day to process everything for 2-3 surveys, which I think is reasonable, especially given how little of my input it takes (compared to clicking on ground control points). But in general, this is the biggest downside of the method – the final products are always relative, and you can’t compare things that were not processed together. Even always having one reference survey probably wouldn’t help unless you keep the camera positions and lens parameters for that survey fixed. I haven’t tried this, but it could be interesting.
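The segmenting strategy can be sketched as follows; the segment size and overlap here are illustrative assumptions, not the exact values used for the Ruegen surveys.

```python
# Sketch: split an ordered photo list from one long, linear survey into
# overlapping segments so that each co-alignment job stays small. The
# segment size (400) and overlap (50) below are illustrative assumptions.
def segment(photos, size=400, overlap=50):
    """Yield overlapping slices of the ordered photo list.
    The last segment may be shorter than `size`."""
    step = size - overlap
    for start in range(0, max(len(photos) - overlap, 1), step):
        yield photos[start:start + size]

photos = [f"img_{i:04d}.jpg" for i in range(1500)]  # one survey's photos, in flight order
segments = list(segment(photos))
print(len(segments), [len(s) for s in segments])
```

The same slicing would be applied to each survey, and matching segments from the different surveys co-aligned together, keeping each processing job well under the full image count.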
About the flight planning, I did manual flying on Ruegen just because it seemed a lot easier than trying to get a flight planner to do it properly! Because it’s not a grid, you can’t really go wrong in terms of coverage – just fly up and down the coast, keep the camera pointing at the cliff, keep the full cliff face in the photo, and make sure all the flights overlap. Apart from the wind, rain, and connection difficulties, it’s actually pretty easy. This does mean that we don’t have a uniform GSD over the cliff – where the cliff is higher, I fly farther away to keep the whole face in view rather than adding flight lines, which for us was a reasonable trade-off for efficiency. We generally drove up just for the day, and were racing the clock to complete both the flying and maintenance of our seismic observatory (especially with a 15:30 sunset in the winter!). Plus I can adjust in case of things like sun shining into the camera (we are always looking to the west, and usually flying in the afternoon, so this can be a problem on the rare nice day), wind drain on the battery, etc. Also, if I spot something interesting, like a new failure, I can take some extra photos.
For the Daan, this was a super straightforward grid mission at a single elevation with the camera at nadir (I also do manual flying here for oblique photos of the river gorge walls, but for simplicity I didn’t use them for this example). I don’t remember which software I used for these exact photos – either Map Pilot or DJI GS Pro. I’ve never tried one of the more elaborate mission planners; could be something to look into.