Can you estimate how much data is missing?

Hi all,

I will try to be as clear as I possibly can with this question, so please bear with me and let me know if I’m missing any important details. All the questions refer to visual elements and orthomosaic maps.

Customer request
We have a customer who wants us to map large bodies of water to monitor the development of algal blooms and, at the same time, analyze objects on the water surface such as buoys, fishing nets, islets, etc.

Testing
Since water is notoriously hard to map, we have done some testing and actually got really good results, with the map consistently stitching the whole area together (see screenshot 1).


However, when I zoom in on the map I can see that it is (obviously) not perfectly stitched together. In screenshot 2 (a zoomed-in view of the orthomosaic map) there is a boat with small parts of it missing. In this case it is very easy to estimate how much data has gone missing.

But when I zoom in on an area containing only water surface (see screenshot 3), we can still see that the map has not stitched properly along the red line. In this case I do not know how to estimate how much data is potentially missing.

Goal
We want to be able to give the customer a rule of thumb. For example: “When the generated orthomosaic map is complete, meaning the full mapped area has been stitched without any holes in it, you can expect it to contain 98% of the visual elements compared to the 100% you would see by reviewing each individual raw photo.”
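One way we have been thinking about sanity-checking such a number ourselves is a manual spot check: mark a sample of objects while reviewing the raw photos and then see how many of them are still recognizable in the orthomosaic. A minimal sketch of the bookkeeping (the object list below is entirely made up for illustration):

```python
# Sketch: estimate how many manually marked objects from the raw photos
# survive into the orthomosaic. The object list is hypothetical and would
# be filled in by hand while reviewing a sample of photos and the map.

objects_in_raw = {
    "buoy_01": True,    # True = still recognizable in the orthomosaic
    "buoy_02": True,
    "net_01": False,    # visible in the raw photos, lost in the stitch
    "islet_01": True,
    "boat_01": True,
}

found = sum(objects_in_raw.values())
total = len(objects_in_raw)
print(f"Retained in orthomosaic: {found}/{total} = {found / total:.0%}")
```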

The answers we are looking for

  1. Is it possible in some way to analyze maps and come up with a reliable estimate of how much data is missing when a map is not properly stitched?

  2. Ideas on techniques to get more accurate stitching over water.

Thankful for all the input I can get from you guys!


That’s pretty awesome that you got it to stitch that well. As you said, water is notorious for causing problems, mostly due to the lack of repeatable features that can be used as tie points. How does the elevation map look on this? Are you seeing a bunch of small pits and peaks? I have had the most luck flying higher over homogeneous subjects like this.

As far as missing objects go, none should be missing if they aren’t moving. When ghosting like that happens, it’s because the object is visible at that location in a couple of photos and then it’s just water. You’ll see this with cars if they are travelling the same route as the drone, and many times you can get multiple ghosts. This one looks like it was caught in a different position in each row as they overlapped. My initial thought is that instead of a percentage claim it should probably be more of an understanding that when objects move, their data gets lost. My guess is that the object needs to be located in at least half of the images taken of that spot, which would normally be at least 5 photos with proper overlap.
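Just to put a rough number on “how many photos see a spot” (a quick sketch; the 75/65 overlap values are typical defaults I’m assuming, not taken from your mission):

```python
# Rough estimate of how many photos see a single spot, based only on
# front and side overlap percentages (assumed typical values).

frontlap = 0.75  # overlap between successive photos along a flight line
sidelap = 0.65   # overlap between adjacent flight lines

# Each new photo covers a fresh strip of (1 - overlap) of its footprint,
# so roughly 1 / (1 - overlap) photos cover any given point per axis.
along_track = 1 / (1 - frontlap)    # ~4 photos per line
across_track = 1 / (1 - sidelap)    # ~3 lines

print(f"~{along_track:.0f} photos per line x ~{across_track:.0f} lines "
      f"= ~{along_track * across_track:.0f} photos covering each point")
```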

As for the extremely visible stitch, there is an obvious difference in conditions between the two sides. It appears it was windier on the right, and the lighting even looks different. Was this flown at a different time, or maybe after a battery swap?

Thank you for your reply, Michael! I was also positively surprised by how good the results were, given that I have read about so many people having huge problems with similar tasks. We are in Sweden and the light is starting to get limited this time of year, giving us less harsh reflections, which I think benefits us when mapping over water. PS: when I look in the metadata, the actual elevation the images were captured at is 99 meters. As far as regulations go, we can go 21 meters higher (120 meters). We might try this and see if we get even better results. We will also do more tests using polarizers.

The elevation map looks somewhat good, I would say, as does the point cloud (although in this case we are mostly interested in the orthomosaic 2D map and the visual elements). When I do a measurement across the surface it tends to vary by up to 10 meters, where it shouldn’t vary at all because water is obviously flat :slight_smile: See screenshots:

So I’m trying to verify that I understand what you are saying and translate it to the test I’m referring to. My understanding is that the water surface will never stitch perfectly (unless the bottom is visible with distinct elements) and it will have visible seams because the whole surface is in constant movement. But, for example, if we want to count buoys in a specific area, these will most probably be seen in the 2D map as long as they stay stationary enough?

The whole map was actually flown in one flight without any interruption. It might be that the sun was breaking through more in some of the shots; unfortunately I can’t recall this exactly, but there were scattered clouds the day we flew.

This is what I was expecting. Even though it might somewhat stitch together visually, the elevations are nearly impossible to get. They have been making a lot of improvements to stitching this kind of content, so we don’t see any of the huge pits or spikes that we used to.

Correct. They may not look very sharp, but they should be recognizable in general shape and color.

Now that I think about it, your angle of attack relative to the direction of the sun will drastically change the look because of the high reflectivity. I think a polarizer is definitely worth a try, and maybe even some image post-processing to normalize the exposures. Are you shooting on auto?

Yes, I was actually also surprised that it was not totally off.

Thank you for clarifying. I had not thought about this concept before, and this is truly valuable information going forward with our testing. I wonder, though, how much an object can move and still appear on the 2D map? And what factors affect how much objects can move? I’m guessing, for example, that the higher the altitude you fly at and the wider the lens you have, the lower the resolution, but also the less an object appears to move in the image, as opposed to having the object very close up in the images…?
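To make my guess a bit more concrete, here is the rough back-of-the-envelope I have in mind (just a sketch; the camera numbers are approximately Phantom 4 Pro values, so treat them as assumptions):

```python
# Rough ground sampling distance (GSD) at different altitudes and what
# one metre of real-world drift means in pixels. Camera numbers are
# approximate Phantom 4 Pro values (assumptions).

sensor_width_m = 13.2e-3   # 1" sensor width
focal_length_m = 8.8e-3
image_width_px = 5472

for altitude_m in (80, 100, 120):
    gsd_m = sensor_width_m * altitude_m / (focal_length_m * image_width_px)
    pixels_per_metre = 1 / gsd_m   # apparent motion for 1 m of drift
    print(f"{altitude_m} m: GSD ~{gsd_m * 100:.1f} cm/px, "
          f"1 m of drift ~ {pixels_per_metre:.0f} px")
```

So flying higher should indeed mean that the same real-world drift covers fewer pixels in each image.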

Do you have any suggestions on how to go about this? Initially, what pops into my mind is something like this:
Say the grid pattern of the flight mission is oriented N/S: you get one type of light and reflection going north, and then when you go south you get a different one. In this case you could make an adjustment preset in, for example, Photoshop or Camera Raw and apply it to all the south images so that they all match better.
As I am writing this, I actually noticed that the map (in the screenshots above) is striped, as if the north and south passes each have consistent lighting characteristics that differ from one another.
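Something like this little script is roughly what I mean by an adjustment applied per pass, just as a sketch (the folder names and the target brightness are made up, and in practice Photoshop/Camera Raw presets would do the same job):

```python
# Sketch: apply one global brightness gain per image so every photo from
# one pass lands near a common mean brightness before stitching.
# Folder names and the target value are assumptions.
from pathlib import Path

import numpy as np
from PIL import Image

TARGET_MEAN = 128.0                  # target mean brightness (8-bit scale)
out_dir = Path("south_normalised")
out_dir.mkdir(exist_ok=True)

for path in sorted(Path("south").glob("*.JPG")):
    pil_img = Image.open(path)
    exif = pil_img.info.get("exif", b"")        # keep GPS tags for the mapper
    img = np.asarray(pil_img).astype(np.float32)
    gain = TARGET_MEAN / img.mean()             # one simple global gain
    out = np.clip(img * gain, 0, 255).astype(np.uint8)
    Image.fromarray(out).save(out_dir / path.name, exif=exif, quality=95)
    print(f"{path.name}: mean {img.mean():.1f} -> gain {gain:.2f}")
```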

Do you know if there is an easy workflow available to download all the images from a map in the Explore section? And would they then still be the original quality? I want to see if I can improve the quality of the map with post-processing, but I am working remotely and don’t have access to the images.
I saw there is an option to download images from the share function, but I did not find a way to bulk download the entire image set in a convenient way.


That’s a good question. I’m guessing it may have something to do with the size and shape of the object. In the case of the buoy, I would think that if it stays within its own diameter across the repeated photos it would be okay, but something like that boat could spin on center, which is maybe what happened in your map.
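Very roughly, for a single flight line it might look something like this (all of the numbers below are assumptions, not from your mission):

```python
# Rough time window during which a buoy must stay within its own diameter
# to appear consistently in all photos of one spot along a single flight
# line. Every input below is an assumed value.

flight_speed_ms = 5.0       # mapping speed
footprint_along_m = 100.0   # photo footprint along track at ~100 m altitude
frontlap = 0.75
buoy_diameter_m = 1.0

photo_spacing_m = footprint_along_m * (1 - frontlap)   # distance between triggers
photos_per_spot = footprint_along_m / photo_spacing_m  # photos in the line seeing the spot
window_s = (photos_per_spot - 1) * photo_spacing_m / flight_speed_ms

max_drift = buoy_diameter_m / window_s
print(f"~{photos_per_spot:.0f} photos over ~{window_s:.0f} s, so the buoy "
      f"should drift slower than ~{max_drift:.2f} m/s to stay 'in place'")
```

Between adjacent flight lines the gap is minutes rather than seconds, which is probably why the boat ended up in a different position in each row.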

I’m not sure how on-the-fly correction would work unless you have a really fancy camera, lol. My initial thought, though, would be to not run directly into and away from the sun. If you could run perpendicular to the rays, it may cut down on that quite a bit. This makes more sense given that you noticed the exposure difference between the north and south lines. I would guess that in the US, then, east-west directions may be better over water.

There should be an option to download the source data from exports. I haven’t tried it myself, but I’m assuming it is the original images.

Yeah, not flying directly into and away from the sun will probably make a difference. The more I think about it, I realize it could potentially make a big difference when running manual camera settings. This should even out the light over the whole map substantially. Always a pain in the rear, though, when it’s scattered clouds :grimacing: But I guess the best results will be on overcast days anyway!

I can’t find any option to download the complete image set from the export tab… Nor can I find any documentation indicating whether this is possible. I will contact DD and check with them if they have any options.

Thanks, Michael, for some great input! We will do some more testing now and see if we can get even better results with mapping over water!


…and when the project shape doesn’t fit the direction.

This is where it is on my interface. I had to scroll all the way to the bottom of the Layer list.


Aha, that’s where that little guy was sitting. Thank you!

I did some more testing this weekend to compare techniques and results. I have done 3 tests at this location with our Phantom 4 Pro:

  1. One earlier in May with manual exposure, no polarizer, at 80 meters. For this test the sun was out and the reflections were pretty harsh.
  2. A test from this weekend with manual exposure, no polarizer, at 120 meters, with a lowered flight speed to ensure no motion blur. For this test there was full cloud cover.
  3. A test from this weekend with manual exposure, with a polarizer, at 120 meters, with a lowered flight speed to ensure no motion blur. For this test there was full cloud cover.

Test 1, not surprisingly, produced the visually worst-looking 2D map. But looking at the elevation map, this test produced the fewest spikes. However, parts of the point cloud did not generate at all.

Test 2 produced a much cleaner 2D map. Looking at the elevation data, this one has a lot more spikes. However, this test produced points across the entire map. I wonder if the spikes appear more in this test because the mapping engine was actually able to tie the water images together and tried to produce elevation data, whereas in test 1 it may not have been able to produce any points and therefore just guessed the elevation altogether.

Test 3 produced a very similar result to test 2. The biggest difference I can notice is that objects stand out more, so it’s easier to spot details in the map. I was expecting this to produce better elevation data and a more complete point cloud, but there is minimal difference. If anything, the point cloud seems to be a bit denser in areas with visual features when using the polarizer, but in the areas with no visual features the point cloud seems to be denser when flying without the polarizer.

Not sure what to conclude from these tests, but I thought I’d share my findings :slight_smile:
