Large changes in pile volume after adding non-pile images to map

Hello,
Yesterday I captured about 100 images of some aggregate piles and started them uploading from my laptop in the back seat of the truck while driving to the next work site. The upload completed successfully but showed only 46 of 98 images uploaded. I assumed that was a glitch.

Today I went to calculate the pile volumes and got 9,200 and 2,005 m³ for the two piles at the north end of the map before noticing that the bottom half of the map was missing. It had in fact only uploaded 46 images. Image capture for the two piles above, which were my primary target, was complete, but there was another area to the south I wanted to measure, so I uploaded the remaining images.

After the remaining images were uploaded and processed, I returned to the project to find that the volumes for the first two piles had decreased substantially. They now report 7,230 and 1,377 m³, respectively. That's more than a 20% difference. In the processing report, the 9,200 m³ pile was completely blue (i.e. highest reliability) before the reprocessing. There was one corner of the other pile that was green, but it was mostly blue as well.
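
As a quick sanity check on those figures (using only the numbers quoted above):

```python
# Percentage decrease between the first and second processing runs,
# using the volumes reported above (cubic metres).
runs = {"north pile": (9200, 7230), "smaller pile": (2005, 1377)}

for name, (first, second) in runs.items():
    drop = (first - second) / first * 100
    print(f"{name}: {first} -> {second} m^3, a {drop:.1f}% decrease")

# north pile: 9200 -> 7230 m^3, a 21.4% decrease
# smaller pile: 2005 -> 1377 m^3, a 31.3% decrease
```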

I did notice in the first report that the camera calibration showed 15% variation, which I thought was weird. In the second report it is 0.02% variation. I presume that means the second set of numbers is more accurate, but something smells off to me.

Now I'm not sure which set of numbers to trust.

What orientation of photos did you use? Nadir, obliques or both?

They were all nadir, standard lawnmower pattern.

That’s good. I could see how something like this would happen with oblique images because they have such an extended field of view, but the other really odd thing is the large disparity in the error figures you are seeing. It really shouldn’t matter how many images there are. I will say, though, that there used to be a minimum on the number of photos that had to be uploaded, something like 50 images. Have you emailed support yet? They’re the ones who can look at the details of the processing log to see if there is anything strange.

No, I can’t seem to find an email for that on the website.

support@dronedeploy.com

@Andrew_Fraser

Cheers, seems obvious but they certainly don’t advertise it!

Haha, it’s actually surprising to me as it seems it would be a prime target for spamming. They must have a really good filter…

A 15% camera calibration error in the report correlating with a 20% error in the measurement would be expected. The second measurement is certainly much more reliable.

If the focal length of the camera estimated during photogrammetry varies by more than a couple of percent from the book value, it generally suggests a scale issue (assuming the camera was not modified and the photos were not taken while zoomed). If the principal point is substantially off center, it suggests a skew/distortion issue.
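
As a rough sketch of that check (every number and threshold below is an illustrative placeholder, not a value from any particular camera or report):

```python
# Illustrative comparison of self-calibrated intrinsics against book values.
# Real values come from the camera spec sheet and the processing report.
book_focal_mm = 8.8              # nominal focal length for the camera model
est_focal_mm = 7.5               # focal length estimated during processing
img_w, img_h = 4864, 3648        # image size in pixels
est_cx, est_cy = 2455.0, 1790.0  # estimated principal point in pixels

focal_dev_pct = abs(est_focal_mm - book_focal_mm) / book_focal_mm * 100
pp_offset_px = max(abs(est_cx - img_w / 2), abs(est_cy - img_h / 2))

if focal_dev_pct > 2.0:          # the "couple of percent" rule of thumb above
    print(f"Focal length off by {focal_dev_pct:.1f}% - possible scale issue")
if pp_offset_px > 0.02 * img_w:  # arbitrary 2%-of-width threshold
    print(f"Principal point {pp_offset_px:.0f} px off center - possible skew/distortion issue")
```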

Also note that including some oblique images in your nadir surveys will generally increase your overall accuracy, as it locks the focal length via a geometric constraint. Pure nadir imagery is more susceptible to bowling effects and focal length issues when GCPs are not employed. (Engineers might picture a triangulated truss structure to visualise this benefit.)

Just to tag on, make sure the obliques are at least 60 degrees. In our experience anything less than that captures too much background noise and introduces a lot of distortion the further you get from the focal point. The only exception is if you are shooting a structure that normally blocks that background.

In my experience, this is only true on large sites when you fly too low. It doesn’t affect stockpiles unless they are larger, maybe around an acre or more. You can compensate for this, though, by running an additional flight at a higher altitude.

Here’s a paper on the topic that may be of interest

http://www.close-range.com/docs/Mitigating_systematic_error_in_topographic_models_derived_from_UAV_and_ground-based_image_networks.pdf

While this is true, we’re talking about stockpiles and the paper is focused on entire maps. Judging from their verbiage, probably much larger maps than the majority of us fly.

Hey James.

The conclusion of this is that the uncalibrated camera is basically to blame for the distortion, and that the poor in-process calibration occurs because of “dominantly parallel viewing directions”, otherwise known as parallel flight lines.

I’m curious: couldn’t a software engineering team with your understanding of the problem come up with a canned flight mission plan specifically designed to capture angles of the surface that, taken together, could be used to compute a reasonably good correction for a given camera? And then save that correction file to feed back into standard grid missions, defining a good camera correction going forward that would greatly improve the overall rectification?

Cheers,
Dave

There’s a whole lot about this article that I don’t agree with based on experience, but the bottom line is that in a scenario of very large tracts, which cause very long parallel lines, the majority of the issues come from (1) camera distortion and (2) lack of proper overlap. Most people who fly these kinds of missions are about quantity over quality, which is easy to do when you are trying to map 1,000 acres; they opt for high altitude with as few batteries as possible.

The whole parallel-lines thing is a bit of a red herring, though, because we forget about the oblique component of a nadir image. About 30% of GCPs are tagged on the outer third of the images, which means you are getting a ton of data 30-40 degrees out from center, so no, it’s not really just an orthographic image being captured hundreds of times from the same vantage point.

That all said, if you fly with good lighting conditions, an optimal altitude and proper overlaps you will see very little of this, and almost none with GCPs. Back to the original question, though: we are talking about stockpiles, so none of this is relevant.
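
As a rough back-of-the-envelope check on that 30-40 degree figure, assuming a typical wide drone lens with roughly an 84-degree horizontal field of view (substitute your own camera's FOV):

```python
import math

# Ray angle off nadir for a feature at a given fraction of the half-image
# width, for a pinhole camera with an assumed 84-degree horizontal FOV.
H_FOV_DEG = 84.0
half_fov = math.radians(H_FOV_DEG / 2)

for fraction in (0.67, 0.8, 1.0):  # inner edge of the outer third -> image edge
    angle = math.degrees(math.atan(fraction * math.tan(half_fov)))
    print(f"{fraction:.0%} of half-width -> {angle:.0f} degrees off nadir")

# 67% of half-width -> 31 degrees off nadir
# 80% of half-width -> 36 degrees off nadir
# 100% of half-width -> 42 degrees off nadir
```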

Hi Dave,

Yes absolutely - for example, at the end of a nadir mission we could add in a couple of short oblique capture legs - it’s just increased complexity in the UI and explanation. If you want to replicate the effect, just capture a few manual shots and add them into your upload.

Currently we actually have an in-photogrammetry solution for this as we know that the barometric altitude sensors on the drone are generally really good - if the image capture locations look planar and are mostly nadir, they probably are planar and mostly nadir!
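
As an illustration of the idea only (a generic planarity test, not the actual production pipeline): fit a plane to the reported camera positions and look at the spread around it.

```python
import numpy as np

# Fit a best-fit plane to the camera capture positions and measure the spread
# about it; a small spread relative to flight height suggests a planar,
# mostly-nadir capture. The example data below is purely synthetic.

def planarity_rms(positions_m: np.ndarray) -> float:
    """RMS distance (metres) of camera positions from their best-fit plane."""
    centred = positions_m - positions_m.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    return float(np.sqrt(np.mean((centred @ normal) ** 2)))

# Example: a lawnmower grid at ~60 m AGL with ~0.5 m of altitude jitter.
rng = np.random.default_rng(0)
xy = np.stack(np.meshgrid(np.arange(0, 200, 20.0), np.arange(0, 120, 30.0)), -1).reshape(-1, 2)
z = 60 + rng.normal(0, 0.5, len(xy))
positions = np.column_stack([xy, z])

print(f"RMS deviation from best-fit plane: {planarity_rms(positions):.2f} m")
```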

Thanks,

James

Hi James.

I was more interested in a solution that would determine a good camera calibration and then be able to store and use that calibration going forward, without the need for additional in-process calibration on every mission.

The way you describe the current method, it sounds like a terrain-aware mission, where the camera positions are not planar at all, would be problematic.

@Jamespipe A standard lens calibration per camera model is already used, right? The camera distortion map and camera location calibration are refined during processing. Every processing package I have seen has a stock polynomial calibration for the lens. The distortion map and location calibration can be affected a lot by conditions affecting the drone’s attitude; a few degrees here and there can make a noticeable difference in the processing reports I have seen.
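
For anyone curious, that stock polynomial usually takes the Brown-Conrady form (radial k1-k3 plus tangential p1, p2). A rough sketch with made-up coefficients:

```python
# Brown-Conrady style lens model: map ideal (undistorted) normalised image
# coordinates to distorted ones. Coefficients below are illustrative only.

def distort(xn, yn, k1, k2, k3, p1, p2):
    r2 = xn**2 + yn**2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn**2)
    yd = yn * radial + p1 * (r2 + 2 * yn**2) + 2 * p2 * xn * yn
    return xd, yd

# A point near the edge of the frame shifts far more than one near the centre.
for x in (0.1, 0.5, 0.9):
    xd, _ = distort(x, 0.0, -0.12, 0.05, 0.0, 0.001, 0.0005)
    print(f"ideal x = {x:.1f} -> distorted x = {xd:.4f}")
```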

Ah I see - yes, generally camera intrinsics don’t vary much from the book values, or from map to map, so there is potential to add a feature that allows you to photograph a calibration pattern, or for us to reuse intrinsics from a GCP map on non-GCP maps.
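
A calibration-pattern workflow along those lines could be as simple as the standard OpenCV checkerboard routine. Here is a rough sketch (board size, paths and file names are placeholders):

```python
import glob
import cv2
import numpy as np

# Estimate intrinsics from photos of a printed checkerboard, then save them
# so later missions can reuse the calibration instead of re-estimating it.

BOARD = (9, 6)  # inner corner count of the printed checkerboard
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_photos/*.JPG"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

np.savez("camera_intrinsics.npz", K=K, dist=dist, rms=rms)
# A later run could load and reuse these instead of re-estimating:
#   data = np.load("camera_intrinsics.npz"); K, dist = data["K"], data["dist"]
```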

James

To the terrain-aware point: while you lose out on the planar constraint, you likely gain some 3D structure.

James

Yep, you got it. Having as good a correction as you can get, rather than one estimated on each reconstruction, may not be a huge deal, but it’s pretty low-hanging fruit that is currently left on the tree. And its benefit would perhaps go up if other factors are degraded in any way.