We just discovered something very disappointing about DroneDeploy. Some of our clients just want a 2D orthomosaic photo. Others also want a 3D model of the property in addition to the 2D orthomosaic. What we have found is that when you include the oblique photos for generating the 3D, it significantly degrades the quality of the 2D orthophoto when you do a side-by-side comparison. However, it is impractical to share two separate DD projects with the clients: one where we don’t upload the oblique (side angle) photos, and another where we include all photos for 3D quality.
Here are some zoomed-in comparisons to illustrate what I’m talking about. The project with just the nadir photos is on the left; the project that also has oblique photos is on the right. Is it too much to ask for the software to render an accurate ortho without having to leave the oblique photos out, so the client can view the best quality orthophoto and 3D model in the same shared project?
For those who take a glance and think they look pretty much the same: click the expand icon in the lower right to show the photo full screen, and look at the big bush in the corner of the parking strip in the top photo. Also, there’s a single white construction fence around the property. On the left it correctly looks like one fence; on the right it looks like 3 fences. And the white utility box next to the long red strip is correctly pictured on the left but obfuscated on the right. So is DD essentially not smart enough to determine which photos were taken pointing straight down and only use those for the 2D rendering, or to employ some other sort of error correction for the 2D? Otherwise we are now going to have to share two projects per job with the client and tell them to ignore the 2D on one and the 3D on the other. Or at least add the option to designate which of the uploaded photos you want to use for the orthomosaic. I doubt many people, if any, have done this type of comparison. We’re not the type of company to say “it’s good enough, we’ll just live with it.”
It’s possible that adding obliques can negatively affect the output, but it’s rare. I expect you already know this, but for other readers: orthophotos are not made by stitching top-down photos together, at least not directly.
Instead we first build a 3D representation of the scene using photogrammetry, then project (flatten) that 3D representation back down to a 2D image. That’s why in a good quality orthomosaic you don’t see the sides of any buildings; it’s as if the whole map were a single image taken from space with a super telephoto lens.
As a result, adding oblique imagery usually improves the quality of the ortho, particularly near buildings or trees, because we are better able to render the edges and sides of buildings.
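To make the flatten step concrete for other readers, here is a minimal toy sketch (purely an illustration of the general idea, not DroneDeploy’s actual pipeline or code): project every reconstructed 3D point straight down onto a grid and keep only the topmost surface in each cell, which is exactly why building sides disappear in a good ortho.

```python
# Toy orthographic "flattening" of a reconstructed 3D scene: each
# colored point is dropped straight down into a grid cell, and the
# highest point in each cell wins. Illustrative only.
def flatten_to_ortho(points, cell_size=1.0):
    """points: list of (x, y, z, color). Returns {(ix, iy): color} where
    each cell keeps the color of the highest point that landed in it."""
    best_height = {}
    ortho = {}
    for x, y, z, color in points:
        cell = (int(x // cell_size), int(y // cell_size))
        if z > best_height.get(cell, float("-inf")):
            best_height[cell] = z      # topmost surface wins,
            ortho[cell] = color        # so building sides never show
    return ortho

# Toy scene: flat ground (z=0) with a roof patch (z=5) above part of it.
scene = [(x, y, 0.0, "grass") for x in range(4) for y in range(4)]
scene += [(1, 1, 5.0, "roof"), (1, 2, 5.0, "roof"),
          (2, 1, 5.0, "roof"), (2, 2, 5.0, "roof")]
ortho = flatten_to_ortho(scene)
print(ortho[(1, 1)])   # "roof" -- the ground beneath is hidden
print(ortho[(0, 0)])   # "grass"
```

The quality of the real ortho therefore depends on the quality of the reconstructed 3D surface, which is why errors in that surface (however introduced) show up as 2D distortion.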
I can’t find your account to check this out - could you email firstname.lastname@example.org so they can take a look at what’s going on for you? The story might not be as simple as it looks at first.
Is there an article anywhere describing how a 2D orthophoto built from unobstructed 1 in/pixel nadir images can be improved with oblique photos? I could see it maybe in a case where part of an edge was obscured in the nadir top-down view by tree branches or something, but if nothing is obscured, the edges should look the same as they do in the source photos.
We’ve seen this issue happen with a variety of test jobs at different locations using different shoot plans, so I’m pretty sure it’s not an issue with the source data. In fact, I’ve also seen the issue to a certain degree with other programs like Maps Made Easy. So it’s not necessarily unique to DroneDeploy.
What you are claiming about the obliques improving the 2D view, while it sounds nice from a theoretical standpoint, is the complete opposite of what I’m seeing in real-world applications. I would love to see two actual jobs, one without the obliques and one with, where the 2D on the one with obliques looks better. Until then, I suspect this claim is just theoretical and/or really only applies when portions of the top-down view are obscured but visible from the oblique view. But that’s not the situation I’m discussing here.
We’ve found photo stitching software like PTGui produces 2D orthophotos as good as, if not superior to, those from any of the photogrammetry applications. The lack of a 2D stitch function seems to me like a missing tool in the image processing toolbox, but I can see how it would be left out if, in theory, your photogrammetry-then-flatten method is supposed to produce results as good or better. I just don’t see any actual evidence of that.
But I guess there should be hundreds of people on this forum who could re-run a job without the oblique photos and then compare the 2D ortho. Then we could come to a consensus as a community on what the real-world case is here.
Here’s another comparison I just ran of another job. I don’t remember the specific parameters off the top of my head, but it was something like 1 in/pixel nadir at 230 ft with 70% overlap/sidelap, then two orbits at 230 ft and 150 ft, and then a U shape at 100 ft. So lots of images with different angles and perspectives at different elevations. The 2D on the job where I excluded the angle shots and only processed the straight-down photos looks WAY better. It’s as if you are adding geometry to the edges of the building that’s not actually there, having the opposite effect of what the claim is.
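For readers trying to reproduce a plan like this, the nadir numbers roughly check out with simple geometry. This is a back-of-envelope sketch under an assumed camera (the post doesn’t name the hardware); the FOV and image width below are placeholder values, not the actual drone used.

```python
# Rough footprint / GSD / shot-spacing math for a nadir grid at 230 ft
# with 70% overlap. Camera parameters are ASSUMED for illustration.
import math

altitude_ft = 230.0
h_fov_deg = 73.7          # assumed horizontal field of view
image_width_px = 4000     # assumed sensor width in pixels
overlap = 0.70            # 70% overlap/sidelap, from the post

# Ground footprint width of one image (flat ground, camera pointing down)
ground_width_ft = 2 * altitude_ft * math.tan(math.radians(h_fov_deg / 2))
# Ground sample distance in inches per pixel
gsd_in_per_px = ground_width_ft * 12 / image_width_px
# Distance the drone travels between shots to keep 70% overlap
spacing_ft = ground_width_ft * (1 - overlap)

print(round(ground_width_ft, 1))   # footprint width in feet
print(round(gsd_in_per_px, 2))     # close to the ~1 in/px in the post
print(round(spacing_ft, 1))        # shot spacing in feet
```

With these assumed specs the math lands right around 1 in/pixel at 230 ft, consistent with the plan described above; plug in your own camera’s FOV and resolution for real planning.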
I have noticed this distortion on flights that only included nadir photos, but I have only included oblique photos on a few jobs: not enough to establish a consistent pattern. I can tell you that I have seen more 2D distortion on recent jobs than on those shot several months ago. Not sure if this is a coincidence or a pattern.
So I’ve also been in discussion with support over at Maps Made Easy, which offers the same type of service as DroneDeploy. I gave him a link to this thread and said I’ve seen the same phenomenon with Maps Made Easy. Here was his response:
“If you are doing orthophoto mapping you should only need nadir images… Your best bet is probably to collect 3D data for detail areas separate from your orthophoto overview.”
Maybe I’m just in a small minority of commercial operators, but it seems both these companies have missed the boat on how to best serve the end client and instead are more caught up in technical features and details. It is not unreasonable for an end client to want both an orthophoto of a location and a 3D model. It doesn’t make sense to have to share two different DD jobs with the client for work done at the same location just to ensure they get the best quality 2D and 3D. But that is in fact currently the case.
I was a software developer before becoming a drone operator, and I can see it would be easy to add an optional setting that lets the user specify that a subset of the photos should be used only for the 2D. Yes, on the backend there would essentially be two processing jobs, one 2D and one 3D, but then the end client can get both views in one shareable link. Otherwise we have to tell the end client to ignore the 3D button on one job and the 2D button on the other.
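A rough sketch of what that setting could look like, purely as illustration: partition the uploads by gimbal pitch so the 2D job sees only the nadir shots while the 3D job sees everything. The `Photo` class and field names here are hypothetical; real images carry this information in EXIF/XMP metadata (e.g. DJI drones write a `GimbalPitchDegree` XMP tag).

```python
# Hypothetical auto-partition of uploaded photos into a 2D (nadir-only)
# set and a 3D (everything) set, based on recorded gimbal pitch.
from dataclasses import dataclass

@dataclass
class Photo:
    name: str
    gimbal_pitch_deg: float   # -90 = straight down, 0 = level/horizon

def split_for_processing(photos, nadir_tolerance_deg=5.0):
    """Return (photos_for_2d, photos_for_3d). A photo counts as nadir
    if its gimbal pitch is within tolerance of -90 degrees."""
    nadir = [p for p in photos
             if abs(p.gimbal_pitch_deg + 90.0) <= nadir_tolerance_deg]
    return nadir, list(photos)    # the 3D job still gets everything

shots = [Photo("IMG_001", -90.0),   # nadir grid pass
         Photo("IMG_002", -88.5),   # close enough to nadir
         Photo("IMG_003", -45.0),   # oblique orbit
         Photo("IMG_004", -30.0)]   # oblique orbit
for_2d, for_3d = split_for_processing(shots)
print([p.name for p in for_2d])    # only the straight-down shots
print(len(for_3d))                 # all four feed the 3D model
```

Since drones already record gimbal pitch per image, the metadata needed to do this automatically is there; the open question is only whether the processing pipeline exposes it as a user option.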
Heck, maybe 90%+ of the DD users out there are just using it for fun/hobby or doing very simple things, and that’s why the feature set apparently hasn’t been fully thought through from the perspective of commercial operators’ end clients?
I have seen this happen on many maps in the past. One thing to remember when taking oblique images is that you do not want to capture the horizon in the image. This causes a large amount of distortion in the map, which you can see in the side-by-side comparisons you included in your previous post. I would recommend taking all nadir images for the 2D map and then including a few oblique images from around half the altitude of the nadir images, at a pitch angle that does not include the horizon.
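That advice reduces to simple geometry: the top edge of the frame looks along the camera pitch plus half the vertical field of view, so the horizon stays out of frame only when the camera is tilted down by more than half the vertical FOV (flat-earth approximation; the 55-degree FOV below is an assumed example, not a specific drone’s spec).

```python
# Shallowest safe oblique pitch that keeps the horizon out of frame,
# under a flat-earth approximation and an ASSUMED vertical FOV.
v_fov_deg = 55.0                      # assumed vertical field of view
min_down_pitch = -(v_fov_deg / 2)     # shallowest safe camera pitch

def horizon_in_frame(pitch_deg, v_fov_deg=55.0):
    """pitch_deg: 0 = level, -90 = straight down. The top edge of the
    frame points along pitch + v_fov/2; horizon appears if that is
    at or above level."""
    return pitch_deg + v_fov_deg / 2 >= 0

print(min_down_pitch)                 # -27.5: tilt at least this far down
print(horizon_in_frame(-45.0))        # False: a 45-degree oblique is safe
print(horizon_in_frame(-20.0))        # True: too shallow, horizon visible
```

So with a typical mapping-camera FOV, a 45-degree oblique comfortably excludes the horizon, while anything much shallower than about 30 degrees down risks capturing it.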
One thing to mention about obliques is that they have diminishing returns. The goal is to take just enough to capture the area of interest. You can think of the drone as a spray can and the images as paint: you really only want to apply one coat of paint to the structure you are mapping. Some of the best 3D models I have ever seen only had around 50 images.
3D mapping is something that takes a bit to figure out. Making 3D models is where you really get into the art of flying the drone.
Our computer vision team has looked into this phenomenon in more depth, and we think there is a solution we can implement on our end. We expect to release an update within the next couple of months that should remove these artifacts so that you can have the best of both worlds: a beautiful 3D model and a crisp, clean 2D ortho.