Flight map / DD photogrammetry processing

I'm a newbie here (just a couple of flights), so forgive my very basic question. When one flies a DD mission and uploads all the images to DD for processing, is this processing merely photogrammetry, or does the alignment of the images take cues from the flight map to help the photogrammetry figure out roughly where each photo "should" be located? In other words, does the DD flight map for the mission tell the software approximately where each photo should be located before the photogrammetry goes to work to make the blending precise?

The reason I ask? I just flew a mission over a forest and ran the photos through another photogrammetry app as an experiment. Since the treetops look like close-ups of broccoli, it was a mess. The app couldn't figure out anything except where a few buildings were, despite a high degree of overlap.

Interested to learn.
Thank you!

Whichever app you use to fly the mission (DD, Pix4D, Drone Harmony, etc.), you create a mission that sets the boundary and altitude along with the forward and side overlaps. This creates the flight path that gets uploaded to the drone.
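If you're curious, here's a rough sketch of the spacing math a planner does under the hood. The field-of-view and overlap numbers below are assumptions for a typical small mapping drone, not any particular model:

```python
import math

altitude_m = 60.0        # flight altitude above ground level
hfov_deg = 73.0          # assumed horizontal field of view
vfov_deg = 53.0          # assumed vertical field of view
front_overlap = 0.75     # 75% forward overlap
side_overlap = 0.65      # 65% side overlap

# Ground footprint of a single photo at this altitude
footprint_w = 2 * altitude_m * math.tan(math.radians(hfov_deg / 2))
footprint_h = 2 * altitude_m * math.tan(math.radians(vfov_deg / 2))

# Planner spacing: trigger distance along a leg, and gap between legs
shot_spacing = footprint_h * (1 - front_overlap)
line_spacing = footprint_w * (1 - side_overlap)

print(f"Footprint: {footprint_w:.0f} m x {footprint_h:.0f} m")
print(f"Shoot every {shot_spacing:.0f} m, fly lines {line_spacing:.0f} m apart")
```

Notice that raising the overlaps shrinks both spacings, which is exactly why higher overlap means more photos and longer missions.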

During the mission flight the drone will take photos based on the overlaps selected. The GPS position is stored in each photo's properties (EXIF data).
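You can see this for yourself by reading the EXIF block of any mission photo. A minimal sketch using Pillow, where "photo.jpg" is a placeholder filename:

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

GPS_IFD = 0x8825  # standard EXIF pointer to the GPS info block

def dms_to_degrees(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) rationals to a signed float."""
    deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -deg if ref in ("S", "W") else deg

img = Image.open("photo.jpg")  # placeholder filename
gps_raw = img.getexif().get_ifd(GPS_IFD)
gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}

lat = dms_to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
lon = dms_to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
print(f"This photo was taken at {lat:.6f}, {lon:.6f}")
```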

When you upload the flight photos for processing, DD sets the initial locations from the EXIF GPS data and then matches identifiable pixels across all the overlapping photos for that area. This is how 2D photos can create a 3D model: with good overlap, any given point on the ground appears in approximately 10 to 16 photos.
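That pixel-matching step is what the computer-vision world calls feature matching. Here's a toy sketch using OpenCV's ORB detector; real engines use far more robust detectors and a full structure-from-motion pipeline, so treat this as an illustration of the idea only (the filenames are placeholders):

```python
import cv2

# Two overlapping mission photos (placeholder filenames)
img1 = cv2.imread("photo_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive pixels (features) in each photo
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match features between the two photos, strongest matches first
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate tie points between the two photos")

# Each match ties a pixel in one photo to a pixel in the other; once the
# same ground point is seen in many photos, its 3D position can be
# triangulated.
```

This is also why a forest is so hard: every treetop looks like every other treetop, so the matcher struggles to find unique points to tie the photos together.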

As for trees and buildings, you have to make sure you're flying at the appropriate altitude to collect all the data needed, approximately 1.5 times the height of the objects (for 80' trees, that means at least roughly 120' above ground). Trees are hard to photograph in a way that stitches together well, because wind moves the branches between shots.

Hopefully this helps explain what’s going on.

Greg, thank you so much for the quick reply, both interesting and informative. The day I flew the map was windy, but the forest below was California oak trees, which don't move much. I was in excess of 1.5 times the height, but the stills may have been challenged by the changing sun, as clouds were moving. I took all the photos into Adobe Lightroom to "equalize" them as much as possible for the exposure and contrast variations, being sure to retain the metadata, of course. Based on your tips, I think I'll put them through the DroneDeploy processor and see what I get out the other end.

When you upload them in DD, before the actual upload process begins it will read all the photos' EXIF data and place a dot on the screen for each one. At that point you can see where your photos were taken.
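You can approximate that dot view yourself by plotting the EXIF fixes. A sketch with Pillow and matplotlib, where "flight_photos/" is a placeholder folder:

```python
import glob
import matplotlib.pyplot as plt
from PIL import Image
from PIL.ExifTags import GPSTAGS

def dms_to_degrees(dms, ref):
    deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -deg if ref in ("S", "W") else deg

lats, lons = [], []
for path in sorted(glob.glob("flight_photos/*.jpg")):  # placeholder folder
    gps_raw = Image.open(path).getexif().get_ifd(0x8825)
    gps = {GPSTAGS.get(t, t): v for t, v in gps_raw.items()}
    lats.append(dms_to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]))
    lons.append(dms_to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

# One dot per photo, same idea as DD's pre-upload map
plt.scatter(lons, lats, s=10)
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Photo capture locations")
plt.show()
```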

Once the map is generated you can view the wireframe of each image as well, so you can see how each image overlays the others.

The key to photogrammetry is that more photos equal better results, which means upping the forward and side overlaps. Of course, this will increase the mission time and could require multiple batteries to complete the mission. Flying too high can also cause less-than-stellar results. Example: at 80' you can see a quarter on the ground; at 250' you see a blurry stick the size of a forearm. So it depends on what quality results you want in the end. If you want great detail when zooming in, fly lower; otherwise fly higher.
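The quarter-versus-forearm example is ground sample distance (GSD) at work: the patch of ground covered by one pixel grows linearly with altitude. A back-of-the-envelope sketch; the focal and sensor numbers are assumptions roughly matching a popular 1"-sensor mapping drone, not a quoted spec:

```python
# GSD: centimeters of ground covered by one pixel at a given altitude.
# Focal/sensor numbers below are assumptions, not a quoted spec.
def gsd_cm_per_px(altitude_m, focal_mm=8.8,
                  sensor_width_mm=13.2, image_width_px=5472):
    return (altitude_m * 100 * sensor_width_mm) / (focal_mm * image_width_px)

for alt_ft in (80, 250):
    alt_m = alt_ft * 0.3048
    print(f"{alt_ft} ft -> {gsd_cm_per_px(alt_m):.2f} cm per pixel")
```

With those assumed numbers, 80' works out to roughly 0.7 cm per pixel, so a quarter spans a few pixels, while 250' is about 2 cm per pixel and a quarter is barely a single pixel.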

Thank you SO much. I’m learning. Here is probably my last question. Once you see the overlay with all the assorted photos in place, can you manually “deselect” (omit) certain photos which you suspect may be causing issues and reprocess to get a better final 3D model?
Thanks!

At the photo overlay screen of the upload process, you can't directly delete a photo dot, but you can resize the bounding perimeter (solid blue line) and exclude areas. You can also crop the final mission map after the processing is complete by clicking the (i) next to the date of the mission (left sidebar).

Remember, the more photos, the better chance you have of getting the map built with no holes or distortion.