A newbie here (just a couple of flights), so forgive my very basic question. When one flies a DD mission and uploads all the images to DD for processing, is that processing purely photogrammetry, or does the alignment of the images take cues from the flight map to help the photogrammetry figure out roughly where each photo “should” be located? In other words, does the DD flight map for the mission tell the software approximately where each photo belongs before the photogrammetry goes to work to make the alignment precise?
Reason I ask: I just flew a mission over a forest and ran the photos through another photogrammetry app as an experiment. Since the treetops look like close-ups of broccoli, it was a mess. The app couldn’t figure out anything except where a few buildings were, despite a high degree of overlap.
Interested to learn.
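For what it’s worth, the images themselves carry GPS EXIF tags, and many photogrammetry tools can use those as rough position priors before the fine alignment step (whether DD does is exactly my question). As a small illustrative sketch, not anyone’s actual pipeline, here is how an EXIF-style degrees/minutes/seconds geotag decodes to decimal coordinates (the helper name is mine):

```python
# Illustrative only: EXIF GPS tags store latitude/longitude as
# degree/minute/second rationals plus a hemisphere reference letter.
# Tools that seed alignment with geotags first convert these to
# signed decimal degrees.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert (deg, min, sec) plus 'N'/'S'/'E'/'W' to signed decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

# Example geotag: 37° 46' 30" N, 122° 25' 12" W
lat = dms_to_decimal(37, 46, 30, "N")   # approximately 37.775
lon = dms_to_decimal(122, 25, 12, "W")  # approximately -122.42
```

A geotag like this only gives a rough camera position (a few meters off at best), which is why the photogrammetry still has to refine the alignment afterward.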