Image capture resolution question. (It's weird and I'm confused.)

This question is going to sound weird, and I’m cringing at myself as I ask, but here goes.

I had been flying my Phantom 4 Pro V2 with the DroneDeploy planning app on my iPad. The photos it captured were always 4864 x 3648 pixels. I never thought much of it and didn’t crunch the math, but that is 17.7 megapixels (a 1.333, i.e. 4:3, aspect ratio). The Phantom 4 Pro V2 has a 20 megapixel camera.

My Mavic also has a 20 megapixel camera and captures images that are 5472 x 3648 (a 1.5, i.e. 3:2, aspect ratio). Again, I wasn’t paying too close attention, but that works out to the true 20 megapixel size.

Now, due to some iPad vs. DroneDeploy issues, I tested out the Pix4D Capture app on Monday, and when flying with that app, all my Phantom 4 Pro V2 images are coming back at 5472 x 3648 resolution … the full 20 megapixels.

So this is weird, and I feel dumb to be just noticing it now. I haven’t seen anything in the DroneDeploy app where I would set the aspect ratio of my images or request a lower megapixel count. Can anyone explain what’s going on, or what I must have done to get smaller images when flying with the DroneDeploy app? I’m confused …
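Crunching the math I skipped earlier, the two resolutions work out like this:

```python
# Megapixels and aspect ratio for the two capture resolutions
# reported above, both from the same 20 MP camera.

def describe(width_px, height_px):
    megapixels = width_px * height_px / 1e6
    aspect = width_px / height_px
    return round(megapixels, 1), round(aspect, 3)

print(describe(4864, 3648))  # DroneDeploy capture: 17.7 MP, 1.333 (4:3)
print(describe(5472, 3648))  # Pix4D Capture:      20.0 MP, 1.5   (3:2)
```

Note the height (3648 px) is identical in both cases and only the width differs, so the 4:3 images look like a center crop of the full 3:2 sensor rather than a downscale.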



First off, you’re not crazy! It will be OK. DroneDeploy decided after some testing that there wasn’t any benefit to 3:2 vs. 4:3. They claim they actually saw some issues with 3:2 that didn’t occur with 4:3. I can understand the extra width being a sidelap advantage, but I also recognize that the farther you get from the focal point, the blurrier the image edges become, and there’s a tiny amount of additional write time. You can see part of this when running missions with oblique images: yes, you capture more territory per shot, but how much of it out in front of the focal point is actually good data? In addition, I am learning through experience that sidelap is overrated; there are countless missions I have run at 75-80% when 65% would have been sufficient. All that said, I think this decision was made a long time ago, and the reasons for it may not be as applicable today.


Thanks for the background info! I just didn’t know what to think about it. I suppose you can come up with reasons for and against anything, so they probably had some good reasons.

I know that the larger the perspective change in an overlapping area, the harder it is to find matching features (especially when the area is dominated by things like trees or tall corn). This is why it can help to fly an area at a higher altitude if you aren’t getting a good stitch (less perspective change between overlapping images, and often a greater variety of features to see).
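That altitude effect is easy to put numbers on: the perspective change between two overlapping shots of the same ground point is roughly the angle the camera baseline subtends at that altitude. A minimal sketch (flat terrain and a nadir-pointed camera are assumptions):

```python
import math

def perspective_change_deg(baseline_m, altitude_m):
    """Angle between the viewing rays to the same ground point from two
    camera positions `baseline_m` apart, flying at `altitude_m` AGL.
    Assumes flat terrain and a nadir-pointed camera."""
    return math.degrees(math.atan2(baseline_m, altitude_m))

# Same 20 m between exposures, two different altitudes:
print(perspective_change_deg(20, 40))   # low flight -> ~26.6 deg of perspective change
print(perspective_change_deg(20, 120))  # 3x higher -> only ~9.5 deg
```

Tripling the altitude cuts the perspective change by almost a factor of three here, which is exactly why the feature matcher has an easier time.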

I have been writing my own mapping/stitching software to support my university projects. We have a use case (finding a needle in a haystack) that isn’t well supported by the typical commercial outputs of a 3D mesh or static orthophoto:

We need to fly very low to get the most detail possible, and we end up flying in challenging areas such as forests or mature corn fields, or areas with very steep terrain. We ended up needing to come up with some new ideas in order to create map stitches that worked for our project needs.

Anyway, for my tools, the sidelap matching helps quite a bit. The more image pairs I can connect, and the more redundancy I can find in the matching set, the better my optimizer does at fitting everything together. When the amount of overlap is sketchy, you run the risk of not finding enough connections between images, which can leave voids in the final result (or, in the case of tools like Pix4D, the final result can blow up and turn into a big mess). All that said, DD always did a really nice job with the input data I gave it (sadly, our license ran out, we didn’t have funding to keep it current, and I lost all my maps).

I find that if you are imaging things that are more marshy/grassy, bare ground, or short/early crops, you can get away with a lot less overlap. Like everything, it all depends on what you are imaging and what you need to get out of the result. :slight_smile:
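To put rough numbers on the sidelap trade-off discussed above: flight-line spacing follows directly from the image footprint, which scales linearly with altitude. A sketch using the Phantom 4 Pro’s nominal 1-inch sensor (13.2 mm wide, 8.8 mm focal length) as assumed values:

```python
def line_spacing_m(altitude_m, sidelap,
                   sensor_width_mm=13.2, focal_length_mm=8.8):
    """Flight-line spacing for a target sidelap fraction.

    Ground footprint width = altitude * sensor_width / focal_length.
    Sensor defaults are the Phantom 4 Pro's nominal 1-inch sensor;
    adjust for other cameras.
    """
    footprint_m = altitude_m * sensor_width_mm / focal_length_mm
    return footprint_m * (1 - sidelap)

# At 50 m AGL the footprint is 75 m wide, so:
print(line_spacing_m(50, 0.75))  # 75% sidelap -> 18.75 m between lines
print(line_spacing_m(50, 0.65))  # 65% sidelap -> wider spacing, fewer passes
```

Dropping from 75% to 65% sidelap widens the line spacing from 18.75 m to about 26 m here, so the same field takes noticeably fewer passes.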


This goes along with the reason why I (and DD) don’t recommend using GCPs with oblique imagery. The skew from those shallow angles distorts everything to the point that even the matches you do get can be misleading. Matches are analyzed per pixel (you could also think of them as tie points), so we need to think about the perspectives we capture our subjects from, with good focus and exposure, as much as the amount of overlap. I assume you have tried every configuration under the sun, but have you considered multiple elevations and including oblique imagery? I like referring back to @Jamespipe’s metaphor of painting a house: to get the best 3D model possible, you need to capture each detail of each subject from all directions. We are limited in what we can do from the air, but think about perspective.

I have changed the category to include How To in order to overcome the website issue with ignoring General Discussion only threads.

Not to get too far off into the weeds (so to speak), but our use case is hunting for invasive plants (aka weeds) using drones. Funding is through the Minnesota Dept. of Agriculture; they have a small group focused on tracking and combating invasive plants in the state. They came to us asking if drone technology could help, so we set up a little trial project that has expanded into a couple-year adventure. Currently we are hunting oriental bittersweet, an invasive vine that grows up around trees and wraps them like a python. Over the course of a few years, the vine chokes out the native tree and kills it completely, leaving a big rat’s nest of invasive vine dangling from the original tree core. The interesting thing is that oriental bittersweet has bright orange/red berries that stay on the vine all winter. As soon as the leaves drop off the trees/vines in the fall, we can start flying surveys. However, we need to fly pretty low to see the berries clearly, and we are typically flying over forested areas, so it’s hard to stitch for all the usual reasons. Also, oriental bittersweet likes to sneak up the trunks of evergreen trees, so it’s really handy (for us) to see a bit of oblique perspective:

3D models and point clouds don’t do much for us because we need to see all the detail … and orthophotos aren’t perfect either, because from the top down you’d never see these infestations.


That is a cool use that wouldn’t occur to most. Flying oblique imagery would allow you to fly lower and still get proper overlaps, but I would still suggest a higher nadir flight for overall map-stitching purposes. This may not be applicable to your case, though. The other thing with oblique imagery is that you have the images themselves to more clearly analyze that type of occurrence. Think of getting 1 sq ft of the vine in view versus 4 sq ft from an oblique side view. Also, have you ever thought about post-processing and filtering your imagery to include only the band that you want? In this instance, cutting the blues and greens. That would effectively give you a black-and-white map that preserves the orange-red-yellow. You could then bring it into image-processing software (or build one yourself) that quickly locates areas of a certain color gamut. This comes to mind because we “almost” lost an ORANGE drone several years ago. It decided it was going to stage-exit right, and there was nothing we could do except watch it go and then fall. The logs gave us a pretty good idea of where it was, so we sent up another drone to map that area, brought the imagery in, cut everything except red-orange-yellow, and after analyzing the map we found the drone in a tree, since it was the only thing of that color in the area.
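The band-cutting idea can be prototyped in a few lines of NumPy. This is only a sketch; the `red_margin` threshold is an illustrative assumption you would tune against real imagery:

```python
import numpy as np

def highlight_warm_colors(rgb, red_margin=30):
    """Zero out pixels that aren't clearly red/orange.

    rgb: HxWx3 uint8 array. A pixel is kept when its red channel
    exceeds both green and blue by `red_margin` -- a crude stand-in
    for cutting the blue and green bands. (True yellows, where red
    and green are both high, would need a looser rule.)
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    keep = (r - g > red_margin) & (r - b > red_margin)
    out = np.zeros_like(rgb)
    out[keep] = rgb[keep]
    return out
```

Running the surviving pixels through a connected-components pass (e.g. `scipy.ndimage.label`) would then flag candidate areas of that color gamut automatically.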

Here’s one of the images. Not easy to see at full frame, but when you zoom in and pan around… Where’s Waldo?


Yes, to all of that. Again, our use case is finding needles in a haystack (aka sneaky vines), so the slightly oblique perspective can be really helpful. Cycling between all the available views (the original pictures) laid out in exact ortho position is helpful (we developed a viewer tool to do this). We did create a ‘filter’ that tries to highlight the reddish stuff. It’s also really good at picking out the tail lights of your car. We can toggle it on/off (not too different from what DD’s web viewer can do), but it is customized to our specific color band. I’ve been dabbling in a bit of machine vision/learning to see if it is productive for pre-screening the pictures to save human effort, but I haven’t gotten as far with that as I would have liked.

Cool story about the lost drone! It’s interesting what you discover out in the woods. I’ve found a lost kite and lots of deer–one was looking up and tracking me through multiple images, even on the 2nd pass through the area. Monday I found a bunch of weird footprints in the ice/snow … they looked like dinosaur prints, so probably some kind of bird. I know that sandhill cranes come through, but I didn’t think any of those bigger migratory birds would be in MN this time of year.


That’s what I’m talking about!

Glad you found your wayward drone! Hopefully you then knew a 9-year-old kid who likes to climb trees without asking permission from their parents. :slight_smile:


Saw this in a report I was just reviewing and thought of the edge distortion I mentioned earlier. Note the distortion at the sides and how it would carry on if the image were wider.


Nice project and cool, useful pictures!


Totally agree. If there’s anything you can update, please do.
Also, I’m not such an expert, and I was wondering if I might ask you some questions, since I can see you know all these things much better than I do. Thanks.


@Lieniner, not sure if your post was directed at me, but if it was, I’m happy to share anything I know (or think I know). Usually that should be a short conversation, but my fingers do like to type. :slight_smile:

This topic is interesting and leads to my question: does anybody know how to calculate the percentage split between red and green in a plant health map? I want to establish how much bare earth there is relative to ground cover so pastoralists can decide stocking rates for the amount of available feed. :smiley: DD cannot do this at the moment, but I thought they might sometime. If somebody knows how to create something that works, I would like to hear from you.


Back when we had an active DD license, I thought I saw an area in the web interface where you could view your map through a variety of filters/indices. But maybe you are asking about getting an analytical result, not just seeing it?

I have been pondering what it would take to scan through all my images with a machine-learning algorithm and flag all the areas that have some bright red in them. I’ve experimented a bit with machine learning (classification) and have grown a bit more skeptical as a result (not that it can’t be a useful tool, but I’m more and more skeptical of people’s reported accuracy results).


The only ways I can think of to do it right now, without a really specialized piece of software, are to annotate the areas in DroneDeploy using the area-measurement tool, or to bring the data into GIS software like QGIS. You would export the georeferenced JPG or TIFF raster and then have QGIS quantify it using zonal statistics. If you are not familiar with QGIS, you need to try it. It is a very powerful analyzer and converter for our drone data.
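For the red/green percentage question above, the per-pixel counting that QGIS does with zonal statistics can also be sketched in NumPy on the exported raster. The green-dominates-red rule here is a crude stand-in for a proper RGB vegetation index such as VARI or ExG:

```python
import numpy as np

def ground_cover_percent(rgb):
    """Percent of pixels classified as vegetation (green channel
    dominates red); the remainder approximates bare earth.

    rgb: HxWx3 array read from an exported, georeferenced orthophoto."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    vegetation = g > r
    return 100.0 * vegetation.sum() / vegetation.size
```

In QGIS the equivalent would be a raster-calculator expression producing a 0/1 vegetation layer, followed by zonal statistics over each paddock polygon to get the per-paddock percentage.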

Thank you for your help. Will look at QGIS.
