How is elevation data calculated from an image?

Can anyone explain to me how elevation data is calculated from a drone image? How is this information obtained from imagery? And, in the case of a hill, how is the elevation change tracked across the entire extent of the hill, even though only a few images may have been provided as the data input?

How does this work? I am not looking for the exact math, but I am looking for something a bit more detailed than "the drone has the x, y, and z coordinates and extrapolates elevation from a referenced object."

Hi Henry,

DroneDeploy’s blog and Support Docs have a lot of helpful real-life use cases and information. I’d start with this post: There are some links in here that should help you as well.


I read that article; however, it does not answer the basic question: how does DroneDeploy infer elevation from an image taken by a drone? I've also done some research by googling photogrammetry, but have not found an answer I can use in a presentation.

Ok, so the support doc linked at the bottom of the post doesn't help? I just wanted to provide a primer with deeper dives: Please let me know if this makes sense. Thanks!

The information you have provided tells me what I can do with the data obtained through DroneDeploy and how to use it with the Elevation Toolbox. What I am interested in is how the data is obtained in the first place.

Someone is going to ask me how a drone is able to calculate the height of an object in an image and I would like to be able to explain that.

I am looking for a simplified explanation of something like this:

This article may lead you in the right direction:

Photogrammetry works by finding matching points across a set of overlapping pictures so they can all be aligned as one. Since you fly the map at a constant altitude, you can track the changes in elevation of the surface below, so the easy answer is math: the processing figures out the difference. But the bigger question is how accurate the elevation data is when you aren't using LIDAR. The elevation map created looks like a LIDAR map; it works like this:
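Not DroneDeploy's actual pipeline (that part is proprietary), but here is a toy sketch of the underlying stereo geometry: the same point appears shifted between two overlapping photos (the "disparity"), and the size of that shift encodes how far the point is from the camera. All the numbers below are made up for illustration:

```python
def elevation_from_disparity(flight_alt_m, baseline_m, focal_px, disparity_px):
    # Classic stereo relation: distance from camera Z = f * B / d,
    # where B is the distance flown between the two shots, f is the
    # focal length in pixels, and d is the pixel shift of the matched point.
    depth_m = focal_px * baseline_m / disparity_px
    # The drone's constant altitude is the reference: ground elevation
    # is whatever is left after subtracting the measured depth.
    return flight_alt_m - depth_m

# Flying at 100 m with a 20 m baseline and a 1000 px focal length:
# flat ground (100 m away) shifts 200 px, while a 30 m hilltop
# (only 70 m away) shifts ~286 px - closer points shift more.
print(elevation_from_disparity(100, 20, 1000, 200))  # 0.0 (flat ground)
```

Real software matches thousands of points per image pair and bundles all the overlaps together, but each point's elevation comes from this same shift-versus-distance relationship.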


Fantastic answer and video, @JoeyShea! I can apply the explanation of the LIDAR-based approach to my developing understanding of photogrammetry. Since photogrammetry uses the apparent shift of the same point between two different images, the elevation of that point - as referenced against the known elevation of the drone (which acts as a constant) - can be inferred.

Two points and an angle are all that is needed to calculate where the third point of a triangle sits, and that third point gives the elevation of the imaged object or surface. The mental leap is that the two bottom points of the triangle exist in two different images.
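To make that triangle concrete, here is a small sketch (my own illustration, not DroneDeploy code): two observation points along a baseline, each with an angle up to the same target, pin down the third vertex, and the height falls out of the trigonometry:

```python
import math

def height_by_triangulation(x1, angle1_deg, x2, angle2_deg):
    # Each observation point sees the target at some elevation angle;
    # tan(angle) = height / horizontal distance, so the two sightlines
    # intersect at height h = (x2 - x1) / (cot(a1) - cot(a2)).
    cot1 = 1.0 / math.tan(math.radians(angle1_deg))
    cot2 = 1.0 / math.tan(math.radians(angle2_deg))
    return (x2 - x1) / (cot1 - cot2)

# A 50 m tall target seen from x=0 (angle ~26.57 deg) and from
# x=50 (angle 45 deg) along the same baseline:
h = height_by_triangulation(0, math.degrees(math.atan(0.5)), 50, 45)
print(round(h, 1))  # 50.0
```

In the drone case the two "observation points" are the camera's two positions along the flight line, which is exactly why the bottom of the triangle spans two different images.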


And the same concept is used in astronomy to measure the distance to a star! Reference:
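For anyone curious, the astronomy version (stellar parallax) is the same triangle with Earth's orbit as the baseline. A quick sketch of the standard relation, just for illustration:

```python
def distance_parsecs(parallax_arcsec):
    # Baseline = 1 AU (Earth-Sun distance); the parsec is defined so
    # that distance (pc) = 1 / parallax (arcseconds).
    return 1.0 / parallax_arcsec

# Proxima Centauri's measured parallax is about 0.7685 arcsec:
print(round(distance_parsecs(0.7685), 2))  # 1.3 parsecs
```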

Thanks, Joe. That is super helpful. Sorry I misunderstood the original question, Henry.

I had the same question and found this article to be pretty handy: