It’s something that inevitably needs to be done any time you or your client want to know the contour and drainage characteristics of the actual ground. DTM’ing (Digital Terrain Modeling) is the process of removing every point that is not a ground point from the DEM, or in our case the LAS point cloud. This provides nice smooth contours of what the terrain is actually doing while excluding things like structures, vehicles and trees. The DTM’ing process can even mitigate a fair amount of vegetation as long as there are enough low points in the scan. Software like DroneDeploy, Pix4D and Metashape promises to create DTMs through its processing, but from firsthand experience I can say beware. Those solutions are also not capable of identifying, preserving and/or completely removing existing stockpiles. If you plan on really using a DTM, verify the heck out of it.
In construction we use this process once we have cleared the land, in order to document how much topsoil we have actually stripped and to compare against the topographic survey that was provided to us pre-construction. This gives us the real picture of what material we will need to bring in and what we will need to get rid of. We also have to have a DTM when we do progress topos, to verify grade at a specific part of the process or to measure the amount of work still to be completed.
If you have been around the forum long enough you probably know that we use Carlson Precision 3D Topo to create our DTMs. Very few point cloud editors give you manual control over the aspects of the DTM’ing process that photogrammetry software buries inside its algorithms. The software identifies low points within the analysis area you define and then removes any points that vary outside the tolerances you set.
It starts with a Bareground filter. A good starting point is a 1ft grid in a 15ft window. This means the software looks at each 1ft grid cell, determines the low point in that cell, and then averages those low points across the 15ft window. Any point above that average by more than a max distance (we use 0.25ft) is classified as not “ground”. You can then run what is called an Outlier filter to catch any points that sit far away from any group of points, removing things like the tops of buildings or trees. These settings are what I have found to be the best balance of ground accuracy and processing time. You could run a 10ft window, but there will be more cleanup at the end and I really haven’t seen significant gains in ground accuracy.
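To make the two filters concrete, here is a rough sketch of the logic in Python/NumPy. To be clear, this is my own reconstruction from the behavior described above, not Carlson’s actual algorithm: the function names are mine, the outlier check is a simple neighbor-count stand-in, and a real implementation would be far more optimized.

```python
import numpy as np
from collections import defaultdict

def bareground(points, grid=1.0, window=15.0, max_delta=0.25):
    """Flag ground points (True) in an (N, 3) array of x/y/z in feet.

    Finds the low point in each `grid` cell, averages those lows across
    the surrounding `window`, and rejects anything more than `max_delta`
    above that average -- the 1ft grid / 15ft window / 0.25ft scheme.
    """
    cells = np.floor(points[:, :2] / grid).astype(int)
    low = {}                              # lowest z seen in each occupied cell
    for c, z in zip(map(tuple, cells), points[:, 2]):
        if c not in low or z < low[c]:
            low[c] = z

    half = int(window / grid) // 2        # cells on each side of the window
    mask = np.empty(len(points), dtype=bool)
    for i, (c, z) in enumerate(zip(map(tuple, cells), points[:, 2])):
        lows = [low[(c[0] + dx, c[1] + dy)]
                for dx in range(-half, half + 1)
                for dy in range(-half, half + 1)
                if (c[0] + dx, c[1] + dy) in low]
        mask[i] = z <= np.mean(lows) + max_delta
    return mask

def outlier(points, cell=5.0, min_neighbors=3):
    """Flag points to keep (True); drops isolated returns (e.g. a stray
    rooftop or treetop hit) with few neighbors in a 3x3 patch of cells."""
    cells = np.floor(points[:, :2] / cell).astype(int)
    counts = defaultdict(int)
    for c in map(tuple, cells):
        counts[c] += 1
    keep = np.empty(len(points), dtype=bool)
    for i, (cx, cy) in enumerate(map(tuple, cells)):
        n = sum(counts.get((cx + dx, cy + dy), 0)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1))
        keep[i] = n - 1 >= min_neighbors  # don't count the point itself
    return keep

# Tiny demo: a flat 30x30 ft pad at z = 0 with one "treetop" return at 20 ft.
pts = np.array([[x, y, 0.0] for x in range(0, 30, 2) for y in range(0, 30, 2)]
               + [[15.0, 15.0, 20.0]])
print(bareground(pts).sum(), "of", len(pts), "points kept as ground")
# → 225 of 226 points kept as ground (the treetop return is dropped)
```

The key design point is that the threshold is measured against an *average* of local lows rather than a single minimum, which is why a bigger window smooths more aggressively but risks eating real ground in areas with heavy grade relief.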
Here is a DSM (Digital Surface Model) point cloud of a property where we are about to begin construction. As you can see it is heavily wooded. We needed a rough idea (+/- 0.5ft) of what the contours were, because it had already been determined that the survey was challenged and that material import/export was going to be very important.
Here it is after the first two filters, which took 8 minutes to run. The nice thing is that the removed points are retained in a separate point cloud so they can be reprocessed or brought back if needed. A lot of points have been removed, but you can see that points were still captured in the small openings, which the mesh will stitch across; in this cloud the maximum gap ended up being 55ft.
The last step is to view the points in a colored-by-elevation mode, which lets you easily remove any points that were not caught by the Bareground and Outlier filters. You can see that there are some tree tops that were not removed. You could get these with the filters, but the larger the window gets, the more risk you take of removing needed points, especially in areas with a lot of grade relief.
Once the last of the points are removed you get a nice gradient in the colorization and are ready to create the mesh surface. Manual removal is similar to ReCap or Reality Capture and is a bit tedious, but this 75 acres took 25 minutes.
The bottom line is that using a point cloud editor with these capabilities lets you keep much more actual ground data, because you can work from the native point cloud that hasn’t been decimated. So far it has smoothly displayed up to 100 million points. This cloud in particular went from 32 million points to 11 million while keeping nearly all the true ground points.