Arctic Blast 2021

Going on a week of freezing temperatures with ice and snow… in Texas! Rolling power outages on about 2-hour cycles for the last 2 days. We are blessed to have gas utilities and a fireplace.

I thought it might be good to have a place for us all to share experiences and keep in touch while we are stuck inside! No drones in sight for probably another week.

Better here on the East Coast in NJ; 48 degrees with the snow from the last few weeks melting away. Staying indoors to manage risks and working on ways to efficiently process point clouds with a billion points. It is looking really good so far using selective decimation: central regions of importance are decimated less than outlying regions. Right now I am applying this to Lidar scans of structures, but it could also be useful with our clouds from drone missions. A top goal is performance, so that billion-point clouds can be imported and displayed in under a minute. Having great fun working on this in Rhino.
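
The gist of the selective decimation, as a rough Python/numpy sketch (the radius and keep ratios below are just illustrative values, not what I actually use):

```python
import numpy as np

def selective_decimate(points, center, r_core=10.0, keep_core=1.0, keep_outer=0.1):
    """Keep full density near the region of importance, thin the rest.

    points : (N, 3) array of XYZ
    center : (3,) XYZ of the region of importance
    r_core : radius (cloud units) kept at full density
    keep_core, keep_outer : fraction of points kept inside / outside r_core
    """
    d = np.linalg.norm(points - center, axis=1)
    keep_prob = np.where(d <= r_core, keep_core, keep_outer)
    rng = np.random.default_rng(0)
    mask = rng.random(len(points)) < keep_prob
    return points[mask]
```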

Regards,
Terry.

Good to hear! Our main processes are separating points into their respective clouds according to their classification and/or function on the site and then using inclusion/exclusion areas to filter each cloud accordingly. We then either rebuild a master cloud out of the optimized parts or view them all together as referenced clouds. This also allows us to replace updated regions rather than continuing to reprocess the entire base cloud.

Michael,

How big are your largest point clouds? How many points do they contain?

I am finding a limitation in Rhino because it has to fit a single-precision representation of the cloud into the VRAM of the GPU for display. This limits it to displaying clouds with fewer than 225 Mpts on a GPU with 11 GB of VRAM. But I understand that other programs do not have this low a limit.
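
The rough arithmetic behind that number, with 48 bytes per point as my working assumption for what ends up on the GPU (single-precision XYZ, a normal, RGBA color plus some buffer overhead):

```python
def max_displayable_points(vram_gb, bytes_per_point=48):
    """Rough upper bound on cloud size that fits in VRAM for display.

    bytes_per_point is an assumption: single-precision XYZ (12 B),
    normal (12 B), RGBA (4 B) plus driver/buffer overhead.
    """
    return vram_gb * 1e9 / bytes_per_point

print(max_displayable_points(11))   # ~229 million points on an 11 GB card
print(max_displayable_points(24))   # ~500 million points on a 24 GB card
```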

So are you able to view and navigate around clouds with a billion points in your tools? If so, what are the tools that work well?

Regards,
Terry.

Flew an Autel E2P the other day; it was sunny and clear (cold & windy) with a -25 to -35F wind chill, so it was a rather quick flight. Did a quick P4P DD flight the other day when temps got up to about 0 with no wind. Common theme: quick flights…

Our point clouds are nowhere near that big, with the larger ones being around 300-400 million points, but the workflow is the same regardless. You can knock files WAY down without too much degradation of accuracy if you know how to edit the individual aspects of the areas correctly. Everything is done in Carlson Precision 3D Topo and ends up as a surface in Civil or as a remastered point cloud in Navisworks.

Why do you knock down the size? Are they slow to handle in your tools at full size?

Do you knock them down with a target in mind that works well with your tools? What is the typical size you find efficient to use?

Regards,
Terry.

For us it is more about site optimization and DTM’ing. Minimizing “flat” triangles on the ground and removing noise around structures makes a big difference. CP3D runs according to cell, window, slope and vertical delta values. It also allows you to draw breaklines and to introduce ground survey points. A good file is usually 10% of the original count, but retains detail in critical areas where a standard decimation or grid resample doesn’t.

The only trouble we see with performance is usually insufficient computing power. Pretty much anything with 16 GB of RAM and a quad-core processor works fine.

Ok, so I am getting the picture that you work with what I consider tiny clouds, under 50 Mpts. I like this size also, but it is difficult to get there for many Lidar users who expect good detail on all their interior scans of buildings. They have so many scans (50 is not unusual) and limiting each to just 1 Mpts is not going to fly. So I am working on providing them a responsive experience by developing high-performance applications for processing the data and by getting the Rhino developers to expand their display capability.

Help is on the horizon with the new Nvidia RTX 3090, which has 24 GB of VRAM (and a year or so down the road it should be reasonably affordable). This will more than double the size of clouds viewable, from 200 Mpts to about 500 Mpts (which becomes around 2 billion with my adaptive decimation scheme).

Great to learn about how you handle these large datasets.

Thanks,
Terry.

Ok, so now understanding that you are talking about building scans and not photogrammetry point clouds, all I can say is good luck. We have the same problems with terrestrial scanners, and most of it boils down to not having good scanner operators and too much information being captured. There is a huge gap in the industry between people who can just scan and people who can process, and anyone who can do both is very rare.

I am finding I can do a lot in pre-processing the data to overcome difficulties with the scan data volume. The scanner location is recorded in the e57 files typically used to store the point clouds. This immediately enables filtering the exterior scans, which dominate the data volume, by distance to eliminate clutter outside the area of interest. In addition, the exterior scans can be more aggressively decimated as they are less important in the BIM/CAD flow. After this, what remains becomes much more manageable and still contains all the critical information. It is still early days in playing with the data and I expect more improvements as I progress further in developing the tool for Rhino users.
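
As a rough Python/numpy sketch of that pre-processing (this assumes each scan has already been read out of the e57 into an N x 3 array with a reader like pye57, and the range and decimation factor are just illustrative):

```python
import numpy as np

def prefilter_exterior_scan(points, scanner_origin, max_range=30.0, keep_every=5):
    """Drop points beyond max_range of the scanner, then thin what's left.

    points         : (N, 3) XYZ from one exterior scan
    scanner_origin : (3,) scanner location recorded in the e57 file
    max_range      : metres; clutter beyond this is outside the area of interest
    keep_every     : fixed decimation factor applied to the surviving points
    """
    d = np.linalg.norm(points - scanner_origin, axis=1)
    kept = points[d <= max_range]
    return kept[::keep_every]
```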

Again, great to learn about your experiences in this domain.

Regards,
Terry.

Yes, the maximum scan distance parameter is a good way to limit unneeded external data, but that should be done in the scanner and touched up in post-processing. More effectively, the operator needs to set their LoD correctly and plan their overlaps correctly. Usually an LoD of 10mm is plenty dense. We typically run 15mm and then change our overlap plan to capture more detail on objects that 15mm does not sufficiently capture. The only time we ever do external scans anymore is if it is an extensive remodel between structures. Otherwise a properly configured drone scan with a few key ground observations will do just fine. The laser scanner then comes back in once the demolition is done.

For other reasons on the side of visualization take a look at Cintoo Cloud.

Hey guys,

Wishing you and your families all the best as you recover from these storms.

We are expecting to add LiDAR point cloud upload, visualisation, classification and measurement later this year. It would be interesting to see some of the more extreme examples to test our 3D tiler and the limits of display in WebGL, so do drop me a private message or email if you are able to give us access.

There are some extreme subsampling methods around for point clouds - though generally these focus on removing points in planar areas - requiring some clever visualisation tools to fill in the gaps. I believe they may require point normals or estimated normals. @SolarBarn are you able to share your decimation method?
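
For anyone curious, the planar-area subsampling idea is roughly: estimate how flat each point's neighbourhood is from its covariance and thin hard where it is flat. A toy numpy/scipy version (the radius, threshold and keep ratio are illustrative only, and the per-point loop is far too slow for production-size clouds):

```python
import numpy as np
from scipy.spatial import cKDTree

def planar_subsample(points, radius=0.2, flat_thresh=0.01, keep_flat=0.05):
    """Keep all 'detail' points, keep only a fraction of points in flat areas."""
    tree = cKDTree(points)
    rng = np.random.default_rng(0)
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 5:
            continue                              # too few neighbours to judge
        evals = np.linalg.eigvalsh(np.cov(points[idx].T))   # ascending order
        if evals.sum() == 0:
            continue
        # smallest eigenvalue near zero means the neighbourhood is planar
        if evals[0] / evals.sum() < flat_thresh:
            keep[i] = rng.random() < keep_flat
    return points[keep]
```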

James

For decimation, so far I just do what I said above: the exterior scans are filtered by distance and the remaining points are decimated by a fixed factor. There are many other possibilities but it is still early days in my explorations of these.

I will check with the owner of my largest set of scans to see if I can give you access. It does not contain normals, so you will have to construct those. Currently he waits 3 hours to load the scans and generate normals. My goal is a few minutes. A long way to go, but again there are many possibilities for getting there. Maybe Michael knows a tool that will load 10 Mpt clouds and generate normals quickly?

Regards,
Terry.

One solution that is developing is the use of gaming engines for very dense and large point clouds. This becomes a bigger factor particularly when visualization is involved, because of the superior color matching and shading of the gaming engines. As far as an easy way to compute normals, unfortunately there still isn’t a sufficiently robust solution to produce them in the manner that we use them, so we are all still developing. We have found that generating normals is also dependent on the material and reflectivity, so that is another reason to break down the models so that each characteristic can be dealt with in its most compatible manner.

I came across this article, which was helpful to me since I know little about Python. You may have already been through this, but it may still be helpful.
https://towardsdatascience.com/5-step-guide-to-generate-3d-meshes-from-point-clouds-with-python-36bad397d8ba

I do not understand how the normals generation process is sensitive to material and reflectivity. The point clouds I am getting from the Lidar guys have no material, just color for the points. The reflectivity may be the same as what they are calling intensity, but I do not see how this enters into calculating normals. My approach to computing normals is to take the cross-product of two vectors constructed from the sides of a triangle defined by 3 cloud points that are adjacent to each other. Finding 3 adjacent points that are not too close to collinear, so that a good normal can be computed, is the heart of the problem. For clouds with 10-1000 Mpts you need to find a better way to identify adjacent points than using conventional search techniques, even those that run in n log n time, if you want to find the normals in minutes rather than hours. I think this is possible for some cases.
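
For smaller clouds where a KD-tree is still affordable, the basic cross-product step looks something like this (scipy's cKDTree here is just a stand-in for the faster neighbour lookup I am working on, and the collinearity test is a simple illustrative one):

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=8, min_sin=0.1):
    """Per-point normal from the cross product of two edges to nearby points."""
    tree = cKDTree(points)
    _, nbrs = tree.query(points, k=k + 1)          # column 0 is the point itself
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        # try successive neighbour pairs until one is not close to collinear
        for a, b in zip(nbrs[i, 1:-1], nbrs[i, 2:]):
            e1, e2 = points[a] - p, points[b] - p
            n = np.cross(e1, e2)
            n_len = np.linalg.norm(n)
            # |e1 x e2| = |e1||e2| sin(angle); reject pairs with a small sine
            if n_len >= min_sin * np.linalg.norm(e1) * np.linalg.norm(e2):
                normals[i] = n / n_len
                break
    return normals
```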

In terms of gaming engines, these employ the GPU hardware to do the heavy lifting, which can now provide tens of teraflops of performance for ripping through the calculations. Rhino makes use of the GPU in this way to implement a clipping plane. A clipping plane removes points above the plane so you can get a cross-section view inside a building. Even with 200 Mpt clouds, rotating/moving/tilting the clipping plane to show a different cross-section is very fast. I looked at CPU & GPU activity while rotating the clipping plane and could see the GPU hitting over 85% activity while the CPU was doing next to nothing. This is a great example of the GPU's benefit for point cloud display. I would imagine your tools have a responsive clipping plane function also.
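
The test the GPU is grinding through for each point is just a signed-distance check against the plane; in numpy terms it amounts to something like:

```python
import numpy as np

def clip_points(points, plane_point, plane_normal):
    """Keep the points on or below a clipping plane (drop those above it)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    signed_dist = (points - plane_point) @ n    # positive = above the plane
    return points[signed_dist <= 0.0]
```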

One thing I am not yet working on is using the GPU hardware to do massively parallel normals computation. I have yet to write a program that uses thousands of CUDA cores to do parallel calculations, but this might be a route to faster normals generation. Maybe some of your tools already use this method; if so, it could be a marketing bragging point. There are several articles describing this approach if you search for: CUDA cores for computing normals of point clouds
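
If I do go down that road, something like Numba's CUDA support would let the cross products run one thread per point; a minimal sketch, assuming the two neighbour indices per point have already been found (which is really the hard part):

```python
import math
import numpy as np
from numba import cuda

@cuda.jit
def normals_kernel(points, nbr_a, nbr_b, out):
    """One thread per point: cross product of the two edges to its neighbours."""
    i = cuda.grid(1)
    if i < points.shape[0]:
        ax = points[nbr_a[i], 0] - points[i, 0]
        ay = points[nbr_a[i], 1] - points[i, 1]
        az = points[nbr_a[i], 2] - points[i, 2]
        bx = points[nbr_b[i], 0] - points[i, 0]
        by = points[nbr_b[i], 1] - points[i, 1]
        bz = points[nbr_b[i], 2] - points[i, 2]
        nx = ay * bz - az * by
        ny = az * bx - ax * bz
        nz = ax * by - ay * bx
        length = math.sqrt(nx * nx + ny * ny + nz * nz)
        if length > 0.0:
            out[i, 0] = nx / length
            out[i, 1] = ny / length
            out[i, 2] = nz / length

# toy launch: random points and placeholder neighbour indices
n_pts = 1_000_000
pts = np.random.rand(n_pts, 3).astype(np.float32)
nbr_a = np.random.randint(0, n_pts, n_pts)
nbr_b = np.random.randint(0, n_pts, n_pts)
d_out = cuda.device_array_like(pts)
threads = 256
blocks = (n_pts + threads - 1) // threads
normals_kernel[blocks, threads](cuda.to_device(pts), cuda.to_device(nbr_a),
                                cuda.to_device(nbr_b), d_out)
normals = d_out.copy_to_host()
```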

So I will continue working on my point cloud Python/C++ code to find good surface normals quickly.

Hopefully the weather will warm up for you soon. So sad to see Texas in so much trouble due to the weather. Hopefully lessons will be learned so there is no repeat in the years to come.

Regards,
Terry.

Intensity measures the return strength of the laser pulse that generated the point. It is based, in part, on the reflectivity of the object struck by the laser pulse. Reflectivity is a function of the wavelength used, which is most commonly in the near infrared. Each material will produce a standard pattern and density. Look at point clouds of concrete, brick, metal objects, etc. They each have their own characteristics, which are pretty standard once you have come across enough of them, and it only made sense, in the same way we treat photogrammetry point clouds, to separate them in order to deal with different aspects of the project most efficiently. I.e., we treat a concrete tilt wall completely differently than we treat road base.

The reference to the gaming engines is more along the lines of visualization, which will greatly aid the loading process, enabling you to view larger point clouds more effectively. The GPU's role in the analysis that point cloud editing software does is purely number crunching.

Nice! Do you have any articles you would recommend?

It was very eye-opening, but we were very blessed compared to a lot of others. We stay prepared with rations, and even though our municipal water supply had a glitch we never totally lost water. Obviously the Texas infrastructure never dreamed that we could experience something like this. I have lived in Central Texas for 20+ years and it has snowed no more than 5 times. Since mid-December it has snowed 3 times and we have had freezing temperatures for a solid week.