Difference between the 2 OBJ files when exporting a 3D model: textured and decimated_textured?

Hello,
What is the difference between the two OBJ files exported in a 3D model .zip file? One is named “scene_mesh_textured.obj” and the other “scene_mesh_decimated_textured.obj”.

Thank you

When I download the OBJ model, I get 2 models that are almost exactly the same size. They both have the exact same number of vertices and faces. What is the difference?
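One quick way to check this is to tally the “v ” (vertex) and “f ” (face) records of each file; a minimal sketch:

    # Count vertex ("v ") and face ("f ") records in a Wavefront OBJ file.
    def obj_counts(path):
        verts = faces = 0
        with open(path) as fh:
            for line in fh:
                if line.startswith("v "):
                    verts += 1
                elif line.startswith("f "):
                    faces += 1
        return verts, faces

    for name in ("scene_mesh_textured.obj", "scene_mesh_decimated_textured.obj"):
        print(name, obj_counts(name))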

I assume you are downloading through DroneDeploy? What plan are you on?

Yes, I am downloading through DroneDeploy. I am on the Explorer plan.

The decimated version sometimes has fewer faces and, for the Crosswinds example I got from Michael, is generated by the free MeshLab program (available here: http://www.meshlab.net/#download). Both versions have the same number of meshes (9 in the Crosswinds example) and reference the same texture files. Below is a closeup view of an elevation map of the decimated Crosswinds model, which has 4M faces:
[image: Decimated]
and here is the non-decimated model with 5M faces:
[image: NotDecimated]
If you look closely, you will see a little more detail in the non-decimated model.

These are large models of 300MB or more. For smaller models, the decimated and non-decimated versions are typically not very different.

So if you always want the most detail and are not worried about the model size, then use the non-decimated model.

Regards,
Terry.

Thank you for the response and illustration, Terry. I am using MeshLab to decimate the models from DroneDeploy so I can export them to Collada (.dae) files and import them into Google Earth, so I definitely have to reduce their size for GE to load them. The 2 models I get from DroneDeploy have exactly the same number of vertices and faces, so I was curious as to why DD would choose to include a second Wavefront file titled “decimated” that is identical to the other one.
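My decimation step is roughly the sketch below (pymeshlab is MeshLab’s Python bindings; the filter name is from recent pymeshlab releases, and the target face count is just an illustrative value, not anything DD uses):

    # Sketch of a MeshLab quadric-edge-collapse decimation via pymeshlab;
    # the target face count is illustrative, not DroneDeploy's setting.
    import pymeshlab

    ms = pymeshlab.MeshSet()
    ms.load_new_mesh("scene_mesh_textured.obj")
    ms.meshing_decimation_quadric_edge_collapse(
        targetfacenum=500_000,   # pick whatever Google Earth will load
        preservetopology=True,   # keep the mesh topology intact
    )
    ms.save_current_mesh("scene_mesh_for_ge.obj")
    print(ms.current_mesh().face_number(), "faces after decimation")

(MeshLab also has a texture-preserving variant of this filter, which matters if you want the draped texture to survive the export to Collada.)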

It sounds like you are saying that the OBJ titled “decimated” only differs from the other OBJ if the model itself is “large,” and once a model crosses whatever that threshold is, DD kicks in and decimates it for you?

I’m also curious to know the quantitative and qualitative boundaries/methods DD uses when (automatically) decimating the file, as in: what % reduction is employed? What method? Does it affect the topology? Etc.

Thanks!

It would have to be a pretty large project with a lot of detail before there would be a large difference in file sizes. As they come off DroneDeploy they are already decimated, as are the point clouds. I have been discussing this because, from my observations, we are on the Enterprise plan and are only getting about 50% of the points at best. Some models I have tested had as many as 5 times the number of points when processed in Pix4D. Their current decimation is similar to, but not exactly, a grid factor. Looking at the point clouds you can see this consistent spread.
A heads-up if you didn’t see the announcement: you need to download your data before the Explorer plan goes fly-only.

Thanks MichaelL. I am contemplating a subscription while I look through these abnormalities. I have also compared to Pix4D, but the point cloud difference could be related to the resolution of the photos that the process actually draws from, including densification. (I’ve been using a program called Regard3D to test out the different processes in creating models. There is a description of how those are used here.)

You wrote:
“As they come off DroneDeploy they are already decimated, as are the point clouds.”

Are you saying that all the meshes and point clouds are already decimated when they come from DroneDeploy, and that’s why the “decimated” and “non-decimated” files are the same?

The “decimated” file that comes from DroneDeploy has a header that says “# OBJ File Generated by MeshLab” and has different headers and columns than the non-decimated one (but in my case, the result is exactly the same).
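One way to compare those headers is to print the leading comment lines of each file; a minimal sketch:

    # Print the leading "#" comment lines (the header) of an OBJ file.
    def obj_header(path):
        lines = []
        with open(path) as fh:
            for line in fh:
                if not line.startswith("#"):
                    break
                lines.append(line.rstrip())
        return lines

    for name in ("scene_mesh_decimated_textured.obj", "scene_mesh_textured.obj"):
        print(name, *obj_header(name), sep="\n  ")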

See examples below:
[screenshot: “Decimated” OBJ header (Inkeddecimated_LI)]

[screenshot: simple “Mesh_Texture”, non-decimated OBJ header (non_decimated)]

I never use OBJ files so some of your analysis is beyond me, but it all comes from the point cloud, and that is what I know is decimated. An example was a 50-acre project where Pix4D pulled 40M points with 4 points of reference and DroneDeploy had 8M. I don’t know what scale of the photos or how many points of reference DroneDeploy is using, but my tests vs Pix4D were with 1/2-scale. I always assumed they were using full-scale, but I am not so sure any more.

Thanks MichaelL,
I am not an expert by any means on 3D modeling software. I don’t understand what you mean by “it all comes from the point cloud and that is what I know is decimated.” Is it possible to decimate a point cloud?

It sounds like the triangulation method is different between DroneDeploy and Pix4D. Or it could be a keypoint matching ratio or sensitivity difference (or TMBR). Without a definitive workflow it’s hard to know. I am curious as to why the header for the “decimated” OBJ file contains an attribution to a third-party software…

The point cloud is the primary product of the photogrammetry. You can stitch images together without photogrammetry, but then it’s not a true orthomosaic as we have it. The raw point cloud is the truest and most accurate form of any of the data we get from any processing solution, which is why it is what I do all my work from. When you populate a point cloud into a piece of software, one of the first settings you will come across is point thinning, or decimation. This is because processing a point cloud is a purely mathematical task that is very intensive for computers, so most users need to decimate in order to not run out of memory immediately. An algorithm determines which points are the most important, and then the software only loads points at whatever factor you put in. If you enter 10, it only brings in every 10th point.
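In code, the naive “every Nth point” version of that looks like the sketch below (an ASCII .xyz cloud is assumed just for the example; real packages rank points by importance rather than using a blind stride):

    # Naive point-cloud thinning: keep every Nth point of an ASCII .xyz file.
    # Real software weights points by importance first; this is only the
    # simple stride-based factor described above.
    def thin_cloud(src, dst, factor=10):
        kept = 0
        with open(src) as fin, open(dst, "w") as fout:
            for i, line in enumerate(fin):
                if i % factor == 0:
                    fout.write(line)
                    kept += 1
        return kept

    # e.g. thin_cloud("cloud.xyz", "cloud_thinned.xyz", factor=10)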

The triangulation of these points is the basis for your OBJ, and the texture is merely draped over that triangulated surface. In CAD modeling we have spoken in terms of trifaces or a tri-mesh since the ’80s, and we now see that terminology in all modeling.

The keypoint matching ratio, or simply the number of matches required to create a keypoint, is the most important factor in how many points your cloud ends up with.
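To illustrate the idea (this is generic ORB matching with OpenCV, not DroneDeploy’s actual pipeline, and the photo names are made up):

    # Minimal keypoint-matching illustration between two photos with OpenCV.
    import cv2

    img1 = cv2.imread("photo_001.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("photo_002.jpg", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=5000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    # The more matches that survive the quality threshold, the more points
    # the reconstruction can ultimately keep in the cloud.
    good = [m for m in matches if m.distance < 40]
    print(len(kp1), len(kp2), "keypoints;", len(good), "good matches")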

As for other software labels, no current software vendor is the true author. It has all been developed over years and years, so if you dig into their code you will see traces of the others. Even the high and mighty Pix4D was the result of work at another lab… and then you have:

[image]

If anyone is interested…

http://ibis.geog.ubc.ca/courses/geob373/lectures/Handouts/History_of_Photogrammetry.pdf