To clarify the decimation of the mesh (the .obj file): this is a process done after the rest of the photogrammetry pipeline has completed, so it has no impact whatsoever on the point cloud or elevation model. Those outputs are not downsampled at all, and no changes have been made to them in recent weeks. As you mentioned @MichaelL, our understanding thus far was that users working with massive datasets, or performing detailed measurements and analysis, were doing so on the elevation model or point cloud rather than the 3D mesh, and that the 3D model was used mainly for a qualitative view of the scene. As such, we optimised the file size of the model and textures for ease of display and interaction.
In terms of decimation of the 3D model and texture: we have always decimated the model and reduced the texture size so the 3D model loads efficiently in the browser, again on the assumption that in-browser viewing is the primary use case for it. The texture downsampling has been the same for several years, and the texture files packaged in model.zip have always been downsampled, so there has been no change to them in the last week. The decimated mesh has always been the one we display in the browser, so there has been no change to the model quality you see in our UI.
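For anyone curious what decimation means in practice, here is a minimal illustrative sketch of one simple approach (vertex clustering): snap vertices to a coarse grid, merge the ones that land in the same cell, and drop triangles that collapse. This is purely for intuition; it is not the algorithm our pipeline actually uses, and the function name and parameters are made up for the example.

```python
def decimate(vertices, faces, cell=1.0):
    """Toy vertex-clustering decimation (illustrative only, not our pipeline's method).

    vertices: list of (x, y, z) tuples
    faces:    list of (i, j, k) vertex-index triples
    cell:     grid spacing; larger values merge more vertices
    """
    key_to_new = {}   # grid cell -> index into new_vertices
    new_vertices = []
    remap = []        # old vertex index -> new vertex index
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in key_to_new:
            key_to_new[key] = len(new_vertices)
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(key_to_new[key])

    new_faces = []
    for a, b, c in faces:
        a, b, c = remap[a], remap[b], remap[c]
        # Keep only triangles that did not collapse to a line or point.
        if a != b and b != c and a != c:
            new_faces.append((a, b, c))
    return new_vertices, new_faces
```

With a coarse enough grid, near-duplicate vertices merge and the triangle count drops, which is the trade-off being described: a smaller file that still reads the same qualitatively at viewing distance.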
We have done a fair amount of internal comparison between the original full mesh and the decimated one we display in the UI and provide via export, and have found very little qualitative difference (certainly no introduction of major artifacts). That being said, there could well be exceptions, which we would want to investigate and fix. @Charles_Trindade, could you send me a link to the models you referenced in your post so we can take a look on our end? We are continuously working on improving our 3D model quality (this is the primary goal of our dedicated photogrammetry team), so we really appreciate any specific feedback or examples we can get.