Why is photogrammetry more demanding than autogen?

Oh and that reminds me: with “artificially generated geometry” you can “re-use” existing 3D models (“entire buildings”, or “building blocks” such as roofs etc.). The keyword here is instancing (first random link to the topic: LearnOpenGL - Instancing - this is about OpenGL, but of course Vulkan/Metal/Direct3D know the concept of instancing as well).

What does that mean? It means that you upload one copy of a “geometric object” (say, a “house” or a “car”) into the video RAM (VRAM) of your GPU, and when you want to draw it a thousand times you can do so with just one draw call (instead of 1’000 draw calls) - by referring to that single shared copy of your geometry.

This greatly increases your “draw call” throughput. While you can translate, rotate and scale the individual instances (and hence give them some “individual appearance”), the concept of instancing of course only works for otherwise identical geometry: all the instances refer to the exact same “root object” in VRAM.
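To make the draw-call math concrete, here is a tiny conceptual sketch in Python. Note: the `GPU` class and its methods (`upload_mesh`, `draw`, `draw_instanced`) are invented purely for illustration - this is not a real graphics API, just a model of how the draw-call counts compare:

```python
class GPU:
    """Toy model of a GPU, counting draw calls only."""

    def __init__(self):
        self.draw_calls = 0

    def upload_mesh(self, mesh):
        # One copy of the geometry lives in VRAM.
        return {"mesh": mesh}

    def draw(self, handle, transform):
        # A naive renderer issues one draw call per object.
        self.draw_calls += 1

    def draw_instanced(self, handle, transforms):
        # Instanced rendering: one draw call for all instances;
        # per-instance transforms (position/rotation/scale) come
        # from a separate buffer.
        self.draw_calls += 1


gpu = GPU()
house = gpu.upload_mesh("house_geometry")
transforms = [f"matrix_{i}" for i in range(1000)]

# Naive approach: 1'000 houses cost 1'000 draw calls.
for t in transforms:
    gpu.draw(house, t)
print(gpu.draw_calls)  # 1000

# Instanced approach: the same 1'000 houses cost 1 draw call.
gpu = GPU()
gpu.draw_instanced(house, transforms)
print(gpu.draw_calls)  # 1
```

In a real API such as OpenGL this corresponds to calling something like `glDrawElementsInstanced` once with an instance count, instead of looping over `glDrawElements`.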

With photogrammetry, on the other hand, every object is a distinct object with its own associated textures. So you basically have one huge “mesh” (at various levels of detail), and you need to “squeeze” all this 3D data through the GPU. Simply put.
