Why is photogrammetry more demanding than autogen?

Well that’s going to really suck for the Xbox experience.
2 or 3 years takes the Xbox about halfway through its lifecycle. By year 3 they will already have started designing the hardware for the Xbox that replaces it.

Because photogrammetry produces much, much more 3D data (“triangles”). Why? Because photogrammetry is “error-prone”. Let me elaborate.

A simple wall (of a building, for example) that in generated data could be represented with only two triangles (4 vertices) and a texture (“the wall surface”) can end up as many more triangles in photogrammetry. Interpreting the photographed pixels leads to errors, so a wall might not come out entirely flat, due to imperfections in the photos and in the 3D data reconstructed from them.
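To make the triangle arithmetic concrete, here is a toy C++ sketch (my own illustration, nothing from the sim): a hand-modelled wall is one flat quad, while a hypothetical 64x64 reconstruction grid carries per-vertex measurement noise.

```cpp
// Toy comparison: generated wall vs. photogrammetry-reconstructed wall.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

int main()
{
    // Generated geometry: a flat wall is 4 vertices / 2 triangles, full stop.
    const int generatedTriangles = 2;

    // Photogrammetry: suppose the reconstruction samples the wall on a
    // 64x64 grid (a made-up but modest resolution). Each grid cell becomes
    // 2 triangles, and every vertex is perturbed because pixel matching
    // is never exact.
    const int n = 64;
    const int scannedTriangles = 2 * n * n;

    std::mt19937 rng(42);
    std::normal_distribution<float> noise(0.0f, 0.02f); // ~2 cm of error
    float maxBump = 0.0f;
    for (int i = 0; i < (n + 1) * (n + 1); ++i)
        maxBump = std::max(maxBump, std::abs(noise(rng)));

    std::printf("generated wall:     %d triangles\n", generatedTriangles);
    std::printf("reconstructed wall: %d triangles (%dx more)\n",
                scannedTriangles, scannedTriangles / generatedTriangles);
    std::printf("worst bump: %.3f m (the 'flat' wall no longer is)\n", maxBump);
}
```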

And of course the photogrammetry buildings are much more detailed per se: the AI-generated buildings are built from “primitives” (“building blocks”), and those are usually much simpler than reality.

In short: having way more 3D data in photogrammetry is both a wanted property (“high detail”) and an (unwanted) side effect of imprecision. The imprecision can be somewhat counteracted by “smoothing” / “filtering” the 3D mesh data, but again at the cost of precision: square buildings may all of a sudden have “rounded” edges, etc.
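As an aside, that “smoothing” is often plain Laplacian smoothing: pull every vertex toward the average of its neighbours. A minimal C++ sketch of the general technique (my own illustration, with a hypothetical adjacency list as input):

```cpp
// Laplacian smoothing: suppresses reconstruction noise, but also shrinks
// sharp features -- exactly how square buildings end up with rounded edges.
#include <array>
#include <cstddef>
#include <vector>

using Vec3 = std::array<float, 3>;

// adjacency[i] lists the indices of the vertices connected to vertex i.
std::vector<Vec3> laplacianSmooth(const std::vector<Vec3>& verts,
                                  const std::vector<std::vector<int>>& adjacency,
                                  float lambda = 0.5f) // 0 = no change, 1 = full average
{
    std::vector<Vec3> out = verts;
    for (std::size_t i = 0; i < verts.size(); ++i) {
        if (adjacency[i].empty()) continue;
        Vec3 avg{0.0f, 0.0f, 0.0f};
        for (int j : adjacency[i])
            for (int k = 0; k < 3; ++k)
                avg[k] += verts[j][k];
        for (int k = 0; k < 3; ++k) {
            avg[k] /= static_cast<float>(adjacency[i].size());
            // Blend each vertex toward its neighbourhood average.
            out[i][k] = verts[i][k] + lambda * (avg[k] - verts[i][k]);
        }
    }
    return out;
}
```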


No. That’s not the reason. Once it is fully downloaded you still have way, way more triangles (“meshes”) than with artificially generated buildings. Why? See my previous reply.


Oh and that reminds me: with “artificially generated geometry” you can “re-use” existing 3D models (“entire buildings”, or “building blocks” such as roofs etc.). The keyword here is instancing (a first link on the topic: LearnOpenGL - Instancing; this is about OpenGL, but Vulkan/Metal/Direct3D of course also know the concept of instancing).

What does that mean? It means that you create one instance of a “geometric object” (say, a “house” or a “car”) in the video RAM (VRAM) of your GPU, and when you want to draw it a thousand times you can do so with a single draw call (instead of 1,000 draw calls), simply by referring to that “instance” of your geometry.

And like this you can greatly increase draw-call throughput. While you can rotate and scale the individual instances (and hence give them some “individual appearance”), instancing of course only works for otherwise identical geometry objects, because all the instances refer to the exact same base mesh. A minimal sketch of the technique follows below.
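Here is what that looks like in OpenGL, as a hedged sketch: it assumes a current GL 3.3+ context (GLAD for loading, GLM for math), and houseVAO / indexCount are hypothetical handles for an already-uploaded base mesh. It illustrates the general technique, not Asobo’s actual code.

```cpp
// Draw 1,000 copies of one house mesh with a single draw call.
#include <glad/glad.h>
#include <glm/glm.hpp>
#include <vector>

void drawHouses(GLuint houseVAO, GLsizei indexCount)
{
    const int kInstances = 1000;

    // One 4x4 model matrix per instance: translation/rotation/scale give
    // each copy an "individual appearance" while the geometry stays shared.
    std::vector<glm::mat4> models(kInstances, glm::mat4(1.0f));
    for (int i = 0; i < kInstances; ++i)
        models[i][3] = glm::vec4(10.0f * (i % 100), 0.0f, 10.0f * (i / 100), 1.0f);

    // Upload the per-instance matrices once.
    GLuint instanceVBO;
    glGenBuffers(1, &instanceVBO);
    glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
    glBufferData(GL_ARRAY_BUFFER, models.size() * sizeof(glm::mat4),
                 models.data(), GL_STATIC_DRAW);

    // A mat4 occupies four vec4 attribute slots (locations 3..6 here, which
    // must match the vertex shader). The divisor of 1 makes the attribute
    // advance once per *instance* instead of once per vertex.
    glBindVertexArray(houseVAO);
    for (GLuint col = 0; col < 4; ++col) {
        glEnableVertexAttribArray(3 + col);
        glVertexAttribPointer(3 + col, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
                              (void*)(col * sizeof(glm::vec4)));
        glVertexAttribDivisor(3 + col, 1);
    }

    // One call, 1,000 houses -- instead of 1,000 separate draw calls.
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                            nullptr, kInstances);
}
```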

With photogrammetry on the other hand every object is a distinct object, with its own associated textures. So basically you have one huge “mesh” (at various levels of detail), and you need to “squeeze” all of this 3D data through the GPU. Simply put.


Autogen is not picking from an existing library. It takes into account the shape of the building and generates it according to the building’s footprint.

Or how do they do footprints like this:

It gets it wrong in places where a tight-knit collection of smaller buildings is mistaken for a single larger one.

Instancing only works if the vertices of the objects being rendered are the same, i.e. the exact same object. The houses that autogen draws vary a lot in their appearance. Are you sure this is instancing that they are doing? I would rather guess they have tons of draw calls for the autogen cities, and that this is the reason why DX12 will be beneficial.

Trees and grass are definitely instanced, but houses/buildings might not be, from what I understand.

Yes, but it still generates these assets from a limited library of textures and models.

i.e. they may have a generic “British suburban house” type that has a few relevant textures, such as bricks, roof tiles and windows.
These are then reused across all of these buildings, simply adjusted in size and shape.


Well, I was more referring to having full max settings, high traffic, mods you like installed, etc. The sim normally runs mostly OK with medium/high settings (except for the issues when flying into some major population centers like KATL, which I am sure is bugged). Xbox users should have a pretty decent, solid experience; it just won’t be ultra-maxed like we will be able to do with a powerful PC. Xbox ain’t no chump though. They did pretty well with this generation of consoles. Curious if they come out with pro versions in a couple of years (2-3). Exciting time to be a gamer honestly… if you can find the hardware.

That’s exactly what I meant ;)

No, not at all sure - that’s why I was also talking about building blocks (or “elements”), as opposed to “entire buildings”. E.g. certain types of roofs could be instanced, or windows, or… but even that is just an educated guess at best.

I was just saying that generated geometry could profit from instancing (as opposed to “photogrammetrically generated mesh data”). But I am pretty sure that Asobo is using instancing somewhere, e.g. for cars (or even “generic aircraft”, which come to mind). And of course for the “grass” and “trees” that you’ve mentioned.

My answer was hence just an attempt to answer the OP’s question of why generated (“artificial”) mesh data has certain performance advantages (“you control the data”) but also disadvantages (“not as detailed / good-looking as photogrammetry”). In the end it is also simply a question of “who (how many artists) is going to model each and every individual house” vs. “let the computer / algorithm determine the 3D objects” (= photogrammetry).

But we can all look forward to what else might come to light with further improvements to “AI-generated environments” (a larger library of “building blocks”, more refined “building rules” (“no trees right in front of runways” ;)), etc.

Btw. a friend I know from my studies founded the company Procedural, with their product CityEngine (Procedural is now part of Esri). CityEngine is basically a way to automatically create (artificial or “real-looking”) streets and entire cities, based on such “building blocks” (“primitives” such as doors, roofs, chimneys, walls, and all those architectural “style elements”) and especially on “rules” (e.g. “a three-storey building must have a chimney”, “in this area the buildings are of commercial type”, and so on).

CityEngine was (is) also used in various Hollywood movies (in preproduction, but also for some of the actual visual effects in the final films). :)
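To give a flavour of such rule-based generation, here is a toy C++ sketch; it is not CityEngine’s actual CGA grammar, just the idea of “footprint + rules in, building parts out”:

```cpp
// Toy rule-based building generator: a lot's attributes drive simple rules
// that expand into the "building blocks" making up the final model.
#include <cstdio>
#include <string>
#include <vector>

struct Lot { double width, depth; std::string zone; };

std::vector<std::string> generateBuilding(const Lot& lot)
{
    std::vector<std::string> parts;

    // Rule: the zoning of the area decides the building type and height.
    const int stories = (lot.zone == "commercial") ? 6 : 2;
    for (int s = 0; s < stories; ++s)
        parts.push_back("story " + std::to_string(s + 1) + ": walls + window row");

    // Rule: residential roofs get a chimney; commercial roofs stay flat.
    parts.push_back(lot.zone == "commercial" ? "flat roof"
                                             : "gabled roof + chimney");
    return parts;
}

int main()
{
    for (const auto& part : generateBuilding({12.0, 8.0, "residential"}))
        std::printf("%s\n", part.c_str());
}
```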

Probably. And cars quite possibly.

UPDATE: For those who are interested in this “rule (procedural) based modelling”:

(“the basics” start at the 0:08:15 mark)


Thanks, this was exactly the kind of detailed response I was looking for. Obviously, I know photogrammetry is much more demanding than autogen; I just couldn’t understand the reason. It was one of those curious questions I wanted answered, kind of like why it is freezing on mountains when they are closer to the sun, or, since heat is a byproduct of electrical energy, how an electric fridge can produce cold. I know the answers to these questions; I am just using them as examples of things we take for granted where the mechanism might not be obvious.

Regarding triangles, does this mean that in the future Nvidia’s mesh shader technology might be used to increase the frame rate exponentially?

For people unfamiliar with mesh shading, this video explains it, and if you have 3DMark they have a great benchmark of it. It basically increases your fps 10-fold in that test, from 40 fps to 400.

Maybe it will help, but RAM, CPU and bandwidth come into play as well. It all starts with the source data, which needs to be optimized first.

Some landmark (handcrafted) buildings in between PG buildings:


The PG buildings are all unorganized triangles, 10x more work to render than the landmark buildings. Plus you have random artifacts left over in PG data, like that ‘bird’ flying in the sky.

It makes more sense to do all that work server-side than to have the client trying to make sense of the data. But more efficient rendering is never a bad idea.


There are a couple of possible reasons for this; what I have found, I believe, comes down to one of two things:

Either (a): loading any cloud-based scenery requires CPU work that is tied to the main thread and will cause a ‘stutter’ if the scenery cannot be downloaded in a timely manner. This is because the main thread is ‘paused’ until it receives the requested data and has loaded the scenery. Obviously, photogrammetry is ‘heavier’ to stream than the standard scenery + autogen, so this effect is more noticeable with photogrammetry. There is nothing you personally can do, but below is a thought the developers may want to consider.

Or (b): loading scenery requires CPU time while your CPU is already maxed out across several threads. This can be remedied by buying a better CPU. I don’t believe the stutters while photogrammetry loads in are caused by a GPU restriction (you can confirm this by looking at the developer FPS overlay).

I believe that if the problem is A, the developers need to introduce some sort of prioritization whereby scenery loading is a lower-priority task that does not hold up the main thread if it cannot load quickly enough. A good example: if you play at a simulation rate of more than, say, 4x, you will notice a lot of stuttering caused by scenery trying to load in. If the simulator prioritised rendering over loading scenery, you would end up in a position where it remains smooth but the scenery does not load in as quickly (i.e. the priority of loading scenery is reduced). What this MIGHT cause is that you see lower-quality photogrammetry because your CPU cannot load the higher-quality assets in a timely manner (essentially causing option B above). A rough sketch of the idea follows below.
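To make that concrete, here is a rough C++ sketch; every name in it is hypothetical (nothing from the actual sim). The render loop polls a background download instead of waiting on it, and draws a cheap fallback until the tile arrives:

```cpp
// Non-blocking scenery loading: request a tile, keep rendering a low-detail
// placeholder, and swap in the full tile once the background load finishes.
#include <chrono>
#include <cstdio>
#include <future>
#include <optional>
#include <thread>

struct Tile { int id; }; // stand-in for real mesh + texture data

Tile downloadAndDecode(int tileId)
{
    // Pretend this is a slow network fetch plus decode.
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    return Tile{tileId};
}

void drawTile(const Tile& t)           { std::printf("full tile %d\n", t.id); }
void drawLowDetailFallback(int tileId) { std::printf("placeholder %d\n", tileId); }

void renderFrame(int tileId)
{
    // One tile only, for brevity; a real loader would manage many requests.
    static std::future<Tile> pending;
    static std::optional<Tile> loaded;

    if (!loaded && !pending.valid())
        pending = std::async(std::launch::async, downloadAndDecode, tileId);

    // Poll with a zero timeout: this check never stalls the frame.
    if (!loaded && pending.valid() &&
        pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
        loaded = pending.get();

    if (loaded)
        drawTile(*loaded);              // full quality once it has arrived
    else
        drawLowDetailFallback(tileId);  // keep the frame rate smooth meanwhile
}

int main()
{
    for (int frame = 0; frame < 10; ++frame)
        renderFrame(42);
}
```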

I find this akin to how Android and iOS handled web page rendering, for anyone who has noticed this themselves. If you go back a few years, when scrolling a web page on Android it would always try to render the entire page, so as you scrolled you could see all of the content; the problem is the scrolling animation would be noticeably stuttery (even on high-end Android phones). On iOS you would get the opposite issue: the scrolling animation was always buttery smooth, but if you scrolled too quickly you got the ‘checkerboard’ pattern for a moment while it tried to load in the graphics of the web page. The scroll animation remained smooth all the way through, though. Currently scenery loading uses the ‘Android’ approach, hence the notable stutters when scenery is loaded in.

You can prove this to yourself by looking at your FPS while the scenery loads in (i.e. downloads) and comparing it to a few seconds later, once it has already loaded: CPU usage drops and the FPS stabilises.


Although greatly improved in FS2020, autogen technology has existed since FS2002, and computers of the time could handle it. I ran a few mods on FS2004 that basically placed autogen objects roughly in line with real-world structures, and that’s all we’re really doing with it now.

The thing about Google photogrammetry is that it, too, runs like a slideshow when imported into MSFS. And this is with all of the data stored locally. For what it’s worth, it is always taxing my MainThread, not my GPU (3090).


Did anyone mention shadowing? Google Earth doesn’t bother with that. Huge hit.


That benchmark is specifically designed to stress only that part of the rendering pipeline. Real-world applications won’t see anything like that; I’ll eat my hat if typical percentage increases are more than single figures.


Still nice to dream!
