Is Asobo working on a way to remove photogrammetry overhangs for bridges, etc. without having to resort to custom models?

I’m sure you are aware that any photogrammetry object that is not solid all the way down to its base gets “filled in” in the sim. I’m giving three examples (four pictures) below, plus the relevant Wishlist topic for reference, but there are many, many more examples of this in the sim.

Right now, the only way I have seen this corrected is to have custom, hand-built bridges, buildings, and structures. While that’s great for those objects, Asobo will never get to all of them. Is Asobo tackling a programmatic way of removing those overhangs?
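For intuition, here is a minimal sketch of one way this can happen: if a reconstruction stage stores only one surface height per map cell (a 2.5D heightfield), the open air under a bridge deck simply cannot be represented, so everything below the deck becomes solid. This illustrates 2.5D surface models in general, not Asobo’s actual pipeline; `rasterize_heightfield` and the sample points are made up for the example.

```python
# Sketch: why a 2.5D heightfield "fills in" under overhangs.
# A heightfield keeps one elevation per x cell, so a bridge deck
# high above the ground forces the whole column up to deck height.
# ASSUMPTION: illustrative only, not Asobo's actual data model.

def rasterize_heightfield(points, width):
    """Keep the highest z seen at each x cell (one column per cell)."""
    heights = [0.0] * width
    for x, z in points:
        heights[x] = max(heights[x], z)
    return heights

# A "bridge": ground at z=0 everywhere, a thin deck at z=30 spanning x=2..5,
# with open air underneath between the piers.
ground = [(x, 0.0) for x in range(8)]
deck = [(x, 30.0) for x in range(2, 6)]

print(rasterize_heightfield(ground + deck, 8))
# prints [0.0, 0.0, 30.0, 30.0, 30.0, 30.0, 0.0, 0.0]
# The span x=2..5 reads as a solid 30 m wall: the air under the deck
# has nowhere to go in this representation, so it gets "filled in".
```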

George Washington Bridge, New Jersey/New York:


Another angle of the George Washington Bridge:


Port of Elizabeth, NJ (near KEWR):


Roller coaster at Universal Studios, FL:


The Wishlist topic for reference:

Note that the new Europe photogrammetry (London, Paris, Brussels, Amsterdam) seems different in this regard; I’ve seen cranes that appear as spindly structures rather than big boxes, at least.

Whether they can or will do anything to clean up the old data in existing areas before replacing it with more up-to-date data, who knows…


Interesting! Jorg did say in a previous Dev Q&A that photogrammetry is going to differ from location to location: Different times of day/year when things were taken and different equipment. Do you have any screenshots of these structures? Admittedly, I haven’t flown around World Update 4 as much as I would have liked to.


I’ll take some screenshots shortly! :slight_smile: ISTR seeing shipping cranes in Amsterdam specifically, will check.

Found them; circa 52.41N, 4.88E


Oh wow, that’s vastly different! Goes to show you how different photogrammetry can be!


I think this question also encompasses the AI and autogen objects beyond landmark bridges. Things like cranes, docks, and boats in marinas, which photogrammetry does a poor job of, should be easily recognisable to the AI through machine learning, because they are common, exist in clusters, and have learnable surroundings such as water.
Blackshark or Asobo could then make generic models for port cranes, small boats for marinas, bridges for rivers, etc., place them as autogen objects, and, where an area is photogrammetry, edit them in.

This would really bring to life, in one pass, the many locations around the globe that are by the sea or on the water, and they are plentiful.
Docks with proper cranes and cargo containers instead of flat or mangled photogrammetry; marinas full of small boats rather than sunken 2D photos; bridges over rivers that you can fly under, instead of a flat photo with no bridge, or solid photogrammetry that has you crash when you try to fly beneath.

Maybe in 10 years the photogrammetry will be good enough, and systems powerful enough to run it, but that is next-gen flightsim stuff.


My guess is that different equipment was used to capture the data in different parts of the world. That or they took more data points to create a more accurate model in some cases.

Indeed, I agree! That is why I included photogrammetry docks and a rollercoaster in my examples as part of the first post.


Fortunately, all of the processing for this takes place on their machines and not ours. I believe – but don’t quote me on this – that they use Azure’s computing power to process all of this data. And every time they update the machine-learning algorithms, they probably have to reprocess some part of the data. What gets streamed to us is the end product.


Yes, in a previous Dev Q&A, Jorg talked about that. He said that it’s different cameras, different lighting, different times of year. I tried to find the Q&A, but couldn’t. If I find it, I’ll link it here.


Found the video. Sorry, it’s not as exciting as I remember :slightly_smiling_face: (59:00):

What he said:

It’s also not consistent. Not every piece of photogrammetry is the same. Resolution, done at different times, different cameras, different lighting conditions.

But he also mentioned later in the video that the photogrammetry vertex count in London was much higher, and that they released it at that high count because fixing it would have meant delaying the release again, and they already felt bad about the first delay. It caused performance issues, so they later ran the data through a script that removed some of the vertices for better performance.
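For anyone curious what that kind of script might look like, here is a minimal sketch of vertex-cluster decimation: snap vertices to a coarse grid and merge the ones that land in the same cell, trading detail for a lower vertex count. Asobo’s actual script is unknown; `decimate`, the cell size, and the sample mesh are all assumptions for illustration.

```python
# Sketch of vertex-cluster decimation: merge vertices that fall in the
# same grid cell, keeping one representative per cell. Coarser cells
# mean fewer vertices and better performance, at the cost of detail.
# ASSUMPTION: illustrative only; not Asobo's actual pipeline.

def decimate(vertices, cell=1.0):
    """Return one representative vertex per occupied grid cell."""
    clusters = {}
    for x, y, z in vertices:
        key = (int(x // cell), int(y // cell), int(z // cell))
        clusters.setdefault(key, (x, y, z))  # keep first vertex per cell
    return list(clusters.values())

dense = [(i * 0.1, 0.0, 0.0) for i in range(100)]  # 100 vertices along a line
coarse = decimate(dense, cell=1.0)
print(len(dense), "->", len(coarse))
# prints: 100 -> 10
```

Real tools do the same thing in 3D on meshes rather than point lists (and then rebuild the triangles), but the core idea – collapse nearby vertices into one – is the same.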


Wow, that’s a scene from the movie Inception :slight_smile:

Hahahaha!

