After WU8 I was checking the photogrammetry in Barcelona to see if the helipads can now be used … or if the buildings are still “not solid”. First: the PG buildings are still “ghosts” and you cannot land on them …
… however … then I saw this …
… when checking the ship, my H145 triggered its emergency floats. So the surface is tagged as water.
And indeed … when looking at the upper deck of the ship I could see the “ocean waves” right inside that steel hull.
Remember that helicopters and most landing pads are not officially supported in MSFS.
The SDK category on the forum is for discussion of the Software Development Kit. I have moved your topic into Community Help Center.
The AI photogrammetry isn’t meant to be “landed” on. In most cases, this requires hand detailing.
From a “legal” perspective I might agree …
… but with the addition of the Volocopter there is at least the “official” impression that landing on helipads / buildings / horizontal surfaces should be supported.
All photogrammetry helipads that I tried in Tokyo worked as expected … and they already did so many, many months ago.
I think there is a “bug” in the sim's photogrammetry production pipeline, as the results are not consistent across the globe. Besides that, I think it should be “trivial” to declare all surfaces (even vertical ones) as “solid” (for autogen buildings as well as PG shapes).
I would be surprised if this were a performance (optimization) issue, as hit detection should be a standard feature of the sim anyway.
… and … well … the main topic of this post is that
“ships are made of water” … which looks like a bug
When I cannot land on AI photogrammetry, why can’t I fly under it?
I have no idea. I didn’t create it.
But isn’t that a bit illogical?
From above I fall through, from the front I get stuck.
Not that everything in FS-2020 is logical …
I am not trying to imply that fixing these bugs will be trivial.
But that “landscape” import-export pipeline seems to have a number of strange “features”. At least the validation or postprocessing code is missing some obvious steps/checks.
So if Asobo figures out what is going on, then that fix would most likely apply automatically to all regions where they run a 3D scenery refresh.
In some way I would even suggest/assume that there is a relationship to bugs like …
… as those 3D spikes are also an artifact of 3D data mismatch which is not automatically detected by the processing pipeline.
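As an illustration of the kind of automatic check such a pipeline could run (purely a sketch, not Asobo's actual code; `find_spikes`, the threshold value, and the data layout are all hypothetical), a vertex whose height deviates wildly from its neighbours is a likely “spike”:

```python
# Hypothetical spike detector: flag vertices whose height differs from the
# median of their mesh neighbours by more than a threshold.
from statistics import median

SPIKE_THRESHOLD_M = 50.0  # assumed tolerance, in metres

def find_spikes(heights, neighbours):
    """heights: vertex index -> height (m); neighbours: index -> adjacent indices."""
    spikes = []
    for v, h in heights.items():
        nb = [heights[n] for n in neighbours[v]]
        if nb and abs(h - median(nb)) > SPIKE_THRESHOLD_M:
            spikes.append(v)
    return spikes

# Vertex 2 shoots ~940 m above its neighbours -- a classic 3D data mismatch.
heights = {0: 10.0, 1: 12.0, 2: 950.0, 3: 11.0}
neighbours = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}
print(find_spikes(heights, neighbours))  # → [2]
```

A real pipeline would of course work on millions of vertices and pick the threshold from local terrain statistics, but the principle — cross-checking each sample against its neighbourhood — is the “obvious step” that seems to be missing.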
PG relies upon high resolution scans of the terrain but obliques are tough to get from a volume perspective - meaning if you look at PG bridges, they look great from above and sharp slant angles when airborne. But when viewed shallow, you can clearly see the terrain is mixed in with what should be “empty space” below the bridge spans. That’s because if you had to pay for the number and type of digital photos required to PG map an area, aerials are cheaper by the dozen, even at high quality. Putting people and equipment on the ground to get shallow angle and ground level obliques? Not so much. Some bridges can be flown under, but those are the ones that are hand-built POIs.
3D Data Mismatch - it’s the price you pay for such a large data set. Why do we see erroneous building heights? Same reason - OSM is a huge dataset and curation / data integrity checking is a constant battle in data science. Heck, even government mapping is problematic, and that’s a national security mission for some nation-states.
It’s a work in progress.
I would agree that automatic PG bridge detection is really tricky.
But I read the comments of @EnsiFerrum666 as saying that PG data in some cases has “collision detection” (e.g. you crash when flying under a bridge) and in other cases the PG triangles are not solid (like the helipads).
IMHO this is what @EnsiFerrum666 was referring to as “illogical” … I would also expect all “land” or “building” 3D meshes to be solid by default.
… but then … the “ship is water” case is still way more tricky … but perhaps a “sanity check” which e.g. checks the height above sea level could help to tag the triangles correctly. It feels like the “y” axis is not taken into account here, and only the x-z position is matched against some ground-layer classification.
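A minimal sketch of such a sanity check (all names, the `Triangle` layout, and the 1 m tolerance are my assumptions, not anything from the actual sim): a triangle tagged as “water” whose vertices all sit well above sea level is probably mis-classified and should be re-tagged solid.

```python
# Hypothetical sanity check: re-tag "water" triangles that float above sea level.
from dataclasses import dataclass

WATER_MAX_ALTITUDE_M = 1.0  # assumed tolerance above sea level for real water

@dataclass
class Triangle:
    surface_tag: str            # e.g. "water", "solid", "asphalt"
    vertex_heights_m: tuple     # height of each vertex above sea level

def fix_water_tags(triangles):
    """Return how many mis-tagged 'water' triangles were re-tagged 'solid'."""
    fixed = 0
    for tri in triangles:
        if tri.surface_tag == "water" and min(tri.vertex_heights_m) > WATER_MAX_ALTITUDE_M:
            tri.surface_tag = "solid"   # a ship deck, not the ocean
            fixed += 1
    return fixed

# The upper deck of a ship sits ~20 m above the sea, so it gets re-tagged,
# while a real ocean triangle at sea level is left alone:
deck = Triangle("water", (21.0, 21.2, 20.8))
sea = Triangle("water", (0.1, 0.0, 0.2))
print(fix_water_tags([deck, sea]))  # → 1
print(deck.surface_tag)             # → solid
```

This is exactly the kind of check that only uses the “y” axis and would have caught the “waves inside the steel hull” case.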
My guess is that because PG data is streaming - meaning it’s constantly being fed into the sim pipeline and rendered dynamically by the GPU, it’s “visual data” but not “surface data.” Now, could that be overcome by rolling cache? Maybe. But we all know rolling cache is one of the root causes of scenery data integrity and glitching, so… It’s all very technically imperfect, and we may have to make choices about visualization versus object interaction.
I clearly have no idea how the sim works internally … but the fact that it is delivered via streaming instead of being present locally on the computer's drive is not a likely explanation IMHO.
Technically there should be little difference between something being streamed or cached locally when it comes to rendering features … it should mainly affect the speed and reliability of that data being available.
I am not sure what “visual” vs “surface” data means from your perspective. It sounds like “texture image” vs “3D mesh” to me. In that case both aspects clearly get streamed, as can be verified by disabling streaming.
Rolling cache … is a different story. That is not about “wrong primary data” but about inconsistent or duplicated data: different (outdated) data for the same 3D location, or data with “old” features which a new version of the system fails to read/process correctly.
IMHO none of this explains the bug.
“Texture data” is purely “visual” and does not play a role in any of this. The bugs are all in the “surface” 3D mesh + meta data.
The (streamed) PG data will include additional bits for surface characteristics (water, sand, asphalt, solid etc.) to feed the water animation displacement shaders or snow/ice effect shaders etc.
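One plausible shape for those per-surface bits (a guess at the concept, not the actual MSFS data format; the flag names are invented for illustration) is a compact flag set attached to each mesh chunk:

```python
# Hypothetical per-surface attribute flags packed alongside the mesh data,
# consumed by the water displacement and snow/ice effect shaders.
from enum import Flag, auto

class SurfaceFlags(Flag):
    SOLID = auto()    # collidable: aircraft can land on / crash into it
    WATER = auto()    # drives water animation shaders, triggers floats
    SAND = auto()
    ASPHALT = auto()
    SNOW = auto()     # enables snow/ice effect shaders

# The "ships are made of water" bug, expressed in these terms: a deck
# tagged WATER (triggering the H145's emergency floats) instead of
# SOLID | ASPHALT.
buggy_deck = SurfaceFlags.WATER
expected_deck = SurfaceFlags.SOLID | SurfaceFlags.ASPHALT
print(bool(buggy_deck & SurfaceFlags.WATER))     # → True
print(bool(expected_deck & SurfaceFlags.WATER))  # → False
```

If the tagging really works like this, the bug is “only” a wrong bit in the metadata — which again points at the production pipeline rather than the renderer.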
Tokyo and Munich show that streamed PG data can contain correctly tagged “solid” PG surfaces. Barcelona is different.
That is why it feels to me more like Asobo might not even have its own “3D data validation and cleanup” pipeline, but rather buys the data from some providers and writes it straight to the Azure data set. This seems the most plausible explanation to me.
Maybe a test with rolling cache will provide a clue.