Those port cranes need fixing (Spain update 8)

Hmm…exactly how many cranes do you think there are in the world?

See how I specifically said “the most egregious examples”?

Also it only applies to photogrammetry cities, which is very different from the entire planet.

If you use Puffin’s “We Love VFR” mod, he utilizes the current FAA / EASA aerial obstruction database to determine the locations of construction tower cranes. He places a custom crane object with an exclusion mat under them, which will remove the photo-g cranes. It might fix your Paris issues.
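For context, the core of that kind of pipeline is just filtering an obstacle database for crane entries and emitting their positions. A minimal sketch in Python, assuming a CSV export of an obstacle file; the column names (`TYPE`, `LAT`, `LON`, `AGL`) are illustrative, not the FAA's actual schema:

```python
import csv

def crane_sites(dof_csv_path):
    """Yield (lat, lon, height_agl) for crane entries in an obstacle file.

    Assumes a CSV export with 'TYPE', 'LAT', 'LON', 'AGL' columns --
    illustrative names, not the real FAA/EASA schema.
    """
    with open(dof_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Keep only crane-type obstacles; a scenery tool would then
            # drop a crane model plus an exclusion polygon at each point.
            if row["TYPE"].strip().upper().startswith("CRANE"):
                yield float(row["LAT"]), float(row["LON"]), float(row["AGL"])
```

Each resulting coordinate would get a custom crane object plus an exclusion area to suppress the melted photogrammetry crane underneath it.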

Which would entail checking every single one. Though to be fair, it probably wouldn’t be an impossible task, just a really laborious, tedious one. You would have to check every city that has PG, then scan places like docks or building sites for cranes, and so on. One person could do a city in less than a day, probably a few hours in reality.

Is the mesh included in the PG files of the update?

Because I have deleted all the PG files from WU 8 to get more fluidity when departing from big-city airports.

I only fly airliners at the moment.

I am on Xbox Series X.

Thank you

I do understand what PG is, but Asobo makes a lot of handmade buildings (POIs), and as far as I can see they correct tons and tons of constructions where the PG produces ugly structures.

Btw I gave the addon “We Love VFR” a shot. Impressive stuff. While it didn’t correct my harbour cranes, it did add very nice cranes around the cities (among other things). So basically those guys created a database of 350,000 well-made objects (Regions 1-2) for FREE, and I didn’t have to pay a cent. Thanks.


Mesh and PG are not part of what you download via marketplace. It’s streamed.
The download contains handmade scenery and airports and new autogen objects.

But as far as I know you at least need to have the respective world update installed for the PG of that update to be streamed from the servers.


Do they? I think they select handmade objects not based on what PG creates, but based on what they feel is worth adding more detail to, including higher texture resolution.

They know that there are loads of locations where PG is looking ugly from close up. But time is money.
Given that in a large update the number of handmade scenery objects is usually very small (and usually scattered outside PG areas), I think they put all the resources they have toward creating landmarks/POIs instead of everyday objects in countless locations.

Cranes will surely never be renderable with PG. After seeing really bad PG cranes in the New Jersey docks, I noticed recently that an add-on for NJ bridges also corrected/added cranes in those docks, so it’s definitely add-on territory rather than base sim. I don’t think PG is meant to be pixel-peeped from altitudes below 500-1,000 ft; having said that, some of the cities in the recent WU, including Barcelona, do look great PG-wise.


As my image showed, the same thing happens with bridges. The AI cannot differentiate between the deck of the bridge and the surface of the water/ground it is over. So the bridge has sides that are textured with the surface as seen from an angle. Ugly, but understandable if you know how PG works.

It shouldn’t need reiterating that Google’s photogrammetry deals with cranes and bridges (plus pretty much everything else) an order of magnitude better than Microsoft’s.

What exactly is the point of highlighting this?
Different data and different algorithms create different results.
We all know that PG could be better with more money and resources. But would you be willing to spend a lot more money on the base sim just for this purpose? I wouldn’t.

And as long as Google doesn’t create a flightsim that incorporates their data with adequate performance (remember, more details means more bandwidth and processing power required), the comparison lacks a common base.

The post I was responding to claimed it was a limitation of the technology, which it obviously isn’t.

I wouldn’t argue against the other issues you mention.


There are different technologies at work, I expect, not a single technology. So yes, it can be a limitation based on the technique used.

Find yourself any PG city with power-line pylons on the outskirts. Zoom right in, and you will see grass embedded in the structure of the object, because the cameras captured the ground as they looked through the lattice of the pylon.

I think I read that some techniques use lasers rather than optical cameras, so they could yield better results for example.

You give me way too much credit. I do use FAA and other countries’ AIPs to get better results than just plain OSM data… but not for construction cranes. These are not very common in AIP data.
My construction cranes are placed based on construction areas in OSM, so they’re not that accurate.

I’ve seen this topic and some similar discussions and I’ve checked how well port cranes are represented in OSM data and I don’t have any good news. They are all over the place. Some are marked as points, some are marked as rails, but you can’t tell how many cranes are on that rail from the data. And there are many more issues. Basically, they would need to be verified one by one.
So what’s the problem? I do that for radar domes and radio telescopes already. Yeah, but there are almost 10,000 cranes just in Region 1.
For now, I’ll pass. Working on We Love VFR takes a toll on my real-life job (no worries, I hate it anyway :smiley: ) and I have a quite clear roadmap for the next months. But never say never.
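The messiness described above is easy to see if you pull the raw data. A minimal sketch, assuming a list of elements in the shape the Overpass API returns and the standard OSM `man_made=crane` tag, that tallies how cranes are represented; it shows why a raw count is unreliable (a single "way" may be a crane rail carrying an unknown number of cranes):

```python
def summarize_cranes(elements):
    """Count OSM crane features by geometry type.

    `elements` is a list of dicts in the shape the Overpass API returns:
    each has a 'type' of 'node'/'way'/'relation' and an optional 'tags'
    dict. Cranes tagged man_made=crane may be single points (nodes) or
    ways (e.g. a crane rail), so the totals are not a true crane count.
    """
    counts = {"node": 0, "way": 0, "other": 0}
    for el in elements:
        if el.get("tags", {}).get("man_made") == "crane":
            kind = el["type"] if el["type"] in ("node", "way") else "other"
            counts[kind] += 1
    return counts
```

Anything landing in the "way" bucket would need manual verification to know how many crane objects to actually place along it.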


Here’s an image from Google, showing a ferris wheel on Parker’s Piece, in Cambridge, just a few miles from where I am now.

Hint: the wheel is neither solid nor green in real life. In fact, I don’t think it is even there any more. The green is there because the imagery used could see the grass through the spokes of the wheel. This is the limitation you claim does not exist @BeardyBrun

Where exactly did I claim anything as specific as that?

I’m perfectly aware that Google’s photogrammetry produces ‘solid’ cranes etc., but that’s still considerably better than what we have in MSFS.

I just posted an image that refutes this. It handles it exactly the same way.

Other objects have been touched up by hand, with someone cleaning up the data.

Still doubtful? Have a look at this then.

Two objects crossing a road. The one in the foreground is a latticework bridge with road signage on it; the one behind it is a road bridge. You can see under the foreground object, even though its latticework is rendered solid, but the bridge behind has a solid underside apart from one small hole.

Same objects seen from a reverse angle:

The images were captured optically; as you can see, the surface of the road below the bridge has become part of the bridge.

Culling that is easy by hand, but probably not so much by an automated process. You would also need to ensure that the imagery of the surface remained as well, and not just a gaping hole, once you remove the bridge’s “skirt”, if you will.

By comparison, the London Eye has been retouched very well, and only the spokes in its centre can be seen.

To put it another way, there isn’t only a single way to take a photograph, and the method you choose will greatly affect the result. It’s possible that Google take photos not just from the air, but perhaps closer to the ground as well. They may also be using laser scanning for all I know, and not just optical.

Anyone remember Photosynth?
That project where MS promised to gather all geotagged photos and merge them into 3D objects?
That could actually have closed the gaps in the 3D models in PG, since it would contain photos taken against the blue/grey sky and would make the analysis much better than mostly top-down imagery.

Unfortunately, it’s discontinued.

Anyway, I think we agree that it could be better, and that Google seems to put more money (resulting in quality) into their PG representation.
But with the current behavior of loading PG details, the bandwidth used (London!) and stutters introduced, I highly doubt we’d be happy with more polygons and higher res textures in MSFS anyway.


While I’m sure there are many specific objects which have received special attention, there’s far too much ‘good’ photogrammetry in Google for it to have all been done manually. Your example is simply a situation where the processing has failed for some reason, I’m sure there are lots of those as well.
