Okay, I followed Raptor4400’s instructions for creating a manualcache.ccc file by copying the rollingcache.ccc file. Here’s my report on how it went.
The procedure does work, but there are some limitations:
a) You can only “create” a new ManualCache.ccc file in the primary MSFS Package directory [C:\Users\YOUR_NAME\AppData\Local\Packages\Microsoft.FlightSimulator_8wekyb3d8bbwe\LocalCache\MANUALCACHE.CCC]. If you create it in any other folder, on any other drive or partition, MSFS won’t be able to access it. (There’s a quick sketch of the copy step right after this list.)
b) You can’t increase the size the way you can with the rollingcache.ccc file, at least not within MSFS, so you’d better create it as big as you will ever need. It may be possible to add extra blocks of empty data to the end of the file with a hex editor, but I haven’t tried that yet, and doing it that way might cause MSFS to simply reject the file as “corrupt” or something. (The sketch after this list includes an untested version of that padding idea.)
c) The scenery at the edges of the area you define to cache doesn’t necessarily merge smoothly into the AI (default) scenery. Especially if there is water (the ocean, a bay, or a lake) beyond the edge of your cached area, there will be places where the cached scenery tiles are visible in the water as squares of a different color and elevation. I’m not sure what would happen if you were in a float plane or amphibian and hit one of these edges on a landing or takeoff roll, but if the elevation difference is big enough to see from hundreds of meters away, it might be big enough to crash the aircraft? When landing or taking off from glaciers in a ski-equipped plane, I have sometimes hit abrupt changes in elevation and crashed, so maybe the same thing can happen where the edges of cached scenery tiles sit in water.
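In case it helps anyone, here is a rough Python sketch of what points (a) and (b) boil down to. It only assumes what’s already in this post: the LocalCache path from point (a) (swap your own user name in for YOUR_NAME), and the untested padding idea from point (b). I have NOT verified that MSFS will accept a padded file, so treat pad_manual_cache() as an experiment, not a recommendation.

import shutil
from pathlib import Path

# Same LocalCache folder as in point (a); replace YOUR_NAME with your Windows user name.
LOCAL_CACHE = Path(r"C:\Users\YOUR_NAME\AppData\Local\Packages"
                   r"\Microsoft.FlightSimulator_8wekyb3d8bbwe\LocalCache")
ROLLING = LOCAL_CACHE / "ROLLINGCACHE.CCC"
MANUAL = LOCAL_CACHE / "MANUALCACHE.CCC"

def create_manual_cache():
    # Point (a): the manual cache is just a copy of the rolling cache,
    # made in the same folder. Close MSFS before running this.
    if MANUAL.exists():
        raise FileExistsError(f"{MANUAL} already exists")
    shutil.copy2(ROLLING, MANUAL)
    print(f"Created {MANUAL} ({MANUAL.stat().st_size / 2**30:.1f} GiB)")

def pad_manual_cache(extra_gib, chunk_bytes=64 * 2**20):
    # Point (b), UNTESTED: append blocks of zero bytes to grow the file.
    # MSFS may simply reject the padded file as corrupt.
    remaining = extra_gib * 2**30
    with open(MANUAL, "ab") as f:
        while remaining > 0:
            n = min(chunk_bytes, remaining)
            f.write(b"\x00" * n)
            remaining -= n

if __name__ == "__main__":
    create_manual_cache()
    # pad_manual_cache(8)  # uncomment to experiment with growing the file by 8 GiB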
As for visual quality: I downloaded, at High Quality, all of San Francisco south to just below the KSFO airport. From 1000’ AGL or higher, it looks really nice. But from lower down, 100’~500’ AGL, in a slow plane like the Cub Crafter NX or the Zlin Savage or Shock Ultra, the merging of the photogrammetry with the AI model isn’t so attractive. The colors and patterns of bare ground and vegetation (soil, grass, parkland, trees, etc.) are greatly improved, but in urban areas it’s not so good. In San Francisco, downloading and caching the photogrammetry does get you a bunch of landmark buildings that aren’t in the AI scenery, like Coit Tower, the Transamerica pyramid, the Twin Peaks TV tower, and the Palace of Fine Arts, but there are downsides:
The facades of practically all of the buildings look terrible at distances closer than 300 feet, as if they had been painted with mud. Window frames are crooked or twisted, and the modelling of structures on steep hillsides is bad, really bad: buildings on Telegraph Hill, including the base of Coit Tower, look nothing like buildings and more like something Salvador Dali painted while on a bad acid trip.
Fly close enough and you can plainly see trees sprouting right out of the roofs of buildings. The algorithm obviously isn’t smart enough to realize that tree trunks are rooted in yards, on the sidewalks, and in the road medians, not in the middle of buildings.
Objects that the algorithm can’t interpret, maybe because they were unclear in the aerial imagery, are sometimes depicted as weird geometrical shapes like pyramids that look completely unnatural and out of place.
Along the land edge of my download, in a strip about 3 miles wide between San Carlos and the KSFO airport, running the whole width of the peninsula from SF Bay to the ocean, the photogrammetry is low quality and looks worse, WAY worse, than the AI scenery. I don’t know what to do about this, if there’s even anything that can be done.
My download, covering San Francisco and the northern portion of San Mateo County down to just below KSFO, took 11.5 GB of data. Obviously, caching large areas, like all of Death Valley National Park for example (2.8 million acres), would probably be impractical except at Low quality; if you tried to cache an area that large even at Medium quality, you might need a cache file of several terabytes. I’ll try to quantify just how much data it takes to map, say, 10 square miles of non-urban land at Low, Medium, and High, and post the numbers later.
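In the meantime, here is the arithmetic I plan to plug those numbers into, as a small Python sketch. The GB-per-square-mile rates in it are made-up placeholders, not measurements; the only real constant is 640 acres to the square mile.

ACRES_PER_SQ_MILE = 640

# PLACEHOLDER rates (GB per square mile of non-urban land), purely to show the
# arithmetic. I'll swap in real numbers once I've measured a 10-square-mile
# test area at each quality level.
GB_PER_SQ_MILE = {"Low": 0.01, "Medium": 0.05, "High": 0.25}

def estimate_cache_gb(area_acres, quality):
    # Linear scaling: (acres -> square miles) * GB per square mile at that quality.
    return (area_acres / ACRES_PER_SQ_MILE) * GB_PER_SQ_MILE[quality]

if __name__ == "__main__":
    death_valley_acres = 2_800_000  # ~2.8 million acres = 4,375 square miles
    for quality in ("Low", "Medium", "High"):
        gb = estimate_cache_gb(death_valley_acres, quality)
        print(f"Death Valley NP at {quality}: roughly {gb:,.0f} GB (placeholder rates)")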