I never use it; with a 1 Gb connection it's not required.
I also think there is a chance that using the rolling cache saves no data. During a server outage I expected that MSFS would load the textures from the rolling cache, since the server wasn’t available. But that didn’t happen, and all I got was FSX generic textures until the server was up again.
You never know how much of the rolling cache really gets used.
As I have the space available, I have allocated 1.5 TB. It's certainly too much, yet it doesn't harm you in any way, and if you do the math the traffic adds up, as I fly some locations quite regularly…
Sure, it will maybe take months to fill up, but I wouldn't use the free space anyway. And it's much better to fly big cities pre-cached…
Cheers!
The rolling cache saves only photogrammetry (basically all the good-looking big cities) -
as far as I know, normal satellite imagery isn't covered.
If you want to play offline you have to use the static cache, yet I am not sure if it works with bigger data sets; previously it always slowed down the UI when you used it extensively. Haven't used it for months, so no clue if they fixed that.
I am using an HDD for the rolling cache; with read rates of around 10 MB/s it is absolutely enough. In my view SSD usage makes less sense, as with only a small cache size the SSD will be rewritten pretty often, and SSDs don't like being rewritten that often…
Anyway, great differentiation; I wasn't aware that folks confuse those.
For some strange reason, if I pause the sim (in VR, for about 30 min) and then come back, the micro stutters vanish and it becomes crystal clear… Anyone else seen this?
DDR4 transfer speeds can be up to 20 GB/s, which is why I am very happy to dedicate 4 of my 32 GB to a RAM-disk cache. Not only can it save against slow or fragmented hard drives, but it can also act as a buffer when the internet skips a step. Any popping in of Bing objects, e.g. handcrafted scenery, happens further away and is thus less noticeable. It is possible to save an ISO image on system shutdown to load on restart, but I prefer to just reallocate the cache's size in options; it takes just seconds and always keeps things fresh.
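For a rough sense of how those transfer rates compare, here is a back-of-the-envelope sketch; the bandwidth figures are approximations based on numbers mentioned in this thread plus an assumed SATA SSD and a 100 Mbit/s internet line, not measurements:

```python
# Back-of-the-envelope: how long it takes to move a 4 GB rolling cache at the
# bandwidths mentioned in this thread (all figures are rough assumptions).
CACHE_GB = 4

bandwidths_mb_s = {
    "DDR4 RAM disk (~20 GB/s)":        20_000,
    "SATA SSD (~500 MB/s, assumed)":      500,
    "HDD (~10 MB/s, as above)":            10,
    "100 Mbit/s internet (~12 MB/s)":      12,
}

for name, mb_s in bandwidths_mb_s.items():
    seconds = CACHE_GB * 1024 / mb_s
    print(f"{name:35s} -> {seconds:8.1f} s to read the whole 4 GB cache")
```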
… Oh, and did I mention panning? If that's your problem, this "may" help.
It's clear a lot of people don't have a clue what the caches do. The rolling cache does indeed store satellite imagery. The manual cache stores photogrammetry only. If you disable your internet connection AFTER you have launched the app ONLINE (and therefore gone through the validation process), you CAN fly into an area you've cached and see great aerial imagery. If you quit out of MSFS entirely, remain OFFLINE, re-launch the sim in offline mode and then try to fly into that same area (that's been pre-cached into the rolling cache), then you WON'T see great imagery… you will get the generic landclass textures.
Verify this for yourself. The cache system works but gets broken by Microsoft’s inability to validate and authenticate the app. PLEASE fix this Microsoft! (I blame Microsoft because I’m sure Asobo intended the caches to be fully functional OFFLINE).
Aha! thank you so much for writing that for all of us. I wasn’t entirely sure what affected what myself.
I wouldn't worry about the lifetime of your SSD too much when it's being used as a cache. While it is true that SSDs have a shorter lifespan than HDDs, in practice that doesn't really matter for private use.
Because:
- First of all, the mentioned (small or large) size of the cache does not really matter: SSD controllers apply complex algorithms to "level out" the written memory cells. That is, they try to distribute all writes over all memory cells evenly (over the lifespan of the SSD) by reshuffling data around (if necessary).
- Second, the German computer magazine c't conducted a long-term stress test in 2016, constantly writing random data onto 12 different 250 GB SSDs until they literally died. The strongest "pro model" lasted pretty much one year, or 9.1 petabytes (!) of written data - over 60 times more than what was guaranteed by the manufacturer. But even the "cheap SSDs" exceeded their guaranteed "terabytes written" (TBW) by a factor of 2.5.
The article is in German: SSD-Langzeittest beendet: Exitus bei 9,1 Petabyte | heise online
But the summary is basically: the average computer writes up to around 40 GB per day (very generously counted - unless you are a professional video editor), which takes 5 years to reach the guaranteed TBW - but as the test has shown, even the cheapest SSDs easily exceed that value by a factor of 2.5. In other words: "You are more likely to buy a new computer (with a new/larger SSD) before your SSD dies."
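To make that arithmetic explicit, here is a quick sketch that uses only the figures quoted above (40 GB/day, roughly 5 years to the guaranteed TBW, and the ~2.5x endurance factor from the c't test):

```python
# Reproduce the arithmetic from the summary above; every number is taken
# from the post itself, not from any manufacturer's datasheet.
daily_writes_gb  = 40     # "generously counted" daily writes
years_to_tbw     = 5      # ~5 years to hit the guaranteed TBW at that rate
endurance_factor = 2.5    # cheap SSDs in the c't test exceeded their TBW ~2.5x

implied_tbw_tb = daily_writes_gb * 365 * years_to_tbw / 1000
print(f"Implied guaranteed TBW:   ~{implied_tbw_tb:.0f} TB")                    # ~73 TB
print(f"Years until likely death: ~{years_to_tbw * endurance_factor:.1f}")      # ~12.5 years
```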
Exitus after 9.1 petabytes?
Only a crazy 19th century scientist could bring this drive back to life…
But joking aside: what happens if you have, for example, a 1 TB NVMe drive with 40 gigabytes of free space (let's assume the rest of the terabyte is installed games and therefore more or less read-only, with no reshuffling of data across the drive's NAND cells possible), and the computer is using only those free 40 GB for an insane amount of caching daily?
Those 40 GB will surely end up with damaged flash cells after 2-3 years of such usage.
Will the whole NVMe drive be lost in that case - or only the portion that was being heavily cached and rewritten all the time, with the rest of the drive fully okay?
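For a rough sense of scale, here is a toy estimate of that scenario; the program/erase endurance per cell is an assumption (consumer TLC NAND is often quoted in the low thousands of cycles), not a spec for any particular drive:

```python
# Toy estimate of the "only 40 GB free" scenario above.
pe_cycles       = 1000    # assumed program/erase cycles per cell (TLC ballpark)
free_gb         = 40      # space the cache keeps rewriting
drive_gb        = 1000    # total drive capacity
daily_writes_gb = 40      # assumed daily cache traffic

# If writes really stayed inside the free 40 GB, each cell is erased ~once a day:
years_without_leveling = pe_cycles * free_gb / daily_writes_gb / 365
# With wear levelling the controller spreads the same traffic over all cells:
years_with_leveling    = pe_cycles * drive_gb / daily_writes_gb / 365

print(f"Confined to 40 GB:     ~{years_without_leveling:.1f} years")   # ~2.7 years
print(f"Spread over the drive: ~{years_with_leveling:.0f} years")      # ~68 years
```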
I am no expert when it comes to how SSDs work exactly, but it is my understanding that there is no "read-only" usage for SSDs. Every memory cell is subject to being moved to another cell - precisely in order to write to each cell equally often (on average).
Even if your SSD's capacity is completely full, the SSD controller is still able to reshuffle data around. Why? Because most (all modern?) SSDs have a "hidden capacity" that is not "visible" (usable) to the operating system. Only the SSD controller can use that hidden "swap area" to move (swap) data around.
And yes, those algorithms are insanely complex (and need to be "error-free" and well tested - valuable data is at stake here!), and are probably also a huge price differentiator (cheaper SSDs probably have less sophisticated algorithms).
And yes, there's a thing called "write amplification" (also mentioned in the linked article, I believe): writing 1 MB of actual data may end up writing 2 (or even more) MB of data on the SSD, exactly because data may need to be "shuffled around". And it makes a difference whether many smaller files are written (and updated) or fewer larger files (of the same total size), because SSD memory cells have a fixed size, so writing less data into a cell still occupies the entire cell. If SSD space is running out, those smaller data fragments also have to be reshuffled and "compacted" (?).
So yes: it's complex.
But again: for private usage the lifetime of an SSD should be of no concern. But yes, it is my understanding that "oversizing" an SSD (e.g. buying 1 TB even if you only use 512 GB) helps increase its lifetime (and potentially also its performance, due to less write amplification - but I am probably simplifying here).
Again, write operations are not "local": they are distributed over all memory cells equally (on average). At least that's the goal of those write algorithms.
But yes, I believe that once a single cell fails the entire SSD is "dead" and cannot be read anymore by the operating system (unlike HDDs, where only the affected area cannot be read). Why that is I don't know exactly (probably because the controller simply refuses to continue its work, as the "bad data" might get swapped with other data, and the other data would become invalid, too).
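To make the "level out the writes" idea above a bit more concrete, here is a toy wear-levelling sketch; it is a deliberately simplified model (real controllers are vastly more complex), in which the "controller" always writes to the least-worn free cell even though the host keeps overwriting the same small logical range, like a small rolling cache would:

```python
import random

# Toy wear-levelling model: the host rewrites only 8 logical blocks, but the
# controller remaps each write to the least-worn free physical cell, so erase
# counts stay roughly even across the whole "drive".
CELLS = 64                 # pretend the drive has 64 flash cells
LOGICAL_RANGE = 8          # host only ever rewrites 8 logical blocks
WRITES = 10_000

erase_count = [0] * CELLS
mapping = {}               # logical block -> physical cell

for _ in range(WRITES):
    logical = random.randrange(LOGICAL_RANGE)
    old = mapping.get(logical)
    # candidate cells: everything not currently mapped, plus the cell being freed
    candidates = [c for c in range(CELLS) if c not in mapping.values() or c == old]
    target = min(candidates, key=lambda c: erase_count[c])   # least-worn cell
    erase_count[target] += 1
    mapping[logical] = target

print("min/max erases per cell:", min(erase_count), max(erase_count))
# With levelling the spread stays small; without it, those 8 logical blocks
# would hammer the same 8 cells with all 10,000 writes.
```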
Eventually the drive would start to lose capacity as the pages which are no longer working properly are mapped out. Your 40 GB pool would slowly shrink, in other words.
I believe a process called wear levelling would be used to ensure the writes are spread across the disk, but in your scenario you only have that 40 GB chunk "usable", as it were. It can only do so much.
No such problems with a RAM cache. For anyone with 32 GB plus, I actually think MSFS should build a small, clean RAM-drive cache at program start as a mandatory feature, or at least flush the existing rolling cache, to eliminate problems.
I was doing exactly that for a few months after SU5, but I found that I was getting consistent CTDs after flying over 3500nm (real time or time accelerated) which stopped completely when I turned off the rolling cache. My suspicion is that the 4GB cache was filling up with the amount of scenery data over that distance and was CTDing when trying to wrap around back to the start. I reported this issue during the SU7 public beta phase.
As I mentioned earlier in this thread, with SU7 I noticed no performance loss, smoothness reduction or longer load times with rolling cache turned off, so there it has stayed. I only tried a rolling cache on RAMDisk because of the scenery pop in issue that SU5 introduced, but that was rectified in a subsequent hotfix, so I never really had cause to keep the RAMDisk rolling cache anymore anyway.
A single-file rolling cache (FIFO) is a bad idea in theory and in practice. In order for it to work and maintain a static file size without keeping the entire cache in memory, the game has to constantly rearrange the contents of the cache file as new scenery is added. That is very inefficient and results in IO activity that is directly proportional not only to the amount of scenery you are downloading, but also to the size of your cache file. If for whatever reason you want to use a rolling cache, I would go with a relatively small file size. Using the manual cache makes much more sense. Too bad the interface is so laggy and inefficient.
What would be nice is a regular old simple cache folder. Just a basic folder with cached scenery files that accumulate and can be cleaned out every once in a while. Most of us don't need a FIFO queue packed into a single file.
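Something along the lines of this minimal sketch of a folder cache with an eviction pass; the folder name and size cap are made-up example values, and MSFS does not actually expose an option like this:

```python
from pathlib import Path

# Minimal sketch of the "plain cache folder" idea above: cached files simply
# accumulate, and a cleanup pass deletes the oldest ones once the folder grows
# past a size cap.
CACHE_DIR = Path("scenery_cache")   # hypothetical cache folder
MAX_BYTES = 8 * 1024**3             # 8 GB cap, purely an example value

def cleanup(cache_dir: Path = CACHE_DIR, max_bytes: int = MAX_BYTES) -> None:
    files = [f for f in cache_dir.rglob("*") if f.is_file()]
    files.sort(key=lambda f: f.stat().st_mtime)      # oldest first
    total = sum(f.stat().st_size for f in files)
    for f in files:                                  # evict until under the cap
        if total <= max_bytes:
            break
        total -= f.stat().st_size
        f.unlink()

if __name__ == "__main__":
    CACHE_DIR.mkdir(exist_ok=True)
    cleanup()
```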
So why do we not have an official reply from one of the devs on this matter? That would be nice and would help clarify the issue.
Maybe it's because I have fast RAM and an NVMe drive, but I honestly have zero issues with a 4 GB RAM cache. I also don't save it on reboot, so any issues after updates don't arise. The SU5 hotfix only fixed the blindingly obvious: take a trip over London, New York or Paris, and with no cache you will still see plenty of popping and even some culling, not to mention your PC having to download it all again, which again slows the new data.
They fixed it in SU6