I seem to be the only one thinking this so I thought I’d ask this question to see if it makes any sense at all.
Does it make sense to have AI upscale and downscale a single image (say 2K) rather than downloading mipped DDS files that already contain all the LODs and loading/unloading them to/from memory? Could it conceivably be faster to let AI upscale/downscale at run time? Surely it would take less bandwidth/memory at every stage to work with a single image, but does it make sense? Can it be as fast and efficient? And what would happen if multiple 'AI processors' were running concurrently on different cores?
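For context, here's my own back-of-envelope math (not from any sim's code) on the "less memory" part. A full mip chain only adds about one third on top of the base image, so storing all the LODs in a DDS is cheaper than it might sound:

```python
def mip_chain_bytes(width, height, bytes_per_texel=4):
    """Total bytes for a base level plus every mip down to 1x1."""
    total = 0
    while width >= 1 and height >= 1:
        total += width * height * bytes_per_texel
        if width == 1 and height == 1:
            break
        width = max(1, width // 2)
        height = max(1, height // 2)
    return total

base = 2048 * 2048 * 4                    # a single 2K RGBA image
full = mip_chain_bytes(2048, 2048)
print(f"base only: {base / 2**20:.1f} MiB")
print(f"with mips: {full / 2**20:.1f} MiB ({full / base:.3f}x)")
```

So the mip chain costs roughly 1.33x the single image, a geometric series, not 12 full copies.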
This is what I hear them saying when they talk about new architectures. A new approach to textures would allow the same or better imagery that would take less space and bandwidth. It technically uses the ‘same assets’ but in a different way.
Any thoughts from those who know more than I?
AI processes run most efficiently on the GPU, and the AI model itself also has to be loaded into GPU memory.
Running AI processes to upscale/downscale textures would take a significant amount of GPU resources. That reduced headroom would mean less memory available for other textures and 3D models, and lower FPS, because the GPU's processing units would be partly busy running the AI.
Hi. Thanks for the response. I know very little about it, but I tend to agree with you that it would not be faster or more efficient, so my interpretation of what I hear Jorg and Seb saying about the new approach is probably wrong. I guess only time will tell as they fill us in on how they will stream low-res textures for everything except what is close enough to need higher resolution. It has to be something different from what's being done now.
I think all they said is that a lot of the Asobo assets would be downloaded on demand, instead of pre-downloaded. That’s it. Example: a world update would be basically pushed to a server and that’s it. You won’t have to download the ~20GB full package to use it. If you fly around one city you maybe only need 300MB out of those 20GB. Of course there would be local caching.
This is of course aimed at the Xbox crowd more than PC. On a PC if you run out of space you just go to the store and buy a new drive.
To implement this they'd definitely have to do something with their network infrastructure, because it's clearly not up to the demand, judging by the download speeds.
Well, it's a bit more than that. Listen to what Seb says at 9:49 about mip levels and only downloading the mip level needed. As I understand it, the current system loads all the information needed to create everything at your chosen Terrain and Object LOD settings, and that includes every mip level for everything, whether it's visible at that moment or not. I had thought they might use AI to upscale/downscale a single image, but now I think DDS files will simply have their individual mips loaded as needed.
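To illustrate what "only downloading the mip level needed" could look like (a hypothetical sketch of my own, not Asobo's actual logic), you can pick the coarsest mip whose resolution still covers the texture's on-screen footprint and stream only that level and coarser:

```python
import math

def needed_mip(base_res, texture_world_size_m, distance_m,
               screen_height_px=1080, vfov_deg=60.0):
    """Mip level where one texel maps to roughly one screen pixel."""
    # Projected on-screen height of the textured surface, in pixels.
    angular = 2 * math.atan(texture_world_size_m / (2 * distance_m))
    pixels = screen_height_px * angular / math.radians(vfov_deg)
    max_mip = int(math.log2(base_res))
    if pixels <= 0:
        return max_mip                     # farthest: smallest mip
    level = math.log2(base_res / pixels)
    return max(0, min(max_mip, math.floor(level + 0.5)))

# A 2048px texture covering 100 m of ground, seen from ~450 m away,
# only needs mip level 3 (i.e. a 256px version), so levels 0-2 never
# have to be downloaded at all.
print(needed_mip(2048, 100.0, 450.0))
```

The real engine would use anisotropy, view angle, and screen coverage, but the principle is the same: the streamer can know which mip is enough before fetching anything.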
I think the AI part is nonsense. AI involves a learning process, and if there's no feedback there's no way it can learn. I'm not sure how that's feasible with a landscape renderer. I think it's marketing-speak for "algorithm".
Unless I’ve misinterpreted your comment, you’re effectively describing Deep Learning Super Sampling (DLSS), the NVIDIA tech, or AMD’s FidelityFX. It takes a low-resolution render and upscales it on the fly using AI. It’s not perfect, but it is very effective (DLSS more so than AMD’s offering).
Is this where improved performance in a single-screen environment is achieved by breaking performance in a head-tracking/multi-screen/VR environment?
I had originally thought that MS/Asobo had found a way to implement the DLSS approach in software within the game architecture, but I neglected just how much computing power that would require. I thought maybe they had developed 'AI processing code' that could negate the need for powerful modern GPUs like NVIDIA's RTX 4000 series, or whatever AMD is up to these days, and maybe even work on existing Xboxes. But now I really think all they are doing is changing the way mip levels are handled, which is kind of boring compared to where my mind was going, thinking that AI had changed everything and possibly removed the need for mipped files. I was wrong... not the first time.
Well, if the terrain is entirely streamed at flight time, then you would get “heavy textures” close to your aircraft, and lighter textures beyond that until you get closer to them. No need for the CPU to load them up and calculate LODs if the server is giving you the required LOD.
I don’t think AI will be involved.
What Seb said was that 80% (made up number on my part, but probably close) of the textures in any given scene never get to the point where full size is needed. So this could be a huge bandwidth saver.
Yes, you’re right that if the texture is local it may help, but the whole point is that there are too many textures to keep them all local; better to stream small versions of them.
I think what they’re saying is that they are using AI to analyze download patterns across the system and, from that, work out the best mix of what to stream when.
Like Blackshark.ai today: it was trained and is now used to analyze landscape features to build buildings that match what's in the satellite photos (and now trees, too).
I don’t think it would be that complicated. The server knows your location and it only has to send detailed textures within a certain radius. If you fly around an airport with Bing Maps, then turn off Data, and fly straight out from the airport, you will get to the boundary where the cached terrain ends.
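One way a "detail radius" could work (purely illustrative; the actual streaming logic is Asobo's and not public) is to bucket terrain tiles by distance from the aircraft and request a coarser mip per ring. The ring boundaries and mip numbers below are made up for the sketch:

```python
RINGS = [          # (max distance in km, mip level to request)
    (2.0, 0),      # full resolution right around the aircraft
    (10.0, 2),
    (40.0, 4),
    (float("inf"), 6),
]

def mip_for_distance(km):
    """Coarsest acceptable mip for a tile at the given distance."""
    for max_km, mip in RINGS:
        if km <= max_km:
            return mip
    return RINGS[-1][1]

print([mip_for_distance(d) for d in (0.5, 5, 25, 100)])
```

As you fly, tiles migrate inward across the rings and the client requests progressively finer mips only for those tiles, which matches the "boundary where the cached terrain ends" behavior you can see when Data is off.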
80% of the textures right next to you don't need to be fully detailed; that's the point of what they learned. It's only the textures in your very, very immediate vicinity that have full detail, and that level is rarely used, even on planes relatively close to you.
I’m exaggerating some here, as I don’t have access to what Seb used to make these statements. But when you’re at 1500 ft, you don’t need a full 8K texture on the ground below you. Not until you’re standing on the tarmac next to your plane.
That also applies to terrain ahead of you, as it becomes more distant, requiring many intermediate steps so the jumps in quality are not as jarring.
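A quick sanity check of the "no 8K needed at 1500 ft" idea (my numbers, not Seb's): how much ground does one screen pixel cover when looking straight down from that altitude? The field of view and screen height are assumed values:

```python
import math

def ground_m_per_pixel(altitude_m, screen_height_px=1080, vfov_deg=60.0):
    """Metres of ground per screen pixel, looking straight down."""
    visible_ground_m = 2 * altitude_m * math.tan(math.radians(vfov_deg) / 2)
    return visible_ground_m / screen_height_px

alt = 1500 * 0.3048                       # 1500 ft in metres
print(f"~{ground_m_per_pixel(alt):.2f} m of ground per screen pixel")
# An 8K texture over a 1 km tile stores 1000/8192 ≈ 0.12 m per texel,
# several times finer than the screen can even show from this altitude.
```

So from 1500 ft a roughly quarter-resolution mip is already beyond what the display can resolve, which is exactly why streaming only the needed mip saves so much bandwidth.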