Some of the videos I have watched on this say the claim is bold and has no real test data to back it up. However, if there is a shred of credibility behind any of these claims, then this will be massive for the hardware industry, more so with the rumour of RDNA 5 being AMD’s first multi-chip module (MCM) GPU.
This could have a large impact on the sim, given that one of the biggest complaints is that MSFS doesn’t take advantage of all cores very well.
Tom’s Hardware is the original source, but credit has to go to RGT, where I first spotted this.
My BS meter is fully pegged in the red zone here but I so hope I am wrong.
Also, the 100x claim requires recompiling code with their compiler. Without that, they claim only a 2x increase in performance (which is still outrageous). If it’s really true, I’d expect Intel or Apple to buy these guys out as soon as they can verify the claims.
Yep, that’s exactly the opinion of RGT. I won’t dismiss it completely, as it is an issue that has bewildered a lot of gamers for a long time: why can’t they get to grips with multi-threading? The real reason comes down to the multitude of CPU variations that are possible. You would literally have to develop many variations of the same code to compensate for extra cores/threads. If this gizmo can examine code and streamline the process on a machine-by-machine basis, then it is a game changer (pun not intended).
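For what it’s worth, the core-count side of that problem is usually handled at runtime rather than by shipping many builds: the engine asks the OS how many hardware threads exist and sizes its worker pool to match. A minimal C++ sketch, with process_chunk and the chunking scheme invented purely for illustration:

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical per-range work item; stands in for whatever the engine does.
void process_chunk(std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i) { /* per-item work */ }
}

void process_all(std::size_t total_items) {
    // Query how many hardware threads *this* machine has at runtime,
    // rather than shipping a separate build per core count.
    const unsigned n = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (total_items + n - 1) / n;
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i) {
        const std::size_t begin = i * chunk;
        const std::size_t end = std::min(total_items, begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(process_chunk, begin, end);
    }
    for (auto& w : workers) w.join();  // wait for every worker to finish
}

int main() { process_all(1'000'000); }
```

The hard part, of course, is making the work itself divisible in the first place, which is where game code falls down.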
The 100x claim is Mickey Mouse territory, IMO. I fail to see how the 4-6 busy threads of MSFS suddenly become 20x more productive, even with the remaining 26 threads of a 32-thread CPU pitching in under a best-case scenario. I could easily see 2x increases on code that is recompiled, and that would be huge. On code that is changed on the fly, will there be latency issues?
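As a sanity check, Amdahl’s law puts a hard ceiling on claims like this. With a parallel fraction $p$ of the workload and $N$ cores (the fractions below are illustrative, not measured):

$$S(N) = \frac{1}{(1-p) + \dfrac{p}{N}}, \qquad S_{\max} = \lim_{N \to \infty} S(N) = \frac{1}{1-p}$$

Even if 90% of the frame work parallelized perfectly ($p = 0.9$), the ceiling is 10x regardless of core count; a 100x speedup needs $p \geq 0.99$, which nothing as interdependent as a flight sim comes close to.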
One thing that has not been mentioned so far is the inevitable issue of copyright. Does this break the terms and conditions of a lot of games? Would game devs have to accept a licence to make use of the features? And what about the security of any stored data that has been recompiled, remembering that this will be run on games where account information is used?
The piece mentions 256-core CPUs, which is high-end data centre territory. Those companies tend to write their own code to streamline pipelining, so I doubt there will be much, if any, benefit there. I also very much doubt they will be interested in something like this until they know all the pitfalls.
I believe the real reason is that parallelization is just plain difficult. It’s a lot easier when you have just a clearly defined, static data input and a set of specific algorithms to apply to that data, which is what happens with dedicated scientific data analysis and modeling on supercomputers.
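For illustration, that easy case looks like this: a minimal C++17 sketch with invented data, where the runtime can fan the work out across every core because no element depends on any other:

```cpp
#include <algorithm>
#include <cmath>
#include <execution>
#include <vector>

int main() {
    // Static, known-size input and one pure function per element:
    // the textbook-easy case for automatic parallelization.
    std::vector<double> samples(1'000'000, 2.0);
    std::transform(std::execution::par_unseq,
                   samples.begin(), samples.end(), samples.begin(),
                   [](double x) { return std::sqrt(x) * 0.5; });
}
```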
It’s a whole different animal when you are talking about something like MSFS where various, separate tasks need to synchronize toward an aligned audio-visual output occurring dozens of times per second, all of which is dependent on real-time user input from many sources plus Internet-streamed data. I cannot see how any form of hardware gizmo can parallelize this better than deliberate code design.
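A toy frame loop makes the contrast concrete (the stage names and stub bodies here are invented for illustration, not Asobo’s actual architecture):

```cpp
#include <future>

struct Frame {};  // placeholder for each stage's output

// Stub stages; in a real sim each depends on live input and streamed data.
Frame simulate_physics()  { return {}; }
Frame process_audio()     { return {}; }
Frame stream_world_data() { return {}; }
void  present(const Frame&, const Frame&, const Frame&) {}

int main() {
    for (int frame = 0; frame < 600; ++frame) {
        // The independent stages *can* fan out across cores...
        auto physics = std::async(std::launch::async, simulate_physics);
        auto audio   = std::async(std::launch::async, process_audio);
        auto world   = std::async(std::launch::async, stream_world_data);
        // ...but every frame ends in a mandatory join before anything can
        // be presented, dozens of times per second. That serial spine is
        // what caps the speedup, and no hardware trick removes it.
        present(physics.get(), audio.get(), world.get());
    }
}
```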
That’s not how compilers work; they don’t compile user data. Also, recompiling requires the source code or LLVM IR from the developer, so for MSFS this means Asobo would need to be using this compiler within their tech stack.
Code can be written with parallel processing in mind, but the variation in hardware cannot be overcome without a lot of work. This is why I think this solution has merit in the approach they are taking. If this system can rewrite old code to work better, then it falls into holy grail territory. We would need to see it working before drawing any conclusions.
Except this is not compiling original code. It professes to recompile anything sent to the CPU, and that will include sensitive information.
That’s not really compiling. That’s just promising a CPU whose microcode handles execution of application code in a way that takes better advantage of available cores.
Multithreading something like video encoding is simpler than MSFS: you just take each block (usually 8x8 pixels), or even each frame, and spread the work across multiple cores. MSFS is a whole different animal.
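A rough sketch of that block-level fan-out in C++ (the frame size and per-block work are invented; real encoders do DCTs and quantisation per block):

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

constexpr int W = 1920, H = 1080, B = 8;  // frame and block size (illustrative)

// Stand-in for the real per-block work (DCT, quantisation, etc.).
void encode_block(std::vector<std::uint8_t>& frame, int bx, int by) {
    for (int y = by * B; y < (by + 1) * B; ++y)
        for (int x = bx * B; x < (bx + 1) * B; ++x)
            frame[static_cast<std::size_t>(y) * W + x] ^= 0xFF;
}

int main() {
    std::vector<std::uint8_t> frame(static_cast<std::size_t>(W) * H, 0);
    const int cols = W / B, rows = H / B;
    const int n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (int t = 0; t < n; ++t)
        pool.emplace_back([&, t] {
            // Blocks never depend on each other, so threads can take
            // alternating rows of blocks with no locking at all.
            for (int by = t; by < rows; by += n)
                for (int bx = 0; bx < cols; ++bx)
                    encode_block(frame, bx, by);
        });
    for (auto& th : pool) th.join();
}
```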
Yep, I know how it works, along with the pitfalls of doing this with complex, meandering code. I am just as sceptical as everyone here. It is the equivalent of breaking a rack of 15 pool balls and expecting to calculate how they will all fall into the same pocket in sequence.
However, this seems to me to be the way to go in theory: assessing code on a case-by-case basis is the only credible route to making cores run at their full potential. Being able to do this on the fly without introducing a latency cost seems a bit Walter Mitty to me. The final paragraph of the article sums it all up:
For now, we are taking the above statements with bucketloads of salt. The claims about 100x performance and ease/transparency of adding a PPU seem particularly bold. Flow says it will deliver more technical details about the PPU in H2 this year. Hopefully, that will be a deeper dive stuffed with benchmarks and relevant comparisons.
In other words, the next details to be revealed will be technical information, not proof of concept. Benchmarks are only hoped for, not a firm commitment.
Always wait for independent reviews before spending your money on anything.
It needs to show credibility before anyone would consider that. They are claiming to be in the later stages of development, and I would be very surprised if AMD, Intel, or Nvidia had not already been in touch. This could all just be a publicity stunt to push the price up.
Either way, with the introduction of more capable AI, I just think this is a development path to keep an eye on. RDNA 5 is reputed to be the first MCM generation in AMD’s GPU range, so AMD must feel they have cracked the interposer required to achieve it.