CPU question

How do you get it onto the first core (is that core 0)?
Probably that's the reason I don't see more than 4.7 GHz.

Open Task Manager - Details tab - Flightsim…exe - right mouse - Set affinity - deselect the 'last' core … the 100% core load jumps to another core (often core 0) … then you can activate the last core again.
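If you prefer to do this from code instead of clicking through Task Manager, here is a minimal Win32 sketch of the same trick (just an illustration of `SetProcessAffinityMask`, with the target PID passed on the command line; not a tuning recommendation):

```cpp
// Minimal sketch: toggle a process's CPU affinity the way the Task Manager
// dialog does. Pass the PID of the sim's exe on the command line; you may
// need to run elevated. Purely illustrative.
#include <windows.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv) {
    if (argc < 2) { std::printf("usage: affinity <pid>\n"); return 1; }
    DWORD pid = static_cast<DWORD>(std::strtoul(argv[1], nullptr, 10));

    HANDLE proc = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION,
                              FALSE, pid);
    if (!proc) { std::printf("OpenProcess failed: %lu\n", GetLastError()); return 1; }

    DWORD_PTR procMask = 0, sysMask = 0;
    GetProcessAffinityMask(proc, &procMask, &sysMask);

    // Deselect the highest allowed core (the 'last' one in the dialog).
    DWORD_PTR withoutLast = procMask;
    for (DWORD_PTR bit = DWORD_PTR(1) << (8 * sizeof(DWORD_PTR) - 1); bit; bit >>= 1) {
        if (withoutLast & bit) { withoutLast &= ~bit; break; }
    }
    if (withoutLast != 0)
        SetProcessAffinityMask(proc, withoutLast);  // the load hops to another core

    // ... and re-enable it again, like re-ticking the checkbox.
    SetProcessAffinityMask(proc, procMask);

    CloseHandle(proc);
    return 0;
}
```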

I use the Sync All Cores setting (it is a kind of overclocking!) and thus all my cores run at max turbo speed.

To check the previously mentioned funny theory that this game is only a single-core game, you can deselect all cores and leave only one for MSFS… then we will see what happens if MSFS really is a single-core application :wink:

I have the same problem as you. Tried everything. The only way to lower the load on the last core is to raise the render scale to 140/150. Of course this will cost you frames.

I’m done with testing. I cannot find a solution for this. Tried Game Mode, HAGS, G-Sync on/off, windowed mode with DPI settings, etc. Nothing solves this.

If you read my post you will see that I said: "MSFS is a DX11 game, a single-core application, so all the draw calls go through a single core (the one you see at 100%); then, secondarily, it delegates tasks to the other cores."

So I was just saying it depends on the main core. Hardware Unboxed said it was “bizarre” that MSFS was a DX11 game.

Got it now?

Where can I find and adjust the Sync All Cores setting?
A second question I have: I installed Process Lasso to adjust the max speed, but I don't see it.

Okay… then the sentence "MSFS is a single-core game" was a bit misleading.

Yes, in DX11 there is, besides hardware-supported calls, a main thread which synchronizes the graphics requests from tasks/threads. But this doesn't mean the app is in general a single-core app (then every DX11 application would be a single-core application).
In DX12 each thread/core can send requests directly to the GPU (with queues for different workloads), also without a lot of API overhead. But this must be implemented by the developers… things like syncing, resources, etc. - much more work. Something like a main thread still exists, but it is no longer responsible for syncing.
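To picture the difference, here is a minimal sketch of that DX12 pattern (not Asobo's actual code, just the general shape): each worker thread records its own command list in parallel, and one queue submits them all. Error handling and the actual draw commands are omitted.

```cpp
// Sketch of DX12 multi-threaded command recording: per-thread allocators and
// command lists, recorded in parallel, submitted once on a single queue.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC qDesc = {};            // one direct queue for submission
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qDesc, IID_PPV_ARGS(&queue));

    const int kThreads = 4;
    std::vector<ComPtr<ID3D12CommandAllocator>> allocs(kThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kThreads);
    std::vector<std::thread> workers;

    for (int i = 0; i < kThreads; ++i) {
        // Each recording thread needs its own allocator; they are not thread-safe.
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[i].Get(), nullptr, IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&lists, i] {
            // ... record this thread's share of the frame here, in parallel ...
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    return 0;
}
```

In DX11, by contrast, only the immediate context can submit work to the GPU, which is exactly the single synchronizing main thread described above.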

Besides the use of multi-core for graphics, we should also not forget that there are other things where multi-core becomes important: splitting up some calculations, multiplayer, file loading, etc… and here we run into the limit of what can be split (similar to how DX12 needs a lot of objects before we see some benefit). We are just not rendering a video :slight_smile:

I think we were all a bit confused that the newest sim on the market was implemented with DX11 (especially since we know who owns DirectX). DX12 is much more powerful in knowledgeable developer hands (besides that, it's made for multiple platforms).

But I notice I wrote too much… I agree with you that the main-thread limit comes mainly from DX11 and additionally from how the code is implemented (I'm pretty sure there are lots of optimizations possible), but I wanted to clarify that this does not mean it is a single-core app :slight_smile:

It is a BIOS setting.

As mentioned, be aware: it is a kind of overclocking, and if you are not sure about it, don't try it. You also should not overclock the CPU frequency AND set Sync All Cores.

In my ASUS BIOS it is the following setting:

  • Extreme Tweaker menu
    – CPU Core Ratio
    — [Auto] [Sync All Core] [Per Core]

(Note: I also lowered the CPU voltage a little bit, because the ASUS defaults, in my opinion, push hard into the limit.)

I have no experience with that tool, so I can't say anything about it.

In my opinion, using such 'performance optimizer tools' will change nothing.

A year and a half after the last post in this thread, I know. But even with the latest beta running DX12, I'm still experiencing crippling performance due to a single core being maxed out. With all the incredible innovation Asobo has put into FS2020, it blows my mind that this still hasn't been resolved. It was the primary bottleneck in FSX as well. There's got to be something they can do to distribute loads to additional cores. I'm not a developer, but I'd love to know the technical reasons why this can't be done.

Plenty of other games and simulators, especially modern ones, distribute load equally across all my 18 cores. Examples: RDR2, DCS, Spiderman Remastered, God of War, CP2077, Dirt Rally 2, Universe Sandbox, etc. I'm half tempted to build a dedicated PC when the i9-13900K comes out, since it's rumored to have a 5.8 GHz single-core boost clock. I just hate how few PCIe lanes Intel's consumer line has; I use all 48 lanes in my current rig, and moving down to 24 lanes would be rough.

i9-10980XE @ 5GHz all core
64GB DDR4 @ 4.2GHz CL16
EVGA RTX 3090 FTW3 Ultra
Samsung 970 Pro 2TB
HP Reverb G2 - Primary VR headset (latest revision)
2Gb Google Fiber
Windows 11 Pro

I have an 11th-gen CPU and all cores run equally at around 30% on full Ultra. Research 11th gen to see why.

  1. Trade-offs in game scale
    In game development, we generally decide on a target platform and optimize the game for that platform.
    In the case of MSFS, the target platform is probably the Xbox Series X.
    (CPU: 8 cores / 16 threads @ 3.8 GHz; GPU: roughly a 3060 Ti, 12 TFLOPS FP32; memory: 16 GB)

MSFS is in any case large in scale. To be able to handle a large scale, it is necessary to design for it.
It is difficult to extend a design that speeds up a small problem so that it also solves a large problem, so there is a trade-off.

  2. Application parallelization rate
    Even on a processor richer than the target, the game does not speed up in proportion to the number of CPUs.
    Even if processing is distributed evenly across all cores, it will not be faster in all cases.
    As the size of the software increases, the number of areas that can be parallelized decreases, making it more difficult to improve speed.
    Amdahl's law describes the rate of speedup when parallelization is used (see the formula after this list).

  3. Existence of automatic overclocking
    Modern CPUs are automatically overclocked without user control.
    Since the clock increase is greater when there are fewer active cores, it is often faster not to distribute the processing, depending on the nature of the work.

  4. Example of the number of processors and FPS reversing
    This is a chart of the CPU load factor over time as I flew a particular course in MSFS.
    The FPS was higher with 8 cores (SMT off) than with 16 cores (SMT on) in all areas.
    (chart image omitted)
    Of course, this does not hold in all cases, as the load is much higher in VR, etc., but I am attaching it as an example.
    If you are interested in the course details, etc., see the other thread:
    https://forums.flightsimulator.com/t/fps-difference-verification-between-amd-5950x-and-5800x3d-at-haneda-airport-rjtt-34l-straight-out-flight/531349?u=kanadenyan
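For reference, Amdahl's law from point 2, with a small worked example (the numbers are illustrative, not measured):

```latex
% Speedup S on N cores when a fraction p of the work can be parallelized:
S(N) = \frac{1}{(1 - p) + p/N}
% Example: p = 0.5, N = 16  =>  S = 1 / (0.5 + 0.03125) \approx 1.88
% Even with infinitely many cores, S can never exceed 1 / (1 - p) = 2.
```

So if only half of a frame's work parallelizes, sixteen cores buy you less than a 2x speedup, which is why evenly loaded cores do not automatically mean higher FPS.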

Turn HT off; it's not necessary to split physical cores into virtual ones for MSFS, although the resulting difference is small.

Cool your 9900K well and then give her the spurs.

…and lose a good CPU feature… there is normally no reason to disable HT.

In this example we tested with SMT off, but we are not saying that this will run faster on all versions of MSFS, nor do we believe that SMT off is the best solution.

I am just showing as an example that the change in the number of cores a CPU shows to an application does not necessarily equal a change in performance.

I am just saying that this was the case with the version of MSFS we tested. (The above data was obtained as of SU10 Beta 1.27.09.)

I agree with that :slight_smile:

HT is also not a real doubling of cores (rather the contrary), and there are rare situations where the overhead (or rather, the doubled usage of a real core) is counterproductive.

I only wanted to mention that a general recommendation to disable HT might not be 'good'. In many more cases HT brings a benefit (especially if the app needs many threads).

Yes, I agree very much.

HT is an effective solution in many cases, but when thread scheduling is biased towards the same processor core, it often happens that the heat generated by that one core slows down the overall processor speed.
This is because the heat generated by an individual core frequently limits overall processing performance and does not always match the heat of the processor as a whole.

However, this is a limited scenario and difficult to reproduce, so the result is not necessarily identical even when replicated by the same player using the same PC on the same airways.
It will not always be the same because Windows does the scheduling. It is difficult to measure.

Hmmm… not sure about that… I thought the core usage cycles/rotates across cores.

I think the negative effect can happen because HT 'shares' cores. Two threads, which the OS presents as each having full core power, use the same physical core, and so each of them gets a little less power. And don't forget there is overhead to manage HT in general. So if the game doesn't really need all the threads (virtual cores) you effectively have, HT can have a negative effect. With newer processors, which generally have more real cores, the chance of a benefit is higher (e.g. my 8700K with 6/12 vs. yours with 8/16) (so 12 vs. 16 threads to get the best benefit from HT). Thus, if MSFS does not use more than 8 threads (your case), it can be better to disable HT.

At least, that's how I understood HT :slight_smile:
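For the curious, Windows can show this sharing directly. Here is a small sketch using `GetLogicalProcessorInformationEx`; with HT on, each physical core should report two logical processors:

```cpp
// Enumerate physical cores and count the logical processors (HT siblings)
// that share each one. With HT/SMT enabled you should see 2 per core.
#include <windows.h>
#include <cstdio>
#include <vector>

int main() {
    DWORD len = 0;
    GetLogicalProcessorInformationEx(RelationProcessorCore, nullptr, &len);
    std::vector<BYTE> buf(len);
    auto* info =
        reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(buf.data());
    if (!GetLogicalProcessorInformationEx(RelationProcessorCore, info, &len))
        return 1;

    int core = 0;
    for (DWORD off = 0; off < len; ) {
        auto* rec = reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(
            buf.data() + off);
        // Count the bits set in the core's mask: each bit is one logical CPU.
        KAFFINITY mask = rec->Processor.GroupMask[0].Mask;
        int logical = 0;
        for (; mask; mask >>= 1) logical += static_cast<int>(mask & 1);
        std::printf("physical core %d: %d logical processor(s)\n", core++, logical);
        off += rec->Size;
    }
    return 0;
}
```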

I wrote "for MSFS" for a reason :wink:.

That HT gives advantages when it is wisely utilized, as in some CAD, 3D modelling/rendering, or image-tweaking tools, is also beyond question.

But as mentioned… it is also not a general rule for MSFS that disabling HT will bring better performance. And maybe with the next release it's all 'new' again :slight_smile:

Patterns in which SMT performs worse in gaming applications compared to non-SMT include the following:

  1. Processor utilization in SMT
    Of course, SMT shares processor resources.
    SMT is a mechanism to fill the processor pipeline by giving the processor two instruction ports, where it originally had only one, and submitting a large number of instructions, thus increasing the internal utilization rate of the processor.
    The x86-64 processor does not execute the submitted x86-64 instructions directly, but first converts them into micro-operations and then executes those. The decode stage that converts instructions is the one that generates the most heat in the processor.
    Of course, the execution pipeline is also prone to heating up due to the number of instructions in flight.
    The automatic boost of the CPU clock is governed by heat, which makes it difficult to raise the clock.

  2. Heat generated by same-core assignment in SMT
    When a large number of instructions are submitted to the same CPU core, the number of instructions to be decoded increases, and thus more heat is generated when two logical CPUs are active (SMT) than when only one is (non-SMT).
    If the threads are not allocated to other cores and happen to be executed on the same SMT core, the heat generation increases, and the clock headroom available for automatic overclocking decreases accordingly (see the sketch after this list).

  3. Serialized application threads
    Not all of an application's internals can be executed in parallel; eventually the threads responsible for screen rendering are serialized for processing.
    If there is a thread whose processing is delayed, the others have to wait for it.
    As the speed of these serialized threads deteriorates, more often than not they are unable to respond within the required period of time. (The main thread falls into this category.)
    For tightly time-critical applications, this leads to stuttering (MSFS) or even audible glitches (e.g. digital audio workstations).
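As a rough illustration of point 2, one way an application could keep two hot threads off the same physical core is to pin them to every other logical processor. This is a sketch, not how MSFS actually schedules; it assumes the common layout where logical CPUs 2n and 2n+1 are the SMT siblings of core n, which is typical on Intel desktop parts but should be verified per machine.

```cpp
// Hedged sketch: pin each worker to every other logical CPU so no two workers
// land on the same physical core's SMT siblings. Assumes siblings are the
// adjacent pairs (0,1), (2,3), ... - verify this layout on your own machine.
#include <windows.h>
#include <cstdio>
#include <thread>
#include <vector>

void worker(int id) {
    DWORD_PTR mask = DWORD_PTR(1) << (2 * id);      // logical CPU 0, 2, 4, ...
    SetThreadAffinityMask(GetCurrentThread(), mask);
    std::printf("worker %d pinned to logical CPU %d\n", id, 2 * id);
    // ... heat-sensitive, time-critical work would run here ...
}

int main() {
    // With HT on, half the logical CPU count is a rough guess at physical cores.
    int cores = static_cast<int>(std::thread::hardware_concurrency()) / 2;
    std::vector<std::thread> pool;
    for (int i = 0; i < cores; ++i) pool.emplace_back(worker, i);
    for (auto& t : pool) t.join();
    return 0;
}
```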

Throughput computing and time-critical computing are similar but different.
MSFS demands both. It is difficult.
Neither is better than the other; it depends.

Yes, image-adjustment tools and the like mainly run at full speed to completion once started, so there is no problem as long as no user input is needed during the process and the total throughput improves.
Tools that need a large number of instructions processed over a stretch of time are well suited to SMT.
Of course, this does not mean SMT is unsuitable for MSFS, but the requirements are different from those of MSFS, which demands a constant response throughout.
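To make the throughput vs. time-critical distinction concrete, here is a toy sketch (all numbers are invented): a batch job only cares about total runtime, while a frame loop counts every frame that misses its 16.7 ms budget as a stutter.

```cpp
// Toy contrast between throughput work (only the total matters) and
// time-critical work (every missed frame deadline is a visible stutter).
#include <chrono>
#include <cstdio>
#include <thread>

using namespace std::chrono;

void simulate_work(microseconds cost) { std::this_thread::sleep_for(cost); }

int main() {
    // Throughput style: 100 work items, only the end-to-end time matters.
    auto t0 = steady_clock::now();
    for (int i = 0; i < 100; ++i) simulate_work(2ms);
    auto total = duration_cast<milliseconds>(steady_clock::now() - t0);
    std::printf("batch finished in %lld ms\n",
                static_cast<long long>(total.count()));

    // Time-critical style: each frame must fit a 16.7 ms budget (60 FPS).
    const auto budget = microseconds(16700);
    int missed = 0;
    for (int frame = 0; frame < 100; ++frame) {
        auto f0 = steady_clock::now();
        simulate_work(frame % 10 == 0 ? 20ms : 2ms);  // occasional slow frame
        if (steady_clock::now() - f0 > budget) ++missed;
    }
    std::printf("%d of 100 frames missed the 16.7 ms budget (stutters)\n", missed);
    return 0;
}
```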