CPU question

In this example we tested with SMT off, but we are not saying that this will run faster on all versions of MSFS, nor do we believe that SMT off is the best solution.

I am just showing, as an example, that a change in the number of cores a CPU presents to an application does not necessarily equal a change in performance.

I am just saying that this was the case with the version of MSFS we tested. (The above data was obtained as of SU10 Beta 1.27.09.)
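For reference, all an application can directly see is the logical-processor count the OS reports; with SMT/HT off that number simply equals the physical core count. A trivial, generic C++ check (not MSFS code):

```cpp
#include <cstdio>
#include <thread>

int main() {
    // Number of logical processors the OS reports to the application.
    // With SMT/HT disabled this equals the physical core count;
    // with it enabled it is typically twice that.
    std::printf("logical processors visible: %u\n",
                std::thread::hardware_concurrency());
}
```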

I agree with that :slight_smile:

HT is also not a real doubling of cores (quite the opposite), and there are rare situations where the overhead (two threads more than fully loading a real core) is counterproductive.

I only wanted to mention that a general recommendation to disable HT might not be ‘good’. In many more cases HT brings a benefit (especially if the application needs many threads).

Yes, I agree very much.

HT is an effective solution in many cases, but when thread scheduling is biased towards the same physical core, the heat generated by that one core can end up slowing down the whole processor.
This is because the heat of an individual core frequently limits overall processing performance, and it does not always track the heat of the processor package as a whole.

However, this is a narrow scenario and difficult to reproduce, so results are not necessarily identical even when the same player repeats it on the same PC along the same airways.
It will not always be the same because Windows does the scheduling, and that makes it hard to measure.

Hmmm… not sure about that… I thought the core usage cycles / rolls around the cores.

I think the negative effect can happen because HT “shares cores”. Two threads, which the OS presents as each having a full core, actually use the same physical core, so each of them gets a bit less power. And don’t forget there is a general overhead to managing HT. So, if the game does not really need the number of threads (virtual cores) you effectively have, HT can have a negative effect. With newer processors, which generally have more real cores, the chance of a benefit is higher (e.g. my 8700K with 6/12 vs. yours with 8/16, i.e. 12 vs. 16 threads to get the best benefit from HT). Thus, if MSFS does not use more than 8 threads (your case), it can be better to disable HT.
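Here is a rough, untested sketch of what I mean (Windows / C++). It assumes the common numbering where logical CPUs 0 and 1 are SMT siblings on the same physical core and CPU 2 sits on a different core — that is typical but not guaranteed, so check your own topology first:

```cpp
// Compare two busy threads pinned to SMT siblings (logical CPUs 0 and 1)
// versus two separate physical cores (logical CPUs 0 and 2).
// Assumption: 0/1 share a core and 2 is on another core.
#include <windows.h>
#include <chrono>
#include <cstdio>
#include <thread>

// Spin on some integer work so the core's execution units stay busy.
static void busyWork(volatile long long* out) {
    long long acc = 0;
    for (long long i = 0; i < 2'000'000'000LL; ++i) acc += i ^ (i >> 3);
    *out = acc;
}

// Run two busy threads pinned to the given logical CPUs; return seconds.
static double runPinned(DWORD_PTR cpuA, DWORD_PTR cpuB) {
    volatile long long r1 = 0, r2 = 0;
    auto start = std::chrono::steady_clock::now();
    std::thread t1([&] {
        SetThreadAffinityMask(GetCurrentThread(), DWORD_PTR(1) << cpuA);
        busyWork(&r1);
    });
    std::thread t2([&] {
        SetThreadAffinityMask(GetCurrentThread(), DWORD_PTR(1) << cpuB);
        busyWork(&r2);
    });
    t1.join();
    t2.join();
    return std::chrono::duration<double>(
               std::chrono::steady_clock::now() - start).count();
}

int main() {
    std::printf("same core, SMT siblings (0,1): %.2f s\n", runPinned(0, 1));
    std::printf("two separate cores      (0,2): %.2f s\n", runPinned(0, 2));
}
```

On a system where 0 and 1 really are siblings, the first run should take noticeably longer, because the two threads split one core’s execution resources.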

At least, that’s how I understood HT :slight_smile:

I wrote “for MSFS” for a reason :wink:.

That HT gives advantages when it is utilized wisely, as in some CAD, 3D modelling/rendering or image-tweaking tools, is not in question either.

But as mentioned… it is also not a general rule for MSFS that disabling HT will bring better performance. And maybe with the next release it is all ‘new’ again :slight_smile:

Patterns in which SMT performs worse than non-SMT in gaming applications include the following:

  1. Processor utilization under SMT
    Of course, SMT shares processor resources.
    SMT is a mechanism that fills the processor pipeline by giving a core two instruction ports where it originally had one and submitting a larger number of instructions, thereby raising the core's internal utilization.
    An x86-64 processor does not execute the submitted x86-64 instructions directly; it first converts them into micro-operations and executes those. The decode stage that performs this conversion is the part of the processor that generates the most heat.
    Of course, the execution pipeline also tends to heat up with the number of instructions packed into it.
    The CPU's automatic clock boost is governed by this heat, which makes it harder to raise the clock.

  2. Heat from assigning work to the same core under SMT
    When a large number of instructions is submitted to the same physical core, the amount of decoding rises, so handling two logical CPUs (SMT) generates more heat than handling one (non-SMT).
    If the work is not spread across other cores and happens to land on the two logical CPUs of the same SMT core, heat generation increases and the clock headroom available for automatic boosting shrinks accordingly.

  3. Serialized application threads
    Not everything inside an application can be executed in parallel; eventually the threads responsible for rendering the frame are processed serially.
    If one of those threads is delayed, the others have to wait for it.
    As these serialized threads slow down, they more and more often fail to respond within the required time window. (The main thread falls into this category; see the sketch after this list.)
    For applications with tight time budgets, this leads to stuttering (MSFS) or even audible glitches (e.g. digital audio workstations).
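
Here is a rough sketch of point 3 (generic C++ with made-up timings, not MSFS code): the worker threads run in parallel, but the main thread has to wait for the slowest one and then do its own serial part, and if the total does not fit into the frame budget you get a stutter.

```cpp
// Rough sketch of why one slow, serialized thread causes stutter:
// workers run in parallel, but the main thread must wait for all of
// them and then finish its own serial work before the frame deadline.
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

using namespace std::chrono;

int main() {
    const auto frameBudget = milliseconds(16);   // ~60 fps target
    for (int frame = 0; frame < 5; ++frame) {
        auto start = steady_clock::now();

        // Parallel part: worker threads (terrain, AI, audio, ...).
        std::vector<std::thread> workers;
        for (int w = 0; w < 4; ++w)
            workers.emplace_back([w] {
                // One worker occasionally runs long, e.g. because it
                // shares its core with an SMT sibling or another process.
                std::this_thread::sleep_for(milliseconds(w == 0 ? 14 : 6));
            });
        for (auto& t : workers) t.join();        // the main thread waits

        // Serial part: main/render-thread work that cannot be parallelized.
        std::this_thread::sleep_for(milliseconds(4));

        auto total = duration_cast<milliseconds>(steady_clock::now() - start);
        std::printf("frame %d: %lld ms %s\n", frame,
                    static_cast<long long>(total.count()),
                    total > frameBudget ? "(missed budget -> stutter)" : "");
    }
}
```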

Throughput computing and time-critical computing are similar yet different.
MSFS demands both, which is what makes it difficult.
Neither is better than the other; it depends.

Yes, image adjustment tools and the like are mainly about running to completion as fast as possible once you hit the button, so it is no problem that there is no user input during the process, as long as total throughput improves.
Tools like that, which need a large volume of instructions processed over a stretch of time, are well suited to SMT.
Of course, this does not mean SMT is unsuitable for MSFS, but the requirements differ from those of MSFS, which needs a consistent response within every slice of time.
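
As a toy illustration of the throughput case (generic C++, not from any real tool): an image filter just splits all the rows across every logical processor, and only the moment the last chunk finishes matters, so extra SMT threads usually shorten the total runtime.

```cpp
// Toy throughput job: brighten an "image" by splitting its rows across
// all logical processors. There is no per-frame deadline to miss; only
// the total completion time counts.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const int width = 4096, height = 4096;
    std::vector<std::uint8_t> image(static_cast<std::size_t>(width) * height, 100);

    unsigned threads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < threads; ++t)
        pool.emplace_back([&, t] {
            // Each thread processes an interleaved set of rows.
            for (int y = static_cast<int>(t); y < height; y += static_cast<int>(threads))
                for (int x = 0; x < width; ++x) {
                    auto& p = image[static_cast<std::size_t>(y) * width + x];
                    p = static_cast<std::uint8_t>(std::min(255, p + 40));
                }
        });
    for (auto& t : pool) t.join();
    std::printf("done with %u threads, sample pixel = %d\n",
                threads, static_cast<int>(image[0]));
}
```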