Optimizing the BIOS settings for a Gigabyte B550 Gaming X V2 motherboard?

I have just upgraded my original Gigabyte B550M DS3H motherboard to a newer Gigabyte B550 Gaming X V2.

I read a “tweaking” article about optimizing the BIOS settings for this broad class of motherboard:
https://forums.blurbusters.com/viewtopic.php?f=10&t=10232#p82731

Note that I am not a die-hard overclocker with a liquid-helium CPU/GPU cooler, nor do I have my system in a liquid-nitrogen bath with magnetic stirrers to keep the cryogenic fluids moving.

Likewise, I don’t go insane tweaking CPU/memory/GPU clocks, voltages, timings, and so forth.

I usually pick what seems to me to be the obvious “low-hanging fruit”: things like enabling CPU access to VRAM, setting the RAM to its XMP profile, turning off virtualization, and so on, and disabling features I obviously don’t need, like “Wake On [something]”, preferring a more conventional “Wake On Power Button”.

I have an AMD Ryzen 7 5800X3D and an ASRock Radeon RX 6800 Challenger Pro with 16 GB of VRAM, along with 32 GB of G.Skill Aurora “gaming” memory (DDR4).

I am interested in whatever I can reasonably do that can/will improve the performance of my system.

No.  I don’t expect a “magic bullet” that will transform my humble system into the latest and most expensive Gaming Beast.

What I hope to find is expertise from people who do make a living tweaking their systems, so that I can use my hard-earned hardware in the most efficient way possible: maximizing performance without spending hours agonizing over tenths of a volt or individual CAS latency cycles.

I understand that the “stock” (default) settings aren’t a very efficient use of the hardware’s capabilities, opting for a very conservative configuration.

Perhaps there are settings buried a little deeper under the hood that can make a distinct and measurable improvement in the way MSFS-2020/2024 performs?

I am interested in whatever advice and experience you folks can offer.

Thanks!

1 Like

Hi @Jimrh1993 ,

Thank you for your post! Your topic has been moved to a sub-category of the User Support Hub.

The General Discussion category is meant for discussions that fall outside of our other sub-categories.

If you would like other users to help you with an issue you are experiencing in the sim, consider these User Support Hub categories for your future post:

Aircraft & Systems
ATC, Traffic & NAVAIDs
Crashes (CTDs)
Hardware & Peripherals
Install, Performance & Graphics
Scenery & Airports
User Interface & Activities
Virtual Reality (VR)
Weather & Live Weather
Miscellaneous

2 Likes

Thanks for your help!

I was looking for the “hardware” section, but didn’t find it. (Hopefully that’s where you plopped it.)

Thanks again for your diligent efforts.

2 Likes

If your RAM is running at the correct speed and you’re not experiencing any issues, I honestly wouldn’t bother changing anything.

I’ve done it in the past but just don’t think it’s worth the effort these days.

3 Likes

I agree that stressing over fractions of a volt to gain an extra 50 MHz is pointless.

My problem is that, to a great extent, I really don’t know what matters and what is just stale snake oil that hasn’t been changed in years.  (And don’t get me started on the oil’s filter!!)

I am also well aware that MSFS-20/24 isn’t an Excel spreadsheet macro - it tends to beat on systems in strange, new, and bizarre ways - making users want to maximize horsepower to eliminate stuttering and silent-film frame rates.

This is why I am asking.

Hopefully someone will be able to help guide both myself and others along this rocky path.

Since you have RBAR, >4 GB memory access, and XMP enabled, there’s not much else to do unless you want to get into tweaking secondary and tertiary memory timings or hacking the 5800X3D to allow overclocking.

For the memory, I’d recommend asking questions on overclock.net. They are the memory experts (not that there aren’t any here).

As for unlocking the CPU: I wouldn’t do that.

A couple of things you can do:

  • Adjust your CPU fan curves to allow temps to get up to (but not over) 85°C running Cinebench (see the sketch after this list).

  • Disable Virtualization. You don’t need it unless you are running Virtual Machines (and I’m betting you aren’t.)

  • Experiment with disabling Hyperthreading. Test performance with CapFrameX.
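
To picture what the BIOS is doing with a fan curve: it’s just piecewise-linear interpolation between (temperature, duty) breakpoints. Here’s a minimal Python sketch of that mapping - the breakpoints are made-up examples, not recommended values:

```python
# Minimal sketch of the piecewise-linear fan curve a BIOS typically implements.
# The breakpoints below are hypothetical examples; substitute your own pairs.

CURVE = [(30, 25), (60, 50), (75, 80), (85, 100)]  # (temp in °C, fan duty in %)

def fan_duty(temp_c: float) -> float:
    """Linearly interpolate the fan duty (%) for a given CPU temperature."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    if temp_c >= CURVE[-1][0]:
        return CURVE[-1][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if t0 <= temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)

for t in (25, 45, 70, 90):
    print(f"{t:3d} °C -> {fan_duty(t):5.1f} % fan duty")
```

The Cinebench run is how you find out whether your top breakpoint actually holds a full load under 85°C.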

1 Like

You’re as bad as me!  (I can’t count either. :wink:)

  1. My fans (GPU, CPU, and case fans) are all set to start at 25%, then speed up from about 35°C to 100% at 60°C. (Cooling is King.)  I haven’t tried torture testing yet.
  2. Virtualization is disabled.
  3. I have no idea what AMD calls “hyperthreading”.
    Interestingly enough, I’ve heard conflicting information about the advisability of enabling/disabling hyperthreading.  I’ll go check on it though.

Part of the idea of asking here is that I’ve seen videos on YouTube where various PC pundits say that MSFS itself is an excellent torture test, and if the machine can hack running MSFS (2020), it can pretty much handle anything you throw at it.

Haha. I thought about it, but got busy.

Yes, cooling is king, but colder isn’t always better.

This quote is from AMD’s Robert Hallock:
“Yes. I want to be clear with everyone that AMD views temps up to 90C (5800X/5900X/5950X) and 95C (5600X) as typical and by design for full load conditions. Having a higher maximum temperature supported by the silicon and firmware allows the CPU to pursue higher and longer boost performance before the algorithm pulls back for thermal reasons,” Hallock said.

In your motherboard’s manual, search for:

SMT Mode (Hyperthreading - AUTO by default)
SVM Mode (Virtualization - Disabled by default.)

I would also disable this:

AMD Cool&Quiet function
[Enabled] Lets the AMD Cool’n’Quiet driver dynamically adjust the CPU clock and VID to reduce heat output from your computer and its power consumption. (Default)
[Disabled] Disables this function.

1 Like

Cool & Quiet is disabled.
SVM is disabled.
SMT may also be disabled, but it’s late at night here in The Evil Eastern Empire and the granddaughters are asleep, so I’ll check that tomorrow.
 

Far be it from me to argue with him, but. . . .

As an example:

When I first installed my Radeon 6800 video card, one of its “claims to fame” was the “Zero Fan” feature, where the fans wouldn’t even think of spinning until the GPU got right up to 40-or-so degrees. Once the GPU reached that temperature, getting it cooler was a virtual impossibility, even at higher fan speeds.

I disabled the zero-fan feature and set the minimum GPU fan speed to 25%, rising rapidly after 35°C and reaching a max of 100% around 60-65°C.

The result? The GPU never gets hot enough to accelerate the fan except in rare cases, even within MSFS-2020.

I’ve also seen this effect with CPU cooling.  The sooner you start, the easier it is to keep heat under control.

IMHO, it appears to be like trying to push a car uphill.

If you’re already stopped on the side of the hill when you start pushing, it’s almost impossible to get any serious momentum.  However, if you start pushing on flat ground before you get to the hill, your stored momentum carries you up the hill easily.

Thermal gradients appear to work the same way.  If you start with a shallower thermal gradient (you start cooling earlier), you don’t run into the situation where the thermal “hill” is already against you when you start.

I’m not enough of a thermodynamicist to explain (or even understand!) the complex mathematics of it, but my experience is that the sooner you start cooling, the more performance “headroom” you have to play with before you reach that 85-90°C threshold.

It’s just been my experience over the years that this is the way it works.  If you start cooling earlier it doesn’t get ahead of you.
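
I can’t do the real thermodynamics, but a toy lumped-capacitance simulation (Newton’s law of cooling) captures the intuition. Every constant below is invented purely for illustration - nothing here is a measured value:

```python
# Toy lumped-capacitance model of a CPU + cooler, illustrating the
# "start cooling early" idea:  dT/dt = (P - k(duty) * (T - T_amb)) / C.
# Every constant here is invented for illustration, not measured.

AMB = 25.0   # ambient temperature, °C
P   = 120.0  # constant heat load, W
C   = 400.0  # lumped thermal mass, J/°C
DT  = 0.1    # integration step, s

def k(duty: float) -> float:
    """Effective cooling conductance (W/°C), rising with fan duty (25-100 %)."""
    return 0.5 + 3.0 * duty / 100.0

def duty(temp: float, ramp_start: float) -> float:
    """Fan curve: 25 % below ramp_start, climbing to 100 % over ~25 °C."""
    return min(100.0, max(25.0, 25.0 + 3.0 * (temp - ramp_start)))

def simulate(ramp_start: float, seconds: float = 600.0) -> float:
    temp = AMB
    for _ in range(int(seconds / DT)):
        temp += DT * (P - k(duty(temp, ramp_start)) * (temp - AMB)) / C
    return temp

for start in (35.0, 55.0):  # early ramp vs late ramp, same fans otherwise
    print(f"fan ramp starting at {start:.0f} °C -> ~{simulate(start):.1f} °C under load")
```

With these made-up numbers, the curve that starts ramping at 35°C settles around 60°C under load, while the one that waits until 55°C settles around 70°C - same fans, same load.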

I use a fairly aggressive fan curve on my (air-cooled) 7950X3D.
I adjusted it for 90% fan speed and 85°C running Handbrake encoding software, which uses all 16 cores almost to max utilization.

The CPU is around 42°C at idle, 75°C during sim startup and in the World Map, and stabilizes around 55-60°C during flight.

Most of that comes from doing a per-core undervolt, allowing core frequencies to boost longer before pushing into the thermal node of the Frequency-Temp-Voltage triangle.

I’m not suggesting that we run our CPUs at 85°C. All I’m saying is that I think it’s smart to set a cooling benchmark using something like Cinebench or Handbrake. How you adjust your fan curves is up to you.

1 Like

Low-hanging fruit is the right approach. It will become obvious what helps.

CPU access to VRAM? Never knew that was an option.

The danger of extreme overclocking is unreliability and occasional crashes.

1 Like

On my Gigabyte MoBo it’s under “Peripherals”, and there are two settings: one to allow access to memory above 4 GB, and the one immediately following it, which allows the video card to do what it needs to do.
 

I discovered that the more extreme tweaking suggested in the article I referenced actually REDUCED the performance of MSFS-2024 to the point that it was totally unplayable.

However, clearing back to defaults and setting the “low-hanging fruit” optimizations improved things dramatically.

I am going to continue researching this and I will report results.

Thanks!

2 Likes

That refers to RBAR (Resizable BAR). It’s enabled/disabled in the BIOS, and can be used on nVidia RTX 30xx-series GPUs.

All gaming PCs produce an on-screen image by way of the CPU processing data – textures, shaders and the like – from the graphics card’s frame buffer. Usually the CPU can only access this buffer in 256MB read blocks, which obviously isn’t very much when modern GPUs regularly have 8GB of video memory or much, much more.

Resizable BAR essentially makes the entirety of the graphics frame buffer accessible to the CPU at once; where it could once sip, it now guzzles. The idea is that once textures, shaders and geometry are loading in faster, games should run faster with higher frame rates.
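
If you want to check which mode your card is actually in, the size of the BAR1 aperture gives it away. A rough Python sketch, assuming an nVidia card with nvidia-smi on the PATH (AMD users can see Smart Access Memory status in the Adrenalin software instead):

```python
# Rough check of Resizable BAR on nVidia cards: with RBAR enabled, the BAR1
# aperture is as large as the whole framebuffer; without it, BAR1 is 256 MiB.
# Assumes nvidia-smi is on the PATH.

import re
import subprocess

out = subprocess.run(
    ["nvidia-smi", "-q", "-d", "MEMORY"],
    capture_output=True, text=True, check=True,
).stdout

totals = {}
section = None
for line in out.splitlines():
    if "FB Memory Usage" in line:
        section = "FB"        # framebuffer (VRAM) section follows
    elif "BAR1 Memory Usage" in line:
        section = "BAR1"      # CPU-visible aperture section follows
    elif section and (m := re.search(r"Total\s*:\s*(\d+)\s*MiB", line)):
        totals[section] = int(m.group(1))
        section = None

fb, bar1 = totals.get("FB", 0), totals.get("BAR1", 0)
print(f"VRAM: {fb} MiB, BAR1 aperture: {bar1} MiB")
if fb and bar1 >= fb:
    print("Resizable BAR looks enabled (full-VRAM aperture).")
else:
    print("Resizable BAR looks disabled (small BAR window) or undetected.")
```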

1 Like

…more specifically GPU Upload Heap, which became part of the DX12 Agility SDK about a year ago.

" For gamers, the only requirement you’ll need is Resizable-Bar or Smart Access Memory support on both your CPU and GPU. Resizable-bar is the foundation for GPU Upload Heaps since the feature enables Windows to manage GPU VRAM directly."

1 Like

Cool.

Based on the March 2023 article, it could help those folks struggling with VRAM issues. It’s been part of the nVidia drivers since 531.41.
Does the sim utilize the ‘Heap’ code?

Doubtful, since sim updates always revert the DLSS DLL back to v2.4.

1 Like

Only the GPU executes shaders; they’re sent from the CPU. But I understand the “shared” VRAM memory (and the essential locking for write access, etc.).

Regarding the 256MB blocks: 32 transfers for 8GB may depend upon the PCI chipset’s DMA (direct memory access). Bigger blocks make sense now that we are seeing 24GB GPUs.

Btw:
My 1080 Ti has 11GB of VRAM, and Windows shares up to 10GB of system RAM with the GPU, though typically only around 2GB.

1 Like

Got it. I just wondered what was meant by “from the graphics card’s frame buffer” in the RBar description.

Aren’t frame buffers typically in memory? Is there some other memory on a graphics card that isn’t VRAM? Maybe something like L1 cache tied directly to the GPU?

The AMD drivers, and every AMD motherboard I’ve seen (both of them :wink:), have RBAR (Resizable BAR) - and why does that remind me of steel rods for concrete?  I’m tempted to hack the firmware to rename it ReBAR.

There’s another memory setting (allow memory access above 4 GB) that is a prerequisite for it and must be enabled for RBAR to work.

You can then go into the Adrenalin driver suite for the AMD graphics card, verify that the driver sees the entire VRAM on your card, and also check that Smart Access Memory is enabled.

Yep, the AMD equivalent to nVidia’s RBAR is SAM.
I wonder if they’re serving Mac n’ Cheese?

It’s RBAR in the BIOS and SAM in the driver utility.