Pulled the late trigger on 5800X3D - will I regret it?

Nope, as the technology only covers memory access to the GPU by the CPU and has nothing to do with the games themselves. It essentially allows the CPU to access the GPU’s memory in bigger chunks. As per Digital Trends…

What is Smart Access Memory?

This is a memory technology that AMD officially introduced in 2020 with its RX 6000 line, although it’s based on Resizable Bar technology, which has been around for a while. Basically, it’s a new design that allows the CPU in a computer to access much more of a GPU’s RAM than it previously could.

From your link.

Nvidia also has a block on Rebar functionality.

There is a 3rd party tool that can force ReBAR, called Nvidia Profile Inspector.

I honestly thought you would have to code for it with memory allocation. It seems not, but there are still a few caveats.

Why then do I see Rebar enabled in my MSI BIOS, and Rebar enabled in my NvCP default profile as well?

You cannot run a computer with the information supplied in the BIOS alone. It takes additional software to connect the dots. This is what drivers are about. Windows comes with its own certified drivers, but these are not all-encompassing. For special-case use of a hardware device, you need drivers to tell the application how to run it. The alternative is for the developer of the application to write their own support directly into the programme.

Talk to any Linux user about driver support and you will get a brief 3 hour lecture on the subject.

I’ve had the CPU for two weeks now but only got to dig deeper into it this past weekend. There is a problem that wasn’t there before: I have very regular stuttering, like every 0.25 seconds. It’s clearly visible on the FPS graph and affects only the main thread. This happened after the upgrade and only in FS. I was running the system without PBO, XMP and ReBAR back then, and the behaviour was already obvious. Turning on the previously mentioned options did not help. I tried DX12 and DX11, no improvement. The interesting thing is that the game starts quite normally without stuttering (in the hangar, for example) and then, at the first jump into the cockpit, the stutter starts immediately, turning the PC into a cricket, as the coils on my 6900XT are very audible and thus produce short “singing pauses” between every stutter.

I have to investigate further, but my guess is FSLTL is the culprit because, as soon as it starts injecting, the rhythmic spikes in the main thread appear and do not stop anymore. Also, when leaving a flight and returning to the main menu, general performance as well as the globe view tanks from 144 FPS to around 20 to 30, sometimes even less than 10, while FSLTL is still running. On some occasions I also observed that FSLTL sticks a plane “through” the globe. When I zoomed out further, I noticed that it was not only one but a bazillion planes stuck into each other, “penetrating” the globe :rofl: Something similar also happened earlier this year when FSLTL was populating an airfield and playing its FR24 routines: in some cases when departing or landing, I would run into such a pile of planes, parked in the middle of the runway and tanking performance heavily. I also use AIGround and AIFlow to keep FSLTL from going crazy, but apparently these don’t help.

Did some of you make similar observations?

EDIT: Probably solved! I think I have to take back my rant. I am going to assess a bit more, but the problem seems to have been a PBO2 curve that was tuned a tad too lean, with -30 in the Curve Optimizer. I set it to -25 and now everything seems to run just fine in the traffic-heavy environment of Orlando Intl. With DX12 I could gain another 5-6 FPS, and I am now at 40-45 FPS in a very big and busy airport, and very happy. The CPU swap in general gave me another 15 to 20 FPS in the game. The thing in the screenshot, however, is an original FSLTL “feature” and stops as soon as the injector stops. Do the devs know about that?

And just out of interest: which PBO settings do you guys use? I am currently running

PPT 100
TDC 70
EDC 100
CO -25

Should I still crank that up a bit?

Happy for you on your FPS gain from the CPU upgrade.

Have you read the post from further up in this exact thread?

Thank you for pointing that out. I missed this. Anyway, my settings work, but I will experiment a bit with other ones over the next few days.

BTW: I finally did it and got myself some Virpil ACE pedals. Couldn’t say no for a used price of 250€. And they ain’t coming any cheaper.


I’m very much on the fence about undervolting my 5800X3D.

What’s the purpose? To allow the CPU to remain in boost longer - essentially, ‘overclocking’ without actually overclocking. It’s more beneficial for CPUs with high TDP that tend to run hot at higher boost frequencies. The 5800X3D isn’t one of those.

Why does a CPU lower its clock speed? In response to thermal increases caused by higher voltages.

This CPU can run at 85°C without throttling. In fact, AMD recommends that it be allowed to run at higher temps. Even without undervolting mine never got above 70°C, and to my casual observer’s eye my clock speeds remained at or near the boost level.

As an experiment, I’ve been undervolting mine, and while temps did drop to around 55°C while flying, I haven’t noticed any appreciable increase in FPS or clock speed. In fact, with a casual glance, it ‘seems’ like my clock speed isn’t boosting as much. Flights are smooth, though.

I want to run some flight tests with some monitoring software that will track temps, load, and per core clock speeds, with the only variable being PBO voltage settings.
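A rough sketch of what such a logger could look like in Python, assuming the third-party `psutil` package (the function names `summarize` and `log_run` are my own, not from any tool mentioned here; temperatures are left out because `psutil` does not expose them on Windows):

```python
import csv
import time

def summarize(samples):
    """Mean/min/max of a list of per-sample readings (e.g. core clocks in MHz)."""
    return {
        "mean": sum(samples) / len(samples),
        "min": min(samples),
        "max": max(samples),
    }

def log_run(path, duration_s=600, interval_s=1.0):
    """Sample overall load and per-core clock speeds once per second into a CSV.

    Requires the third-party psutil package (pip install psutil).
    """
    import psutil
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t_s", "load_pct", "core_mhz..."])
        start = time.time()
        while time.time() - start < duration_s:
            # One frequency reading per logical core, in MHz.
            freqs = [c.current for c in psutil.cpu_freq(percpu=True)]
            writer.writerow([round(time.time() - start, 1),
                             psutil.cpu_percent(), *freqs])
            time.sleep(interval_s)
```

Running one identical flight per PBO setting and comparing the summaries afterwards would show whether mean clocks actually change, rather than relying on a glance at a monitoring window.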

Currently my settings are:

PPT = 100
TDC = 70
EDC = 100
CO = -30

I just disabled PBO to see if I notice any changes. I did switch from water-cooled to air-cooling the CPU recently. It didn’t seem like that affected the temps either way. But it is a variable…

I can only tell you that it is not about overclocking, although the principle is similar: it is about maintaining a stable and reliable die. We are currently more or less at the limit of physically feasible lithography, as far as I understand it. This means that simply putting more transistors on a die will sooner or later lead to unreliability due to the extremely small tolerances between the individual transistors. This can lead to bottlenecking of individual transistors or whole cores, voltage bridges, and more wear and tear because of heat, and so on. Don’t hold me to that; there may be people around who really know their physics and see through all this. But that’s what I understood.

Lowering the voltages gives more headroom against voltage and heat peaks and helps the integrity of the cores; they will run with fewer processing errors. Whether it will run faster depends heavily on the game. AFAIK FS2020 definitely profits, but maybe not as much as other titles; it is at least not in the top places of such a list, more in the middle. And it is questionable whether that is down to PBO or to the 96 MB of 3D V-Cache. For my taste, I have a much smoother, less stuttery experience now. No one should expect a gain of 20+ FPS from only switching the CPU. After all, FS2020 is still horribly coded when it comes to accessing and efficiently using multiple cores. But then, the sim is just in the tradition of nearly every other game so far.

Also, it depends on what resolution you use. The CPU makes a much bigger difference at 1080p than at 1440p or even 4K. I have a 32:9 1080p screen, which on the other hand has about the same number of pixels as a classic 2560x1440 screen (~4 MP), so my gains will already be more limited than for people with a normal FHD screen.
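For reference, the raw pixel counts involved (assuming the 32:9 1080p panel is 3840x1080, which is the common width for that aspect ratio):

```python
# Pixel counts for the resolutions discussed above.
# 3840x1080 is an assumption for the 32:9 panel, not stated in the post.
resolutions = {
    "3840x1080 (32:9 1080p)": 3840 * 1080,
    "2560x1440 (16:9 1440p)": 2560 * 1440,
    "1920x1080 (16:9 1080p)": 1920 * 1080,
}
for name, px in resolutions.items():
    # The 32:9 screen (~4.1 MP) sits close to 1440p (~3.7 MP)
    # and roughly double plain FHD (~2.1 MP).
    print(f"{name}: {px / 1_000_000:.1f} MP")
```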

By using the curve optimizer of -25, my CPU is running up to 10°C cooler for me (60-65°C now).

I’m also seeing it hold 4.45 GHz on all cores in busy, complex scenery areas with a decent amount of AI traffic.
Given that smoothness, rather than high FPS, is my goal, I’m happier since I started using CO than before using it.

That’s why I want to run some actual tests with minimal variables, i.e., same plane and same sim settings (cache settings, weather, time of day, traffic, location: airport, parking spot and airborne at different altitudes, etc.), while monitoring chip parameters over time.

I think subjective perception is valuable, but not ideal when it comes to optimizing things like voltage curves.


Not subjective.

Always run the same test flight with the Fenix A320 from EGMC rwy 23 via LAM to EGLL rwy 27R.
Same static weather preset, same time of day, same payware scenery, same MSFS settings.

Perform these tests late on a weekday evening to avoid any variability from server load.

Tried with SMT turned off yesterday, but the sim wasn’t as happy as with it turned on. Something I didn’t expect for a program that hammers a single thread.

What I mean is the perception of performance metrics based on snapshot views.

Sometimes I look at the hardware monitor and see my core frequencies mostly maxed out. Sometimes they vary quite a bit. What’s causing that? There’s no way to know what’s really going on without at least some sort of statistical analysis, which at the very least means data collection with good control of variables.
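As a minimal example of the kind of statistical check meant here, using made-up per-second clock samples (the numbers and run labels are hypothetical):

```python
import statistics

# Hypothetical per-second clock samples for one core, in MHz.
run_a = [4450, 4440, 4450, 4430, 4450, 4440]   # e.g. one PBO setting
run_b = [4450, 4200, 4450, 4150, 4450, 4250]   # e.g. another setting

for name, run in [("run_a", run_a), ("run_b", run_b)]:
    mean = statistics.mean(run)
    stdev = statistics.stdev(run)
    print(f"{name}: mean={mean:.0f} MHz, stdev={stdev:.0f} MHz")

# Two runs can have similar means, but a much larger standard deviation
# points to intermittent clock dips that a single snapshot of the
# monitoring window would completely miss.
```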

In your example you didn’t need to see much to tell you that turning SMT off resulted in clearly lowered performance. I can see the same thing if I turn off XMP. But when it comes to PBO, the results are much less clear unless I take a longer look, and a deeper dive into the test results.


Electric current flows more easily at lower temps. The hotter a metal gets, the more its atoms vibrate. This increases collisions between electrons and the lattice, reducing the efficiency of current flow: the material’s resistance rises with temperature.

A very good video here. He has a good visual example of how electricity flows through a material @ 6 minutes 21 seconds.

Thanks, but it was a rhetorical question.
And actually…

Energy isn’t carried from source to load via electrons. Energy doesn’t even flow down the wires. Instead, electrical energy travels from the electrical source to the electrical load via an electromagnetic (EM) field in the space surrounding the source, wires, and load.

Look at the picture below of a DC circuit consisting of a battery, some wire and a resistor. The green arrows represent the magnetic field that arises due to current flow. The red arrows represent the electric field due to the voltage source. The blue arrows represent the energy flux density, or the Poynting vector, which is the cross product of the electric and magnetic fields. The Poynting vector can be thought of as the rate of energy transfer per area.

Notice the flow of energy is from the battery to the resistor. Also notice that the energy flows into the resistor not from the wire but through the space surrounding the wires.

Energy flow in a DC circuit

It is true that voltage is proportional to current through a given resistance (Ohm’s law, V = IR) and that resistance produces heat due to collisions between electrons and the atoms of the conductive medium.

I think the main problem with closely packed transistors (like on a CPU die) is that field densities overlap, causing computational errors when induction occurs in adjoining traces.

100°C won’t melt the CPU. Power limits exist to prevent those computational errors, which of course can lead to software failures (CTDs).


@BegottenPoet228

Since you want to take a more technical approach to performance, look into trimming down the latencies of the timings supplied in the XMP profile. XMP is one-size-fits-all within the category the supplier creates. If you dig deeper and tighten the timings, you can gain the highest returns. Of course, power plays a part in this too, as power can cause instability.

Buildzoid did a really good breakdown of each timing limitation for DDR5 here. Playing with the timings will inevitably create instability as you deviate from the safe values given by the manufacturer. I use Karhu to check stability.


I have HP V10 DDR4-3600/CL14
BIOS set for XMP2
My understanding (I could be wrong) is that XMP2 uses the SPD profile settings provided by the RAM. I’m loath to mess around with that.

Do you see anything here I should tweak?

CPU-Z Memory Timings

Every memory kit has different tolerances. As I said, the settings that come in XMP are slack to cater for all kits of that category. The best-case timings vary depending on what you want to use your system for. Large-file workloads rely less on latency and more on throughput, while games make small snatch-and-grab accesses. This is why low latency is king over frequency, to a certain extent; at some point you have to accept higher latency for a speed gain.
AIDA is the best app I know for performance readouts. What you are showing there is CAS latency, and there is much more to timings than that. The most important thing is stability. You can get a low-latency result from a setup with bad stability, but you will find it stutters as it tries to resolve errors. If your system is stable, then latency is your next best gain.
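As a sketch of how the headline CAS figure relates to actual latency (this computes first-word latency only; an AIDA-style ~60 ns figure also includes the memory controller and fabric on top of it):

```python
def first_word_latency_ns(cas_cycles: int, transfer_rate_mts: float) -> float:
    """CAS latency in nanoseconds: CL cycles at the DDR clock,
    which is half the transfer rate (two transfers per clock)."""
    clock_mhz = transfer_rate_mts / 2
    return cas_cycles / clock_mhz * 1000   # cycles / MHz -> ns

# DDR4-3600 CL14 vs DDR4-4000 CL18: the nominally "faster" kit is
# actually slower to the first word, which is what small snatch-and-grab
# game accesses feel most.
print(first_word_latency_ns(14, 3600))   # ~7.78 ns
print(first_word_latency_ns(18, 4000))   # 9.0 ns
```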

AIDA64 memory latency readout

This is my AIDA64 readout. For my kit, which is dual-rank the same as yours, I get 60 ns. This is towards the best you can get. I have seen some report 58 ns with stability proof attached, but I have yet to see anyone post better than that with proof of stability. This is of course on an AMD system, where the memory controller adds more latency than on Intel. Since you mentioned PBO, I would hazard a guess you are on an AMD system.

By all means run a bench on yours with AIDA. AIDA64 is available as a free trial for 28 days.

Since this is derailing the thread, I would suggest using a PM if you want to talk more. I can also supply a Discord link if you want me to look at your timings. Adjusting memory timings does not damage the kit, but you should always back up your most stable profile to get back up and running quickly. I have yet to touch an AM5 system, and my last Intel was a 6700 in 2017.


Thanks for that. I don’t want to derail the thread anymore either, so I’ll run Aida64 and PM you the results.


I found the solution for the stuttering, as it came back later on every occasion; it was not always noticeable, but still visible on the main thread. I had already reset PBO and the UV on my GPU; nothing did the trick. Then I checked other threads and noticed that some people had more or less regular latency issues with the last SU if they were also running FSUIPC at the same time. I updated the latter, and the stuttering was gone. My version was quite old, from the beginning of the year, because I didn’t fly for a very long time until now. Also, the graph doesn’t show anything anymore, which means it’s gone for good. Before, the spikes in workload could still be seen and often (but not always noticeably) affected the flow of frame times massively.

BTW guys, I don’t think you’re hijacking the thread that much. These are all valid points regarding SMT, PBO and the like. Only when it comes down to electrical physics and higher maths does it maybe go a bit too far, but other than that, I think deeper insights into how and why these technologies work are very helpful and give a better understanding.
