Ryzen 5600X BIOS settings



I haven't used AMD since the Athlon days, so the BIOS settings are very confusing to me and I need some advice.  My motherboard is an Asus Strix B550-F with 32GB of DDR4 3600.

 

I'm not trying to overclock too much at this point, as I'm only using the stock HSF, and I don't see much point since my graphics card is still a GTX 1080.  I'm just trying to find out which setting is better to use.

 

  1. If I leave the BIOS at default, core speed fluctuates from 3.6GHz to 4.6GHz.  I understand that's base + boost.
  2. If I set TPU1 in the BIOS, which I think is Asus's auto overclock, the cores are locked at 4.2GHz.

 

Even though #1 benchmarks higher, I seem to get some pauses when executing tasks in graphics apps like Maya.


I'd make sure the XMP profile for the memory is selected in the BIOS, otherwise the RAM will run at only around 2666. You can check in Task Manager in Windows whether the memory is running at its rated 3600 speed or not.
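If you'd rather check the applied speed from a script than from Task Manager, WMI exposes it. A sketch, Windows-only: the `Win32_PhysicalMemory` class reports `ConfiguredClockSpeed` (the speed currently in effect) and `Speed` (the DIMM's rated maximum), so the former is the one that tells you whether XMP/DOCP actually took.

```python
import subprocess

def parse_dimm_speeds(wmic_output: str) -> list[int]:
    """Pull the numeric MT/s values out of wmic's column output."""
    return [int(ln) for ln in (l.strip() for l in wmic_output.splitlines()) if ln.isdigit()]

def configured_dimm_speeds() -> list[int]:
    """Windows only: ask WMI what speed each DIMM is actually running at."""
    out = subprocess.run(
        ["wmic", "memorychip", "get", "configuredclockspeed"],
        capture_output=True, text=True,
    ).stdout
    return parse_dimm_speeds(out)

# Sample output in the shape wmic prints it; with DOCP/XMP enabled you'd expect 3600 here.
sample = "ConfiguredClockSpeed\n3600\n3600\n"
print(parse_dimm_speeds(sample))  # → [3600, 3600]
```

If the list comes back as `[2133]` or `[2666]`, the XMP/DOCP profile is not applied.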

 

I wouldn't bother overclocking a Ryzen straight away, as it does a decent enough job at stock.  That keeps a little performance in reserve for when you upgrade the cooler/graphics card later down the line and can get a small boost from an overclock.


Edited by Gerg

4 hours ago, Gerg said:

It's called DOCP instead of XMP, and that's on.  And that wasn't the question.

3.6 boosted to 4.6, or fixed at 4.2: which is faster in normal usage is the question.

It's not really overclocking, as the temperature is actually lower with TPU.


Edited by Taz1004

Stock will be faster in general, as it will boost above 4.2 in apps that use only a few cores.  It might also boost above 4.2 under heavy loads depending on how well it's cooled, but that varies from system to system.

 

 


Edited by Gerg

Hi Taz,

 

I have a 5600X. This is only my opinion. Set your DDR4 to XMP/DOCP (same thing). Set the DRAM speed manually, in your case 3600MHz. Set the FCLK manually, in your case 1800MHz. Set the DRAM voltage manually to 1.35V. Leave everything else on auto. In my opinion this ensures DRAM stability and stops "auto" going odd in the background.

 

You can download Ryzen Master if you wish; it keeps an eye on things. I find on mine that 4.6GHz is more than enough for games etc. I run manual settings in Ryzen Master of 4600 on all cores with the voltage capped at 1.2V. That keeps it cool and reduces unnecessary power usage, which will help if you're on a stock cooler. The voltage isn't fixed at 1.2V, it's capped.

 

That's how I run mine; each to their own.


1 hour ago, Bossco82 said:

Thanks.  I did some testing on my own since I couldn't find any information on this; everything out there just talks about RAM.  I tried subscribing to the AMD forum, but their email verification crap doesn't work.

 

I did undervolt the CPU in the BIOS by 0.1V, and that seems to lower the temperature by 5°C with no performance loss.

As for clock speed, default benchmarks higher in Cinebench but shows no difference in DCS frametime.  I'm sure that's because my GPU is the limit.

But TPU1 fixed at 4.2GHz seems to run at a lower temperature, and (it may be placebo) seems smoother in DCS.  So TPU1 with a 0.1V undervolt is what I have at the moment.
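For what it's worth, a temperature drop from a small undervolt lines up with the first-order CMOS power model, where dynamic power scales with voltage squared times frequency. A rough sketch (the 1.35V baseline is an illustrative assumption, not this chip's actual VID, and the model ignores static leakage and boost feedback):

```python
def dynamic_power_ratio(v_new, v_old, f_new=1.0, f_old=1.0):
    """First-order CMOS model: dynamic power ~ C * V^2 * f.
    Ignores static leakage and boost-algorithm feedback, so treat
    the result as a ballpark, not a prediction."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

# A 0.1 V undervolt from an illustrative 1.35 V down to 1.25 V at the same clock:
print(round(dynamic_power_ratio(1.25, 1.35), 2))  # → 0.86, i.e. ~14% less dynamic power
```

Roughly 14% less power at the same clock is consistent with a few degrees off the die temperature on a small cooler.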


Edited by Taz1004

Have you enabled PBO?

 

That's what drives the CPU to its limits within safe borders, and I would not consider it overclocking in the traditional sense, as it's meant to be used by AMD.

 

I just put a 5600X together and did some tests. The difference with and without PBO is there, but it is not that dramatic; you will likely see a 100-200MHz delta versus default settings. How much more PBO can give you depends greatly on your CPU cooling solution.

 

PBO = Precision Boost Overdrive

 

edit:  I ran Cinebench R15/20/23 up and down the ladder and the difference was 4.4GHz vs. 4.6GHz, i.e. 200MHz, most of the time.

 

I assume, since DCS pulls far less than Cinebench does, you will likely have the cores locked at 4.6GHz with PBO enabled, out of the box, no fiddling needed.

 

 

I have to say, I am tempted to retire my 8700K and move to an AMD CPU + board; this 5600X smoked my 5GHz 8700K in every aspect, without PBO!  Enable PBO and the delta grows even more, i.e. frametimes will become lower in DCS terms.


Edited by BitMaster

Gigabyte Aorus X570S Master - Ryzen 5900X - Gskill 64GB 3200/CL14@3600/CL14 - Asus 1080ti EK-waterblock - 4x Samsung 980Pro 1TB - 1x Samsung 870 Evo 1TB - 1x SanDisc 120GB SSD - Heatkiller IV - MoRa3-360LT@9x120mm Noctua F12 - Corsair AXi-1200 - TiR5-Pro - Warthog Hotas - Saitek Combat Pedals - Asus PG278Q 27" QHD Gsync 144Hz - Corsair K70 RGB Pro - Win11 Pro/Linux - Phanteks Evolv-X 


3 hours ago, BitMaster said:

Even PBO differs from Asus to MSI to ASRock.  The PBO limit under the advanced settings is used to undervolt on MSI, but that's under CPU offset on Asus, where PBO is recommended to be turned off along with CBS when undervolting.  Which is another confusing part.

 

Currently my PBO is set to Auto, as I'm not looking to overclock.


To actually get back to your original question: 

In the majority of cases, the default 3.6-4.6 will be the faster solution.

The system will allow all cores to go to 4.2-4.4GHz as long as the temperature stays OK.

In that state, your 5600X will draw at most 73-75W when fully stressed.

 

When you employ TPU with its two settings, you basically allow Asus's AI chip to auto-OC your rig.

You said overclocking wasn't your primary goal for now (which I understand: get it running properly before you fiddle with it 🙂).

TPU on Asus-Intel is not my pick; I have a few Asus Intel boards under my regime and I don't even bother with TPU, it's too brutal with volts.

 

With your intention not to overclock too much at first in mind, I think PBO is the much better option vs. Asus's TPU settings.

 

To find out which solution produces which results, try them out and run HWiNFO with full sensors; you can see the wattage difference under the same load (Cinebench R23, for example) and how much temperature that causes.

 

I can tell you right away that the included cooler does not really allow PBO. It works for an R23 run but gets borderline hot. If you use the included fan, leave PBO off if you intend to run things that really use all cores for a serious amount of time (90°C for 90 minutes ain't cool).

 

I am almost sure that TPU 1 will exceed the PBO wattage, and thus temperatures (TPU 2 certainly will).

 

Use HWiNFO and check for yourself how much the cooling can take in your scenario. The included fan may handle DCS with PBO just fine, but not a brutal all-core Handbrake session for hours.


3 hours ago, BitMaster said:

Opinions on AMD settings seem to be all over the place.  I don't like these guys' videos because they never get to the point, but the question this video addresses is similar to my issue.  I seem to be getting frame stutter at default settings when the cores auto-throttle.  But again, default benchmarks higher.

AMD's Precision Boost Overdrive — Turn PBO OFF to Improve Performance? - YouTube

 

All TPU1 does is fix the core multiplier, and it doesn't increase wattage.  It also generates less heat, because the CPU seems to generate the most heat when running at 4.6GHz, and with TPU1 it doesn't get that high.  Another thing I don't get: with the default 3.6-4.6, the cores never run in between.  It's either 3.6 or 4.6.
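For reference, a fixed multiplier maps to clock speed as multiplier × base clock (BCLK, normally 100MHz on these boards), so a locked 42× is where the fixed 4.2GHz comes from. A trivial sketch:

```python
def core_clock_mhz(multiplier: float, bclk_mhz: float = 100.0) -> float:
    """Effective core clock from a fixed multiplier and a base clock (BCLK)."""
    return multiplier * bclk_mhz

print(core_clock_mhz(42))  # → 4200.0 MHz, the fixed TPU1 state described above
print(core_clock_mhz(46))  # → 4600.0 MHz, the single-core boost ceiling
```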

 

Undervolting, however, decreases power and temperature, and as a result the cores run at higher speeds.

 

It seems odd that this board and CPU have been out for over six months and I still can't find solid information.


I think I figured out the issue.  It was heat.  All the reviews said the stock HSF is fine for "normal usage" if you're not overclocking.  They must be talking about browsing the internet.  The stock HSF is not fine for gaming; not for a CPU-intensive game like DCS.

 

I did have a Corsair H60 AIO that I used for an i7-4790K.  I was saving it for when I build another server with that old chip, and most reviews said the stock HSF was "fine", so that's what I used on the 5600X.  But now that I've put that AIO on, the stutter is gone and the temperature is 15°C cooler.  And yes, the stock HSF was mounted properly: the thermal paste distribution was very even when I took it off, and idle temperature was around 40°C.

 

So when the cores boosted and reached over 72°C, they started throttling back.  And when they throttle back, there's no in-between; they throttle back to 3.6.  Which I think is the cause of the stutter.  That's also consistent with my other results: TPU1 was smoother because its temperature was lower, and undervolting was smoother because it lowered the temperature.
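That "no in-between" behaviour acts like a two-state throttle with hysteresis, which is exactly the kind of thing that shows up as stutter. A toy model (the 72°C trip point comes from the observation above; the 65°C resume point and the bang-bang jump are illustrative assumptions, not AMD's actual boost algorithm):

```python
def throttle_step(temp_c, throttled, trip=72.0, resume=65.0):
    """One step of a toy bang-bang throttle: run at boost clock until the trip
    temperature is hit, then drop straight to base clock until the die cools
    back past `resume`. Returns (clock_ghz, new_throttled_state)."""
    if throttled:
        throttled = temp_c > resume   # stay throttled until it cools off
    else:
        throttled = temp_c >= trip    # trip once the die gets too hot
    return (3.6 if throttled else 4.6), throttled

clocks, state = [], False
for t in [60, 70, 73, 68, 64, 60]:    # a hypothetical temperature trace
    clock, state = throttle_step(t, state)
    clocks.append(clock)
print(clocks)  # → [4.6, 4.6, 3.6, 3.6, 4.6, 4.6] — hard 1 GHz jumps, hence the stutter
```

A better cooler keeps the trace below the trip point, so the state machine never fires, which matches the AIO result.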

 

Unless the only thing you're doing is browsing the internet or YouTube, get a better HSF, even if you're not overclocking.


Edited by Taz1004

Hi Taz,

 

I was limited to the stock heatsink when I first got my 5600X. I used Ryzen Master to set all cores to 4200 @ 1.1V vcore. It was fine even in DCS, as it only uses 2 cores for heavy usage, i.e. above 4GHz. Ryzen Master is a good utility for AMD, as you can alter your voltages on the fly in Windows if you experience problems.

 

If you're now using an AIO, I'm guessing, but I bet you can leave pretty much everything CPU-related on auto. Using Ryzen Master is not better than the BIOS for controlling CPU speeds and voltages; it's just a lot more convenient once you get your head around it.


41 minutes ago, Bossco82 said:

Thanks, but I'm not sure how long ago that was; DCS 2.7 uses all cores now.  It seems to use physical cores more than logical cores, but there is still a frametime difference between turning SMT on or off.  I am aware of Ryzen Master, but I prefer to do it in the BIOS.

 

Also, I'm guessing you haven't tried undervolting?  There's no excuse for not doing it, even with the stock HSF.  Mine benchmarks at 5080MHz with the same voltage and heat, versus 4700MHz in Auto.  I used the exact settings he used in the video below, with -30.

 

 


Edited by Taz1004

I'm on DCS 2.7. I see all cores loaded, but only a heavy load on 2 cores; I alt-tab out and can see the usage history in Ryzen Master. I set mine to 4650 with the undervolt capped at 1.2V. This runs DCS 2.7 just fine for me: a solid 60fps using vsync.

 

I haven't come across that video, though. Thanks for sharing; I'll check it out and maybe try those settings.


2 weeks later...
On 6/4/2021 at 2:51 AM, Taz1004 said:

I'm not trying to overclock too much at this point

Have you tried the Asus AI Suite? I believe it will conservatively automate the overclocking process...

AMD Ryzen 5 5600X; ASUS ROG Strix X570-F, Corsair Vengeance 64 GB (2x 32GB) 3600MHz; Seagate FireCuda 510 500GB M.2-2280 (OS); Samsung 860 EVO 2TB M.2-2280 (DCS); MSI GeForce RTX 3090 SUPRIM X 24GB OC GPU. TM Warthog Hotas; T.Flight Pedals; DelanClip/Trackhat.


On 6/9/2021 at 11:39 PM, Taz1004 said:

Thanks, but I'm not sure how long ago that was; DCS 2.7 uses all cores now.  It seems to use physical cores more than logical cores, but there is still a frametime difference between turning SMT on or off.

Just for the record, the way Windows shows CPU usage is often not accurate. An application or game might use only a single core, and it will still show a roughly even load across the board.

 

I can't recall what the special CPU core utilization accounting method is called, but the best source I found is here (at about the middle of the blog article):

http://blogs.technet.com/b/winserverperformance/archive/2009/08/06/interpreting-cpu-utilization-for-performance-analysis.aspx

 

The problem is that even a single-threaded application can be switched between cores constantly (several times per second), so, for example, a six-core CPU under maximum single-threaded load can show ~16.6% utilization on each core.
And while it would be true that each core is doing real work, the result is the same as having only one busy core; the workload is just spread out.
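The arithmetic behind that 16.6% figure is simple: a fully busy thread time-sliced evenly across N cores averages 100/N percent on each of them. A quick sketch:

```python
def average_per_core_pct(busy_threads: int, cores: int) -> float:
    """Average per-core utilization (%) when `busy_threads` fully-busy threads
    are time-sliced evenly across `cores` cores by the scheduler."""
    return min(busy_threads, cores) / cores * 100

# One pegged thread on a six-core CPU reads as ~16.7% on every core,
# even though one core's worth of compute is completely saturated.
print(round(average_per_core_pct(1, 6), 1))  # → 16.7
```

So a per-core usage graph alone can't distinguish "six lightly loaded threads" from "one saturated thread bouncing around".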

And of course it's possible for applications to override this and set a specific core affinity (no idea if DCS does this and, if so, to what extent), so it would show only two or three cores at 100% load.
There is also CPU core parking, which puts some cores to sleep, distorting things even more.

 

Different CPU types can show different utilization for the same software due to differences in architecture, so it's really hard to get useful information from a resource monitor's CPU usage readout (such as Task Manager's).


I remember reading some comments here in the forums that DCS 2.5, like DCS 1.5 before it, utilizes no more than 2-3 cores at maximum (I think the physics isn't multi-threaded, but things like audio are likely running on a separate thread).

...I really doubt anything has changed for the latest 2.7 iteration.

 

Even in a world where 16-core processors will soon become mainstream, we'll still be stuck prioritizing the strongest single-core performance when choosing a CPU for DCS, unfortunately.

Situations where a quick 4-core (i7 7700K) easily destroys a slower 8-core (Ryzen 7 1700) will remain, which is why overclocking can still matter (the margins are smaller these days, though).


 


Edited by LucShep

CGTC Caucasus retexture mod  |  A-10A cockpit retexture mod  |  Shadows reduced impact mod  |  DCS 2.5.6  (the best version for performance, VR or 2D)

aka Luke Marqs; call sign "Ducko"

Win10 Pro x64 | Intel i7 12700K (@5.1/5.0p + 3.9e) | 64GB DDR4 @3466 CL16 (Crucial Ballistix) | RTX 3090 24GB EVGA FTW3 Ultra | 2TB NVMe (MP600 Pro XT) + 500GB SSD (WD Blue) + 3TB HDD (Toshiba P300) + 1TB HDD (WD Blue) | Corsair RMX 850W | Asus Z690 TUF+ D4 | TR PA120SE | Fractal Meshify C | M-Audio USB + Sennheiser HD-599SE | 7x USB 3.0 Hub | 50'' 4K Philips 7608/12 UHD TV (+Head Tracking) | HP Reverb G1 Pro (VR) | TM Warthog + Logitech X56 

 


3 minutes ago, LucShep said:

I didn't use Task Manager.  I used HWMonitor's core usage, temperature, and clock readings.  And as I mentioned, there was a definite performance difference between SMT on and off, suggesting more cores improve performance in 2.7, whereas in 2.5 it made no difference.


From most comments/feedback, DCS 2.7 is heavier on the CPU (and GPU) than previous iterations, which is why people with mid-range and slower systems are getting the short end of the stick.

 

While HWMonitor is far more accurate than Windows Task Manager, and considering you have one of the strongest single-threaded CPUs of the current day (capable of dealing with it), perhaps what you're seeing is higher CPU usage spread over more cores, due to how Windows spreads the workload across cores (as said in the previous post), regardless of single- or multi-threaded application?


Edited by LucShep


I checked my usage using Ryzen Master recently; I'm on a 5600X with a 6800 XT. On the Supercarrier with a full deck I saw two cores bouncing around 4GHz plus. The other four cores were bouncing between 1.5-2.5GHz. That was just monitoring it on auto, nothing changed in the BIOS or RM.

The cores were not switched around either; they stayed the same while I was using DCS.

Ryzen Master shows an easy-to-follow graph. That's on DCS 2.7. I only really shared this as an example that the 5600X can handle DCS 2.7 on a flat panel at 4K@60Hz. I don't use VR, so I can't chip in regarding that.


Edited by Bossco82

This was captured using Afterburner, as it shows things best in graph form.  First is VR, second is 2D.

 

VR.jpg

You can see that cores 1 and 2 have completely different-looking graphs than any other core, suggesting they have their own tasks.  Cores 3 and 4, however, are mirror images: when core 3 is up, 4 is down, and when 4 is up, 3 is down.  This suggests cores 3 and 4 are on the same task.  Cores 5 and 6 are also mirrors.  Based on the track file I played, cores 5 and 6 seem to respond whenever the plane maneuvers.  Core 11 seems to respond when I use weapon systems, core 9 when there are radio messages.  Of course these are my guesses based on what's happening in the game, because we'll never know how it's distributed unless we have the SDK.

 

Now Non-VR shows something different.

NoVR.jpg

Cores 1 and 2 are mirrors.  And unlike VR mode, cores 3 and 4 are virtually unused.  So we can assume cores 3 and 4 are used for the VR compositor, and my guess is that core 2 was used for the extra draw calls for the VR right eye in VR mode.

 

So in non-VR, I'm guessing that DCS uses a minimum of 4 cores, and turning SMT (or HT) on or off probably wouldn't affect performance.  But in VR it uses more than 6 threads, so using SMT/HT did show extra performance, at least in my case.  If you have 8 or 12 physical cores, it may not.  It all depends on the hardware, but I was just trying to show that it uses more than just 2 cores.
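One way to make the "mirror image" observation quantitative, if you export per-core usage samples from your monitoring tool, is to correlate pairs of cores: a strongly negative coefficient means two cores are trading one task back and forth. A sketch with made-up usage numbers (the sample values are illustrative, not a real capture):

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical usage samples (%) for two cores that alternate like cores 3 and 4 above.
core_a = [80, 20, 75, 25, 85, 15]
core_b = [20, 80, 25, 75, 15, 85]
print(round(pearson(core_a, core_b), 2))  # → -1.0: one task bouncing between two cores
```

Cores with genuinely separate tasks should correlate weakly, if at all.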

 

Also, if you are using a 3rd-party app like Voice Attack, it will likely use its own process.


Edited by Taz1004

I also wanted to show the same test run on Assetto Corsa, which is known to be multi-threaded.

 

This is non-VR, which shows pretty much the same thing as DCS non-VR.

ASC_NoVR.jpg

 

The VR version below is also pretty much the same as DCS's VR mode, just with core 1 and cores 5 and 6 exchanged.

However, it showed something interesting.  I thought cores 6, 7, 9, and 11 were not used, as they showed virtually no usage, until an AI car came and slammed into me.  That's the small spike in the middle of the graph.  So it may appear that cores are not being used until the specific event those cores are assigned to occurs.

 

ASC_VR.jpg

 


Edited by Taz1004
