
7800X3D, 7900X3D, 7950X3D..


Recommended Posts

4 hours ago, Hoirtel said:

Good information, thanks. They have said that the initial multicore implementation will use two cores, but will MT take away the advantages of V-Cache?

The cache advantage will probably remain, similar to Flight Simulator.

I5 13400F, 32GB DDR5 6200 CL30, RTX4070ti Super

2x 1tb m.2 (PCIe4.0)


7 hours ago, derneuemann said:

If the CPU now only needs 7.7 ms instead of 10 ms and the GPU still needs 10 ms, then you'd have an improvement from

1. 10 + 10 ms = 50 fps to
2. 7.7 + 10 ms = 56.5 fps

or, at high settings, from

1. 15 + 15 ms = 33 fps to
2. 11.5 + 15 ms = 38 fps.

However, if you have set up DCS so that you reach around 72 fps with low to medium settings, it should look like this:
CPU at 8 ms and the GPU at 6 ms.

1. 8 + 6 ms = 71.4 fps to
2. 6.1 + 6 ms = 82.6 fps

Are you sure the total frametime is the sum of GPU+CPU?

As far as I'm aware, and as the following browser simulation of frametimes shows, the CPU and GPU work (almost*) in parallel:

http://www.frametime.tech/

(language/"sprache" can be set to English, hint: check "Directly change frametimes" to manually set CPU/GPU frametimes and ignore the other settings for fast success.)

So when you have 10 ms CPU and 10 ms GPU, the result is not 20 ms (50 fps, as you mentioned) but 10 ms (+ 1 ms offset*), so roughly 91 fps.

(This is what I see in every frametime benchmark of DCS, by the way.)
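
To make the difference between the two models concrete, here is a minimal Python sketch (my own illustration, not anything from frametime.tech; the function names are made up, and the 1 ms offset is just the assumed hand-off cost mentioned above), reusing the CPU/GPU times from the quoted calculation:

# Sketch: naive "sum" model vs. parallel "max" model for combining CPU/GPU frametimes.

def fps_sum_model(cpu_ms, gpu_ms):
    # Serial assumption: the GPU only starts once the CPU is completely done.
    return 1000.0 / (cpu_ms + gpu_ms)

def fps_parallel_model(cpu_ms, gpu_ms, offset_ms=1.0):
    # Parallel assumption: the slower of the two sets the pace, plus a small assumed hand-off offset.
    return 1000.0 / (max(cpu_ms, gpu_ms) + offset_ms)

for cpu_ms, gpu_ms in [(10.0, 10.0), (7.7, 10.0), (15.0, 15.0), (8.0, 6.0)]:
    print(f"CPU {cpu_ms} ms, GPU {gpu_ms} ms: "
          f"sum model {fps_sum_model(cpu_ms, gpu_ms):.1f} fps, "
          f"parallel model {fps_parallel_model(cpu_ms, gpu_ms):.1f} fps")

For 10 ms / 10 ms this prints 50.0 fps for the sum model and 90.9 fps for the parallel model, which is exactly the difference discussed here.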


Edited by Tom Kazansky

On 2/27/2023 at 9:53 AM, ironhard said:

Perhaps XP12 Vulkan benchmarks would be more interesting than DX11. I wouldn't plan a system upgrade based on DX11, especially since the 5800X3D is still very relevant.

Paul's Hardware also shows an advantage in MSFS at 4K for the 7950X3D, but again only DX11.


Edited by skypickle

4930K @ 4.5, 32g ram, TitanPascal


38 minutes ago, Hoirtel said:

Interesting video on how 7950X3D works. 

I love Wendell, he's very knowledgeable and is able to impart that knowledge effectively. Nothing flashy, just good solid info delivered by someone who actually knows how things work.

  • Like 1

Windows 11 | ASUS B650E-F STRIX | AMD 7800X3D | G.Skill 64Gb DDR5 6200 30-36-36-48 w/ tuned secondary/tertiary | RTX 4090 undervolted curve | MSI MPG A1000G PSU | VKB MCG Gunfighter Ultimate + Rudder Pedals + WH Throttle |  HP Reverb G2


On 2/28/2023 at 5:30 PM, Hoirtel said:

I hope that a DCS player has managed to get one of the 7950X3Ds, and is happy to post results here!

Purchased a 7950X3D that will be arriving sometime next week; I'll do the transplant/build next weekend. I plan on posting a few benchmarks for you guys to see the performance gains compared to my 5800X, as well as a look at how multithreading improves things when it gets released.

  • Like 6

AMD Ryzen 9 7950X3D | ASRock X670E Steel Legend | 64GB (2x32GB) G.Skill Trident Z5 DDR5-6000MHz CL32 | XFX RX 7900 XTX Merc 310 24GB GDDR6 | Samsung 970 EVO Plus 2TB NVMe | Corsair HX1000i 1000W 80+ Platinum (2022) | Meta Quest 3 512GB | Dell S3422DWG 34" 144Hz UWQHD (3440x1440) | VPC MongoosT-50CM2 Base & Grip with 200mm VPC Flightstick Extension | VPC MongoosT-50CM3 Throttle | VPC ACE Collection Rudder Pedals | VPC Control Panel #2 & VPC SharKa-50 Control Panel


17 hours ago, derneuemann said:

 

If the CPU now only needs 7.7 ms instead of 10 ms and the GPU still needs 10 ms, then you'd have an improvement from

1. 10 + 10 ms = 50 fps to
2. 7.7 + 10 ms = 56.5 fps

or, at high settings, from

1. 15 + 15 ms = 33 fps to
2. 11.5 + 15 ms = 38 fps.

However, if you have set up DCS so that you reach around 72 fps with low to medium settings, it should look like this:
CPU at 8 ms and the GPU at 6 ms.

1. 8 + 6 ms = 71.4 fps to
2. 6.1 + 6 ms = 82.6 fps

You just have to see where you are, what you want and what it can do best.

Wrong calculations. You don't sum the CPU and GPU times to get the FPS; you take just the higher one.

Frametime is the time the GPU/CPU needs to finish a frame. The GPU has to complete X work and the CPU Y work to show the frame on screen, and both start at the same time, not one after the other:

To render frame X, the GPU needs 10 ms and the CPU 7 ms. Both start at the same time and, once the CPU finishes, it waits until the GPU finishes (3 ms) before working on the next frame. In this case your "global" frametime is 10 ms and you get 100 fps.

If your CPU finishes before your GPU, you are GPU limited; if your GPU finishes before your CPU, you are CPU limited.
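
A rough Python sketch of that per-frame timeline (my own illustration of the simplified model above, using the assumed 7 ms CPU / 10 ms GPU numbers; real engines can buffer frames ahead, so this is not literally how DCS schedules work):

CPU_MS, GPU_MS = 7.0, 10.0   # assumed per-frame work from the example above

t = 0.0
for frame in range(1, 4):
    cpu_done = t + CPU_MS
    gpu_done = t + GPU_MS
    frame_end = max(cpu_done, gpu_done)   # the frame is ready when both are done
    cpu_idle = frame_end - cpu_done       # time the CPU spends waiting on the GPU
    print(f"frame {frame}: presented at {frame_end:.0f} ms, CPU idled {cpu_idle:.0f} ms")
    t = frame_end                         # both start the next frame together

frametime = max(CPU_MS, GPU_MS)
print(f"effective frametime {frametime:.0f} ms -> {1000 / frametime:.0f} fps")

It prints a 10 ms cadence with the CPU idling 3 ms per frame, i.e. 100 fps and a GPU-limited system, exactly as described.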

  • Like 2

NZXT H9 Flow Black | Intel Core i5 13600KF OCed P5.6 E4.4 | Gigabyte Z790 Aorus Elite AX | G.Skill Trident Z5 Neo DDR5-6000 32GB C30 OCed 6600 C32 | nVidia GeForce RTX 4090 Founders Edition |  Western Digital SN770 2TB | Gigabyte GP-UD1000GM PG5 ATX 3.0 1000W | SteelSeries Apex 7 | Razer Viper Mini | SteelSeries Artics Nova 7 | LG OLED42C2 | Xiaomi P1 55"

Virpil T-50 CM2 Base + Thrustmaster Warthog Stick | WinWing Orion 2 F16EX Viper Throttle  | WinWing ICP | 3 x Thrustmaster MFD | Saitek Combat Rudder Pedals | Oculus Quest 2

DCS World | Persian Gulf | Syria | Flaming Cliff 3 | P-51D Mustang | Spitfire LF Mk. IX | Fw-109 A-8 | A-10C II Tank Killer | F/A-18C Hornet | F-14B Tomcat | F-16C Viper | F-15E Strike Eagle | M2000C | Ka-50 BlackShark III | Mi-24P Hind | AH-64D Apache | SuperCarrier


What worries me is the RAM support for the 9000 series.

It looks more flexible, with the possibility to use four 2-rank sticks, but in a 4-stick configuration the Ryzen 9 7900 is limited to DDR5-3600, and if one really wants to take advantage of the cache, you'll need a hell of a high frequency and, if possible, interleaving as well.

The whole idea of the cache was low latency. With DDR5 that can be compensated for with higher frequencies, but this frequency range isn't there yet, so unless AMD allows their memory controllers to support faster RAM in a 4-stick configuration, the gain will remain limited.

At least we don't have this issue with the 5800X3D: with CL14 the gain is very noticeable; low latency, 3600 MHz, interleaving...

It would be much better if the CPUs designed for DDR5 could support a similar RAM combination.

 


Edited by Thinder

Win 11Pro. Corsair RM1000X PSU. ASUS TUF Gaming X570-PLUS [WI-FI], AMD Ryzen 7 5800X 3D, Sapphire Radeon RX 7900 XTX Nitro+ Vapor-X 24GB GDDR6. 32 GB G.SKILL TridentZ RGB Series (4 x 8GB) RAM Cl14 DDR4 3600. Thrustmaster HOTAS WARTHOG Thrustmaster. TWCS Throttle. PICO 4 256GB.

WARNING: Message from AMD: Windows Automatic Update may have replaced their driver by one of their own. Check your drivers.

M-2000C. Mirage F1. F/A-18C Hornet. F-15C. F-5E Tiger II. MiG-29 "Fulcrum".  Avatar: Escadron de Chasse 3/3 Ardennes. Fly like a Maineyak.

 


Obviously I won't know until I can test this, but I have a 7900X3D arriving Friday (full build specs in my signature) and will post results here, probably as a full post.

Currently I'm running a 5900X with a 3090. I'll benchmark that system first and then attempt to replicate the settings on the 7900X3D/4090 build, to give some idea of what to expect from the combined upgrade for VR users (I'm on a G2).

I really had to think about which one to get, 7900X3D vs 7950X3D.

My thought process goes like this.

In the past, the non-3D parts, the 5900X for example, would do better for DCS and other single-threaded, CPU-bound applications than the 5950X because the base clock speed is slightly higher.

On top of this, it has been confirmed that the 7900X3D is a 6 + 6 design, meaning that its 6 cores have more cache available to them than 8 cores would if it were an 8-core design.

DCS won't use more than 6 cores even with the Vulkan and "multi-threading" upgrades anyway, so I'm safe here. By the time that changes I'm on AM5 and will probably have either upgraded the CPU to an 8000/9000 part or be on AM10 by then. lol

Either way, the parts should all be here by next week, and I will take some time to build it out and test it to give some results here.

My 5900X/3090 build is severely CPU bound, which is why I hadn't upgraded to a 4090 yet; I didn't see the point. Hopefully the 7900X3D can push enough draw calls to the 4090 to see some nice gains. We'll see.

Will keep you posted

AMD 7900x3D | Asus ROG Crosshair X670E Hero | 64GB DC DDR5 6400 Ram | MSI Suprim RTX 4090 Liquid X | 2 x Kingston Fury 4TB Gen4 NVME | Corsair HX1500i PSU | NZXT H7 Flow | Liquid Cooled CPU & GPU | HP Reverb G2 | LG 48" 4K OLED | Winwing HOTAS


13 minutes ago, trevoC said:

On top of this, it has been confirmed that the 7900X3D is a 6 + 6 design, meaning that its 6 cores have more cache available to them than 8 cores would if it were an 8-core design.

That's not how it works. All 6 or 8 cores on that CCD have equal access to all the extra L3 cache on top of it.


Edited by some1

Hardware: VPForce Rhino, FSSB R3 Ultra, Virpil WarBRD, Hotas Warthog, Winwing F15EX, Slaw Rudder, GVL224 Trio Throttle, Thrustmaster MFDs, Saitek Trim wheel, Trackir 5, Quest Pro


5 hours ago, 5ephir0th said:

Wrong calculations. You don't sum the CPU and GPU times to get the FPS; you take just the higher one.

Frametime is the time the GPU/CPU needs to finish a frame. The GPU has to complete X work and the CPU Y work to show the frame on screen, and both start at the same time, not one after the other:

To render frame X, the GPU needs 10 ms and the CPU 7 ms. Both start at the same time and, once the CPU finishes, it waits until the GPU finishes (3 ms) before working on the next frame. In this case your "global" frametime is 10 ms and you get 100 fps.

If your CPU finishes before your GPU, you are GPU limited; if your GPU finishes before your CPU, you are CPU limited.

 

But the GPU needs information from the CPU for the image. That it's not simply 1 + 1 is fine, but it can't be right that they don't build on each other at all, either.

If you're sure about that, thanks for the tip!

 

That also means that an R5 5600 is actually enough in 9 out of 10 cases.

Only with everything on Ultra, including traffic and such, does the 5600 climb above 10-13 ms. Bad servers or bad software ensure that it's not 10 out of 10 cases.


Edited by derneuemann

I5 13400F, 32GB DDR5 6200 CL30, RTX4070ti Super

2x 1tb m.2 (PCIe4.0)


2 hours ago, Thinder said:

What worries me is the RAM support for the 9000 series.

It looks more flexible with the possibility to use 4 X 2 ranks per stick but in a 4 X sticks configuration, the 9 7900 is limited to DDR5-3600, but if one really wants to take advantage of the cache, you'll need a hell of a high frequency and if possible, interleaving as well.

The whole idea of the cache was low latency, with DDR5 it can be compensated with higher frequencies but this frequency range is not it yet, so unless AMD allows their CPU controllers to support faster RAM in 4 X Stick configuration, the gain will remain limited.

At least we don't have this issue with the 5800X3D, with Cl 14 the gain is very noticeable, low latency, 3600MHz, interleaving...

it would be much better if the CPU designed to take DDR5 could see a similar RAM bounding made possible.

 

 

9000 series? I assume you mean the 7000X3D series? Reviewers seem to use the same spec RAM as for the non-3D parts, i.e. 6000 with as low a latency as they can get. Seems to work fine??

What I would be interested to know is whether these are as memory sensitive as the non-3D parts, as I understand the 5800X3D isn't as memory sensitive as its non-3D counterparts.

Would we be able to buy 6000 CL40 and get the same performance? I'm sure a reviewer will test this soon. Probably HUB.


Edited by Hoirtel

1 hour ago, derneuemann said:

But the GPU needs information from the CPU for the image. That it's not simply 1 + 1 is fine, but it can't be right that they don't build on each other at all, either.

If you're sure about that, thanks for the tip!

If you want to see how it works just go to

http://www.frametime.tech/

No installation, no cookie terror, just an easy way to learn about what happens with frame times and FPS like I mentioned in my post above:

17 hours ago, Tom Kazansky said:

Are you sure the total frametime is the sum of GPU+CPU?

As far as I'm aware, and as the following browser simulation of frametimes shows, the CPU and GPU work (almost*) in parallel:

http://www.frametime.tech/

(language/"sprache" can be set to English, hint: check "Directly change frametimes" to manually set CPU/GPU frametimes and ignore the other settings for fast success.)

So when you have 10 ms CPU and 10 ms GPU, the result is not 20 ms (50 fps, as you mentioned) but 10 ms (+ 1 ms offset*), so roughly 91 fps.

(This is what I see in every frametime benchmark of DCS, by the way.)

 

 

  • Like 1

1 hour ago, Tom Kazansky said:

If you want to see how it works just go to

http://www.frametime.tech/

No installation, no cookie terror, just an easy way to learn about what happens with frame times and FPS like I mentioned in my post above:

 

Looks as if the transfer of the scene from the CPU to the GPU can also take up to 5-6 ms in DCS ...

But thanks, it was very interesting, and I learned something new again.

I5 13400F, 32GB DDR5 6200 CL30, RTX4070ti Super

2x 1tb m.2 (PCIe4.0)


This should answer some questions about the RAM reliance of the X3D chips.

Also, some suggested that RAM is limited to DDR5-3600 in a 4-stick config. This isn't accurate, at least not in practice: 4x16GB (these are single rank) works just fine at 6000 MT/s.

As for the 7900X3D, this further suggests that it mostly exists to use up rejected dies that couldn't have 8 cores.

 


Edited by EightyDuce

Windows 11 | ASUS B650E-F STRIX | AMD 7800X3D | G.Skill 64Gb DDR5 6200 30-36-36-48 w/ tuned secondary/tertiary | RTX 4090 undervolted curve | MSI MPG A1000G PSU | VKB MCG Gunfighter Ultimate + Rudder Pedals + WH Throttle |  HP Reverb G2


5 hours ago, Hoirtel said:

9000 series? I assume you mean the 7000X3D series? Reviewers seem to use the same spec RAM as for the non-3D parts, i.e. 6000 with as low a latency as they can get. Seems to work fine??

The Ryzen 9 7950X3D and the rest of them running on DDR5...

 

Quote

What I would be interested to know is whether these are as memory sensitive as the non-3D parts, as I understand the 5800X3D isn't as memory sensitive as its non-3D counterparts.

 

It's the opposite. Since I upgraded from one to the other, I had the opportunity to test them back to back with a 3600 MHz RAM kit. Using the same settings in 3DMark Pro, the Ryzen 7 5800X3D runs circles around the 5600X: the results with CL14 and a 4 x 1-rank kit are good for the 5600X, but they are nothing short of impressive in the case of the 5800X3D.

The score percentages are computed relative to the previous RAM kit or CPU.

5600X, 32GB of CL14 3200 MHz, 4 x 1 rank:

[Image: GSKILL.jpg]

5800X3D, 32GB of CL14 3600 MHz, 4 x 1 rank. Improvements over the 5600X:

[Image: Gains-Stage-1.jpg]

So obviously the combination of 96MB of L3 cache and CL14 RAM (plus interleaving in the case of a 4 x 1-rank kit) is a lot more efficient than CL14 on its own with the 5600X upgraded from a CL16 kit. The 5600X clocks faster at 4.7 GHz vs 4.5 GHz and during the test was boosted with Ryzen Master, as was the GPU (EVGA 1080 Ti) with Afterburner; there was no boost for the upgraded configuration.

Note that instead of overclocking my RAM, I upgraded it to a 64GB kit, CL14 3600 MHz, 4 x 1 rank, from the same manufacturer. So when they said in the 5800X3D video that lower latency was the goal of the cache, it really was a clue.

 

 


Edited by Thinder

Win 11Pro. Corsair RM1000X PSU. ASUS TUF Gaming X570-PLUS [WI-FI], AMD Ryzen 7 5800X 3D, Sapphire Radeon RX 7900 XTX Nitro+ Vapor-X 24GB GDDR6. 32 GB G.SKILL TridentZ RGB Series (4 x 8GB) RAM Cl14 DDR4 3600. Thrustmaster HOTAS WARTHOG Thrustmaster. TWCS Throttle. PICO 4 256GB.

WARNING: Message from AMD: Windows Automatic Update may have replaced their driver by one of their own. Check your drivers.

M-2000C. Mirage F1. F/A-18C Hornet. F-15C. F-5E Tiger II. MiG-29 "Fulcrum".  Avatar: Escadron de Chasse 3/3 Ardennes. Fly like a Maineyak.

 


1 hour ago, EightyDuce said:

This should answer some questions about the RAM reliance of the X3D chips.

Also, some suggested that RAM is limited to DDR5-3600 in a 4-stick config. This isn't accurate, at least not in practice: 4x16GB (these are single rank) works just fine at 6000 MT/s.

As for the 7900X3D, this further suggests that it mostly exists to use up rejected dies that couldn't have 8 cores.

 

 

Just watched the HUB video, pretty good. It seems to suggest it's not quite as sensitive, but buying better RAM is advisable. Now I have a problem, as there are still no 64GB kits available at anything less than CL40 for 6000, but I can get a great 6000 32GB kit at CL30, hmmmmmm

Going to watch the 7900 video.


Here at 5:37, they explain why they developed the 3D cache, and it's all about access time; in other words, latency.

That's why I have a problem with the DDR5 thing: there isn't a single RAM manufacturer who has come up with the equivalent of B-die, and their chips cannot run stable at lower latency (CL16) and high frequencies together, so I'll sit this one out and wait until the technology serves a purpose other than financing their R&D, because right now that is what's going on.

Win 11Pro. Corsair RM1000X PSU. ASUS TUF Gaming X570-PLUS [WI-FI], AMD Ryzen 7 5800X 3D, Sapphire Radeon RX 7900 XTX Nitro+ Vapor-X 24GB GDDR6. 32 GB G.SKILL TridentZ RGB Series (4 x 8GB) RAM Cl14 DDR4 3600. Thrustmaster HOTAS WARTHOG Thrustmaster. TWCS Throttle. PICO 4 256GB.

WARNING: Message from AMD: Windows Automatic Update may have replaced their driver by one of their own. Check your drivers.

M-2000C. Mirage F1. F/A-18C Hornet. F-15C. F-5E Tiger II. MiG-29 "Fulcrum".  Avatar: Escadron de Chasse 3/3 Ardennes. Fly like a Maineyak.

 


8 minutes ago, Thinder said:

Here at 5:37, they explain why they developed the 3D cache, and it's all about access time; in other words, latency.

That's why I have a problem with the DDR5 thing: there isn't a single RAM manufacturer who has come up with the equivalent of B-die, and their chips cannot run stable at lower latency (CL16) and high frequencies together, so I'll sit this one out and wait until the technology serves a purpose other than financing their R&D, because right now that is what's going on.

You are right, there is no CL16 DDR5.

What you can't seem to understand is that it doesn't have as much of an impact as you think, because Zen 4 X3D and non-X3D parts on DDR5-6000 CL30/32 are spanking their Zen 3 counterparts on DDR4 with CL16/14 RAM due to sheer bandwidth.

What is the equivalent of B-die anyway? Hynix A/M dies run circles around Samsung B-die DDR4 in bandwidth.

  • Like 1

Windows 11 | ASUS B650E-F STRIX | AMD 7800X3D | G.Skill 64Gb DDR5 6200 30-36-36-48 w/ tuned secondary/tertiary | RTX 4090 undervolted curve | MSI MPG A1000G PSU | VKB MCG Gunfighter Ultimate + Rudder Pedals + WH Throttle |  HP Reverb G2


31 minutes ago, Hoirtel said:

Just watched the HUB video, pretty good. It seems to suggest it's not quite as sensitive, but buying better RAM is advisable. Now I have a problem, as there are still no 64GB kits available at anything less than CL40 for 6000, but I can get a great 6000 32GB kit at CL30, hmmmmmm

Going to watch the 7900 video.

Is that a location thing? 

 

Because there are plenty of CL30 and CL32 64GB kits available, at least here in the US. 

Also, depending on what the CL40 RAM is (Kingston, for example, could actually be Hynix M-die), it should be able to hit CL32 and, more importantly, tighten up its secondary and tertiary timings for a significant boost in performance.

 

 

Windows 11 | ASUS B650E-F STRIX | AMD 7800X3D | G.Skill 64Gb DDR5 6200 30-36-36-48 w/ tuned secondary/tertiary | RTX 4090 undervolted curve | MSI MPG A1000G PSU | VKB MCG Gunfighter Ultimate + Rudder Pedals + WH Throttle |  HP Reverb G2


2 hours ago, EightyDuce said:

Is that a location thing? 

 

Because there are plenty of CL30 and CL32 64GB kits available, at least here in the US. 

Also, depending on what the CL40 RAM is (Kingston, for example, could actually be Hynix M-die), it should be able to hit CL32 and, more importantly, tighten up its secondary and tertiary timings for a significant boost in performance.

 

 

Yes, it is a location thing; anything else I see is all US imports, which are really expensive. And to add insult to injury, the CL40 kits are Corsair, which at that latency will be Samsung. I can get a Kingston Beast 64GB 5600 CAS 40-40-40-77. I think it will be worth getting 32GB and waiting.

When I add the above to the ASUS QVL list it does say SK Hynix.....

The other thing is that I have 64GB of DDR4 now and haven't seen usage go higher than 28GB.


Edited by Hoirtel

4 hours ago, EightyDuce said:

This should answer some questions about the RAM reliance of the X3D chips.

Also, some suggested that RAM is limited to DDR5-3600 in a 4-stick config. This isn't accurate, at least not in practice: 4x16GB (these are single rank) works just fine at 6000 MT/s.

As for the 7900X3D, this further suggests that it mostly exists to use up rejected dies that couldn't have 8 cores.

 

 

So the GN review is a bit damning of the 7900X3D. They also mention the chance of reduced clocks on the 7800X3D; I'm not sure AMD will let that clock higher than either of those two, so the 7950X3D may still be the best performer of the three, and the 7800X3D probably won't compete with the 13700K.


Just got back from a business trip late last night and had the new 7950X3D waiting for me. Hope to do the build today. Going to pair it with a Crosshair Hero X670E, 64GB of Trident Z DDR5 CL30 6000 EXPO kit, and an RTX 4090 FE...

[Image: IMG_0301.jpg]


Edited by Greekbull
  • Like 6

AMD Ryzen 9 7950X3D | ASUS Crosshair Hero X670E | 64GB G Skill Trident Z DDR5 6000 | Nvidia RTX 4090 FE| Samsung EVO Plus 6 TB M.2 PCIe SSDs | TM Hornet Stick/WinWing Hornet Throttle and MIP | VPC T-50 Stick Base | TM TPR Rudder Pedals W/Damper | Varjo Aero/Pimax Crystal

VFA-25 Fist of the Fleet

Carrier Strike Group One(CSG-1) Discord
 


On 3/2/2023 at 2:20 PM, EightyDuce said:

You are right, there is no CL16 DDR5.

What you can't seem to understand is that it doesn't have as much of an impact as you think, because Zen 4 X3D and non-X3D parts on DDR5-6000 CL30/32 are spanking their Zen 3 counterparts on DDR4 with CL16/14 RAM due to sheer bandwidth.

What is the equivalent of B-die anyway? Hynix A/M dies run circles around Samsung B-die DDR4 in bandwidth.

That's the thing about it: the CL value is not an absolute number, so comparing it across different data rates is comparing apples and oranges.

RL = CL * 2000 / DR

Where:

RL = real latency in nanoseconds

CL = CAS latency in clock cycles

DR = data rate in MT/s (what RAM kits usually label as "MHz")

So, for 3000 MT/s CL16 vs 6000 MT/s CL30:

3000 MT/s CL16 = 10.67 nanoseconds

6000 MT/s CL30 = 10 nanoseconds

 

https://www.teamgroupinc.com/en/blogs/memory-latency-en
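
For anyone who wants to plug in their own kit, here is a quick Python check of that formula (the kits listed are just common examples I've added, not figures from the post above):

def real_latency_ns(cas_latency, data_rate_mts):
    # RL = CL * 2000 / DR, with DR as the DDR data rate in MT/s.
    return cas_latency * 2000 / data_rate_mts

for label, cl, rate in [
    ("DDR4-3200 CL14", 14, 3200),
    ("DDR4-3600 CL16", 16, 3600),
    ("DDR5-6000 CL30", 30, 6000),
    ("DDR5-6000 CL40", 40, 6000),
]:
    print(f"{label}: {real_latency_ns(cl, rate):.2f} ns")

This prints 8.75, 8.89, 10.00 and 13.33 ns respectively, which is why a DDR5-6000 CL30 kit is roughly on par with good DDR4 in first-word latency while a CL40 kit is clearly behind.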


Edited by ironhard
  • Thanks 1
