
DCS and SSD speed



Hello,

 

I've been experimenting with a couple different SSDs in the hope of reducing the load times for DCS, especially in complex missions with the F-14.

 

A little about my system:

Windows 10-64, 64GB ram, i7-6700 (3.4GHz), GTX 1080 8GB, and a Samsung 850 evo SSD boot drive.

 

I run DCS off an M.2 Samsung 860 evo, which gets CrystalDiskMark results similar to the 850 evo's (about 550 MB/s sequential read/write, 40-120 MB/s for random reads and writes).

 

I've also tried running DCS off a Samsung M.2 970 pro connected with a PCIe M.2 adapter, which benchmarks about 6x faster than the 850/860 evos for sequential reads and writes, but about the same for random reads and writes.

 

When launching DCS, I didn't really notice a 6x reduction in wait times, nor did I ever see disk reads exceed about 100 MB/s in Task Manager. This makes me think that even a regular SSD exceeds the performance DCS appears to require when launching (or I'm interpreting the results incorrectly).

 

One anecdotal improvement I saw: panning around in external view seemed significantly smoother on the 970 pro than on the others, but that could have been down to the relatively simple mission I had created for testing.

 

A few questions:

 

What are some good guidelines for DCS disk performance? Is faster better, or is there a limit to how fast DCS requests data?

 

Is a 970 pro with 3350 MB/s sequential read/write speeds just plain overkill for DCS?

Would I be better served by making the 970 pro my boot drive and running DCS on a partition of the same drive?

 

I use CrystalDiskMark for SSD speed tests, but I'd be interested in benchmarking real DCS performance if there is a better tool than watching Task Manager while DCS is running.
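If anyone wants something more repeatable than eyeballing Task Manager, a quick Python script can measure real read throughput over a folder of files (e.g. a DCS install). This is just a rough sketch; the path in the example is illustrative, and Windows caches files aggressively, so run it right after a reboot for a "cold" number.

```python
# Rough sketch: walk a folder, read up to limit_bytes of its files, and
# report the achieved read rate in MB/s. Skips files it can't open.
import os
import time

def read_throughput(folder, limit_bytes=500 * 1024 * 1024):
    total = 0
    start = time.perf_counter()
    for root, _dirs, files in os.walk(folder):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as f:
                    while chunk := f.read(1024 * 1024):
                        total += len(chunk)
            except OSError:
                continue  # locked or unreadable file; skip it
            if total >= limit_bytes:
                break
        if total >= limit_bytes:
            break
    elapsed = time.perf_counter() - start
    return total / 1e6, total / 1e6 / elapsed  # (MB read, MB/s)

if __name__ == "__main__":
    # Example path -- point this at your own DCS install folder.
    mb, rate = read_throughput(r"C:\DCS World")
    print(f"read {mb:.0f} MB at {rate:.0f} MB/s")
```

Compare the cold-cache number per drive; it's a closer stand-in for launch behavior than a synthetic sequential benchmark.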

 

Any suggestions are greatly appreciated.

Windows 10 Pro 64-bit
i9-9900k 3.6 GHz, 64 GB 3200 MHz ram
ASUS ROG Strix Z390-E Gaming
RTX 4090, Samsung 970 pro NVMe


I have seen some test results here on this forum indicating similar results between SATA SSDs and M.2 SSDs; the difference in load times was small.

 

I have done similar tests with a SATA SSD and an Intel 750 series PCIe SSD, and there wasn't much difference in load times. I also don't see any significant disk access once DCS is loaded and running. My thinking is that as DCS launches, it reads data from disk and builds the required data structures in RAM and VRAM from that data, so SSD speed becomes secondary because the CPU is doing a lot of work during the load process.
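To illustrate why raw sequential speed matters less than it looks: a game load is dominated by many small reads plus per-file overhead (open/close, filesystem metadata, CPU work per asset), not one long stream. A toy Python sketch comparing one large file against many small files of the same total size (sizes are arbitrary; on a warm OS cache the gap shrinks, which is itself part of the point):

```python
# Toy comparison: read the same amount of data as one big file vs many
# small files, and time each. The per-file overhead is what separates them.
import os
import tempfile
import time

def timed_read(paths):
    start = time.perf_counter()
    for p in paths:
        with open(p, "rb") as f:
            f.read()
    return time.perf_counter() - start

def make_files(folder, total_mb=16, small_kb=16):
    big = os.path.join(folder, "big.bin")
    with open(big, "wb") as f:
        f.write(os.urandom(total_mb * 1024 * 1024))
    small = []
    for i in range(total_mb * 1024 // small_kb):
        p = os.path.join(folder, f"s{i}.bin")
        with open(p, "wb") as f:
            f.write(os.urandom(small_kb * 1024))
        small.append(p)
    return [big], small

if __name__ == "__main__":
    tmp = tempfile.mkdtemp()
    big, small = make_files(tmp)
    print(f"one 16 MB file     : {timed_read(big):.3f}s")
    print(f"1024 x 16 KB files : {timed_read(small):.3f}s")
```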

 

You could try MSI Afterburner; it allows you to monitor many aspects of DCS.

 

For DCS the performance triangle is CPU clock, RAM speed, and GPU. At the moment, if I read between the lines, the high-performance rigs sit at around a 5 GHz CPU clock, >3000 MHz RAM, and a 2080 Ti, which is pretty much a formula build.

 

Rift VR

DCS local SP basic mission: ~4 GB RAM, ~6 GB VRAM

DCS online MP basic aerobatics server: ~6 GB RAM, ~10.7 GB VRAM

Control is an illusion which usually shatters at the least expected moment.

Gazelle Mini-gun version is endorphins with rotors. See above.

 

Currently rolling with an Asus Z390 Prime, 9600K, 32GB RAM, SSD, 2080Ti and Windows 10 Pro, Rift CV1. bu0836x and Scratch Built Pedals, Collective and Cyclic.


I tested this with a new 500 GB 860 evo in late 2018, for DCS and a bunch of other games. I don't have an NVMe drive to compare it to, but the read rates of the 860 pretty much never exceeded 350 MB/s, and that only for a split second, with large chunks of the loading process spent at very minuscule read speeds. So I tend to agree that generally more than a normal SSD isn't needed...


Dunno about FSB, maybe? However, there seems to be a fair amount of CPU usage and data moved around during startup, opening a mission, etc. I am running the "stable" version currently for testing.

 

Single-player restart, DCS Caucasus map, including Oculus Rift.

 

DCS startup including Rift VR: about 6 seconds; up to 3 GB RAM and 1.5 GB VRAM

Open simple free flight mission: about 10 seconds; up to 8.7 GB RAM and up to 4.2 GB VRAM

Choose role: about 5 seconds; up to 11.1 GB RAM and 6.9 GB VRAM

In module: about a second; down to 9.6 GB RAM and 6.9 GB VRAM



Want a noticeable improvement? Try RAID-1 SSDs.

 

RAID 1 would do nothing since it's just a mirror. I think you meant RAID 0, which stripes across two disks. But for SSDs it doesn't make much sense even with RAID 0; disk access isn't going to be the bottleneck, given the SSD vs NVMe testing. For traditional HDDs I can see the benefit, but I would never do it due to doubling the risk of data loss.
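For anyone following along, here's a back-of-envelope model of the distinction (numbers are illustrative, not measurements): RAID 0 splits every stream across n disks, so even a single large read scales with n; RAID 1 stores full copies, so a controller can only parallelize *independent* requests across the mirrors.

```python
# Toy throughput model for n identical disks rated at single_mb_s each.
def raid0_seq(single_mb_s, n):
    return single_mb_s * n          # striping: one stream uses all disks

def raid1_seq(single_mb_s, n):
    return single_mb_s              # one stream reads one full copy

def raid1_parallel(single_mb_s, n, queue_depth):
    # Independent concurrent reads can be spread across the mirrors.
    return single_mb_s * min(n, queue_depth)

print(raid0_seq(550, 2))            # 1100
print(raid1_seq(550, 2))            # 550
print(raid1_parallel(550, 2, 4))    # 1100, but only with parallel requests
```

So RAID 1 can approach RAID 0 read throughput only under a multi-request workload with a controller smart enough to distribute reads; a single sequential stream sees one disk's speed.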

hsb

HW Spec in Spoiler

---

 

i7-10700K Direct-To-Die/OC'ed to 5.1GHz, MSI Z490 MB, 32GB DDR4 3200MHz, EVGA 2080 Ti FTW3, NVMe+SSD, Win 10 x64 Pro, MFG, Warthog, TM MFDs, Komodo Huey set, Rverbe G1

 


RAID 1 would do nothing since it's just a mirror. I think you meant RAID 0, which stripes across two disks. But for SSDs it doesn't make much sense even with RAID 0; disk access isn't going to be the bottleneck, given the SSD vs NVMe testing. For traditional HDDs I can see the benefit, but I would never do it due to doubling the risk of data loss.

 

As far as I'm aware, RAID 0 and RAID 1 work the same while reading... it's when writing that they differ.


Try it... afterwards you will surely want to remove your comment.

 

Well, I suppose in theory a proper RAID controller with the smarts *could* issue reads to both disks and use the faster of the two. But with SSDs it's a moot point. Even with magnetic HDDs, all the stars would have to line up for a READ operation to be better, meaning one drive happened to serve from the inner tracks and the other from the outer tracks.

 

So w/o some facts, I'll chalk it up to placebo effect. If you've done some testing, do let us know.


 


Even with rotating HDDs, the resulting read throughput is about twice that of a single-drive setup.

This is basic admin knowledge; try it or not, it's up to you.

 

Again, basic admin knowledge would mean that you're talking about RAID 0 and not RAID 1. But, no skin off my back. I just don't want someone following your advice and duplicating the HD.


 


Google is a better friend than me

 

Ask it for some enlightenment

 

 

Ugh. Well, at least your company servers are protected with RAID 1 mirroring. And if you think it's reading faster, who am I to argue.


 


As long as the RAID 1 controller uses multiplexing, yes (reading can be faster), but many people use the cheap controller (fake RAID :lol:) on the motherboard... Are you sure you need to RAID the SSDs to play adequately? :yawn:



Attache ta tuque avec d'la broche.


Yes, sure... I didn't realize I was being impolite, so please forgive that.

Btw, RAID 1 means reading/writing the SAME data from/to several sources/destinations, is that clear?

So when you write data to the filesystem, it must write the SAME data to several disks. Yes, that's right: this process is more time/resource consuming.

BUT when you are reading data, the SAME data block can be read from several places at the SAME time, and thus the speed/throughput is double or triple, depending on the number of drives.

What more can I say? I can't afford to build a test lab for the sole purpose of convincing people.

Google will definitely help with this, and again, sorry if this hurts your ears/eyes.


I ran some fairly unscientific tests with two SSDs on my system.

 

As a reminder, here are my system specs

Windows 10 Pro 64-bit

i7-6700 3.4 GHz

64 GB 2333 MHz ram

GTX 1080 (8GB)

boot drive is a Samsung 850 evo

 

S:\ is Samsung 860 evo M.2 on the motherboard (my longstanding DCS install)

Z:\ is a Samsung 970 pro in an M.2 PCIe adapter, which holds a copy of my DCS install from the S:\

 

I ran the latest open beta of DCS World (as of March 28, 2019). The mission I chose was a simple battle mission that my friends and I play fairly often. All data here was recorded running this same mission in single-player mode, offline.

 

Time T = 00:00 is when the first sign of the DCS login window appeared.

The next time of note is when the Main Menu appears.

The next is when the "Start" prompt appears after selecting a mission.

The next is when the "Choose Coalition" prompt appears.

The last is when the "Welcome to Simple Battle" message appears in the mission.
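For anyone wanting to reproduce this, the timestamps can be taken with a trivial stopwatch script: press Enter as each phase appears, and it normalizes everything to the first split. The phase names below are just the ones I described above.

```python
# Manual stopwatch: records a split each time you press Enter and reports
# each phase's time relative to the first split. The wait/clock parameters
# exist so the logic can be tested without a keyboard.
import time

PHASES = [
    "login window",
    "main menu",
    "Start prompt",
    "Choose Coalition",
    "in mission",
]

def stopwatch(names, wait=input, clock=time.perf_counter):
    splits = []
    for name in names:
        wait(f"press Enter when '{name}' appears... ")
        splits.append(clock())
    t0 = splits[0]
    return [(name, t - t0) for name, t in zip(names, splits)]

if __name__ == "__main__":
    for name, t in stopwatch(PHASES):
        print(f"{t:7.1f}s  {name}")
```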

 

Here is the normalized data (T = 00:00 at appearance of login window)

 

0xgcp5q.jpg

 

and a graph representation of the data

 

rEUov50.jpg

 

What's interesting is that the PCIe-adapted M.2 970 pro takes longer to launch DCS and be ready in the cockpit than the motherboard-mounted 860 evo M.2, despite benchmarking roughly 6x faster than the 860 evo in sequential reads.

 

The "reboot" launch times were recorded after a fresh Windows reboot.

The "second" launch times were recorded after having run DCS, without rebooting between launches.

The "launch" launch times were recorded after having run DCS several times from both SSDs.

 

These results were very surprising to me. Maybe the 970 pro is better suited as a boot drive or to some other workload. I haven't had time to repeat these tests under the same conditions to get a sense of repeatability, so take these data for what they're worth.



Why is there no PCIe M.2 slot on your motherboard? You have a 6700K/Z170 combo, and that usually has one ready to use, or even two.

 

Either way, something must be wrong, as all the other tests on the net tell us how much faster an NVMe SSD is than a SATA one. Something is amiss somewhere.

Gigabyte Aorus X570S Master - Ryzen 5900X - Gskill 64GB 3200/CL14@3600/CL14 - Asus 1080ti EK-waterblock - 4x Samsung 980Pro 1TB - 1x Samsung 870 Evo 1TB - 1x SanDisc 120GB SSD - Heatkiller IV - MoRa3-360LT@9x120mm Noctua F12 - Corsair AXi-1200 - TiR5-Pro - Warthog Hotas - Saitek Combat Pedals - Asus PG278Q 27" QHD Gsync 144Hz - Corsair K70 RGB Pro - Win11 Pro/Linux - Phanteks Evolv-X 


Yes, sure... I didn't realize I was being impolite, so please forgive that.

Btw, RAID 1 means reading/writing the SAME data from/to several sources/destinations, is that clear?

So when you write data to the filesystem, it must write the SAME data to several disks. Yes, that's right: this process is more time/resource consuming.

BUT when you are reading data, the SAME data block can be read from several places at the SAME time, and thus the speed/throughput is double or triple, depending on the number of drives.

What more can I say? I can't afford to build a test lab for the sole purpose of convincing people.

Google will definitely help with this, and again, sorry if this hurts your ears/eyes.

 

Instead of Google, you should buy some Adaptec/LSI controllers from eBay and experiment.

 

You will then see for yourself that RAID 1 has a few advantages over the other RAID levels, but the effect you describe is almost never present in real life.

 

What makes RAID 1 shine is access latency: it is as good as a single drive, and no other RAID level can match that. In fact, nothing beats a single drive except a RAID 1 whose platters happen to be luckily aligned (HDDs only).

 

You will not notice ANY improvement in I/O with RAID 1 vs a single drive in a mixed real-world scenario.



This is how my motherboard manual describes the M.2 slot where I had my S:\ 860 evo installed

 

 

1 x M.2 Socket 3 with M Key, type 2242/2260/2280/22110 storage devices

support (both SATA & PCIE 3.0 x4 mode)

 

and how each SSD benchmarks

 

S:\ Samsung 860 evo M.2 (on motherboard slot)

-----------------------------------------------------------------------

CrystalDiskMark 5.5.0 x64 © 2007-2017 hiyohiyo

Crystal Dew World : http://crystalmark.info/

-----------------------------------------------------------------------

* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]

* KB = 1000 bytes, KiB = 1024 bytes

 

Sequential Read (Q= 32,T= 1) : 553.888 MB/s

Sequential Write (Q= 32,T= 1) : 531.741 MB/s

Random Read 4KiB (Q= 32,T= 1) : 382.261 MB/s [ 93325.4 IOPS]

Random Write 4KiB (Q= 32,T= 1) : 339.864 MB/s [ 82974.6 IOPS]

Sequential Read (T= 1) : 535.780 MB/s

Sequential Write (T= 1) : 498.315 MB/s

Random Read 4KiB (Q= 1,T= 1) : 44.218 MB/s [ 10795.4 IOPS]

Random Write 4KiB (Q= 1,T= 1) : 124.402 MB/s [ 30371.6 IOPS]

 

Test : 1024 MiB [S: 0.0% (0.2/931.4 GiB)] (x5) [interval=5 sec]

Date : 2018/02/27 18:46:37

OS : Windows 10 Professional [10.0 Build 16299] (x64)

 

 

Z:\ Samsung 970 pro M.2 (on PCIe adapter)

-----------------------------------------------------------------------

CrystalDiskMark 5.5.0 x64 © 2007-2017 hiyohiyo

Crystal Dew World : http://crystalmark.info/

-----------------------------------------------------------------------

* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]

* KB = 1000 bytes, KiB = 1024 bytes

 

Sequential Read (Q= 32,T= 1) : 3354.888 MB/s

Sequential Write (Q= 32,T= 1) : 2713.706 MB/s

Random Read 4KiB (Q= 32,T= 1) : 329.306 MB/s [ 80397.0 IOPS]

Random Write 4KiB (Q= 32,T= 1) : 240.080 MB/s [ 58613.3 IOPS]

Sequential Read (T= 1) : 2719.699 MB/s

Sequential Write (T= 1) : 2653.193 MB/s

Random Read 4KiB (Q= 1,T= 1) : 49.162 MB/s [ 12002.4 IOPS]

Random Write 4KiB (Q= 1,T= 1) : 123.070 MB/s [ 30046.4 IOPS]

 

Test : 1024 MiB [Z: 0.0% (0.2/953.9 GiB)] (x5) [interval=5 sec]

Date : 2019/03/25 18:38:24

OS : Windows 10 Professional [10.0 Build 17134] (x64)
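Comparing those two runs directly: the 970 pro's headline advantage is in queued sequential reads, but at queue depth 1 random 4K, which is closer to what a game load actually does, the two drives are nearly identical. A quick sanity check on the numbers above:

```python
# Figures copied from the CrystalDiskMark runs above (MB/s).
s860 = {"seq_q32": 553.888, "rnd4k_q1": 44.218}   # 860 evo
s970 = {"seq_q32": 3354.888, "rnd4k_q1": 49.162}  # 970 pro

for key in s860:
    ratio = s970[key] / s860[key]
    print(f"{key}: 970 pro = {ratio:.2f}x the 860 evo")
# -> roughly 6.06x on queued sequential, but only ~1.11x at QD1 random 4K
```

That ~11% QD1 gap lines up with the launch times being so close.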





This is how my motherboard manual describes the M.2 slot where I had my S:\ 860 evo installed

 

--snippage--

 

A couple of things. 1) When you fire up DCS, a connection is made to the ED server (for the new licensing), and it almost always takes 30 seconds to finish, if memory serves. So there is a built-in bottleneck when starting DCS.

 

Second, I moved the "Saved Games" directory when I was testing by using a symbolic link: the Saved Games folder on the SSD was symlinked to the NVMe drive. But again, I saw very little difference when I tested.
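For reference, that trick can be scripted. This is a hypothetical sketch (the paths are examples only; on Windows, creating a symlink requires admin rights or Developer Mode, and the usual native tool is mklink):

```python
# Move a folder to another drive and leave a directory symlink at the old
# path, so the application still finds it at the original location.
import os
import shutil

def relocate(src, dst):
    shutil.move(src, dst)                           # move folder to the fast drive
    os.symlink(dst, src, target_is_directory=True)  # link old path to new location

# Example usage (illustrative paths):
# relocate(r"C:\Users\me\Saved Games\DCS", r"Z:\SavedGames\DCS")
```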


 


I would like to see this tested on a proper board and chipset that has AT LEAST 4 lanes dedicated to the NVMe drive, like AMD does.

 

Testing this on a Z170/270/370/390 is tainted, as those 4 lanes are NOT ONLY for the NVMe but also FOR ALL OTHER PERIPHERALS connected to your rig except the GPU.

 

Those kindergarten boards have more limitations than proper features.


