
blkspade

Members
  • Posts: 1224
  • Joined

  • Last visited

1 Follower


  1. I suppose some people have been flying so long that they can't acknowledge that some others are new people unfamiliar with "the old ways". You would simply request a Bogey Dope from the AWACS while tuned to their radio frequency. The AWACS will respond with a BRAA call for the closest enemy contact it can see relative to you. If the bearing matches what you see on the TEWS, respond accordingly. Ideally other people will be on comms attempting to alert teammates to close threats. Depending on your mission and the estimated range and type of the contact on your 6, you have the option of ignoring it or attempting to outrun it. Just because you got the ping doesn't mean they even see you or intend to engage you specifically. As a long-time F-15C main, I'm largely unaffected by the lack of datalink. But I also fly the E as if it's the C, so I'm always flying CAP, and any ping I suspect is enemy, I'm turning toward. That said, if you are attempting a strike in MP where opposition is likely, have a wingman, or better, an escort. Being on comms gives you the option of having many others who can check your 6. (A rough sketch of the BRAA bearing math follows this list.)
  2. See, I don't get why you have to do that when I make a response: just say "it's useless garbage" and skew what it is that I'm actually saying. I'm making references and providing context. I never said RAID is never used by individuals. Hell, I run a TrueNAS array at home that is mostly only ever accessed by me. Way back before SSDs existed, I was one of the guys running RAID 0 to get faster desktop performance, eschewing the redundancy: lose one drive, lose the entire resource. I almost bought a couple of VelociRaptors. Plenty of us computer geeks likely did, but it was somewhat atypical on the whole. When I said "(typically) shared", that would indicate I don't mean "always". Just like I was never claiming to know more than the inventor. I was saying that people at that level don't always have consistent messaging on a topic, because the relevance and context switch depending on who or what group they may be talking to. Now, I would like to use a separate example from a different Intel employee, just to point out how something can be miscommunicated but not wrong due to context. I'll refrain though, as I see it's not worth the effort.
  3. I get it now: it's context that you really aren't grasping. You misinterpreted my point about RAID, but it was specifically in relation to your previous statements. In a shared resource, contention would be inevitable, but you'd ideally have built in the headroom to mitigate it. Obviously demand could grow beyond the headroom. A bigger, faster array gives you more headroom, but an array, while potentially giant, is still a (typically) shared resource that could eventually reach a point of contention. It's many drives, but the data is spanned across them, so they are effectively doing the same work simultaneously; the load is just distributed across them all. Each drive isn't handling a separate dedicated task, application, or user (a toy striping model after this list illustrates that). RAID was a poor example for you to use in that context. I hope that clarifies it. Which goes back to my point that a single decent NVMe drive has so much headroom built in that most casual single-user computer use isn't going to cause the contention that would genuinely warrant expansion. Actually taxing just one shifts the point of contention to the CPU. To go back to a PS5 example (for context): it relies on heavy compression, so it has dedicated silicon separate from the CPU to handle decompression, because its little Zen 2 based 8C/16T CPU isn't up to that task in conjunction with other system responsibilities and running the game. That's according to Mark Cerny, lead system architect of the PS5. If you add an NVMe drive, it has to be Gen 4 and capable of 5 GB/s. As you know, those data center NVMe arrays require lots of cores/threads just to shuttle the data around; the work done on or with that data is commonly on a completely separate set of servers/CPUs. Most home/office computers are 8 cores or fewer, and not even the latest gen, so their CPUs are doing the shuttling and the actual work at the same time. A lot of them are still too slow to move or process data at speeds above what SATA offers. Any respectable gaming PC should manage more than SATA, but only recent high-end ones can push more than what a Gen 3 drive offers. A Steam game file verification will use up to 8 of my 32 threads to only read the files at up to 630 MB/s, on a drive that benches 7000; that is obviously a built-in software limitation. At some point in the last 24 hours, HWiNFO on my PC recorded reads and writes at 4.3/5.4 GB/s respectively, but max CPU utilization of 70%. The disk throughput is likely from one of the two games I played in that period, and most don't push anywhere near that. The write I could probably attribute to a 6-hour MP DCS track file. The game sessions alone obviously aren't the source of 22 threads' worth of utilization, though, and only because I wasn't playing Star Citizen. So that's most likely tied to the high-write source too. My computer is overbuilt, but isn't just for gaming; that's just the most intensive use it had that day. There are still computers sold within the last 5 years that hit 100% CPU utilization just doing an Internet speed test. I won't say that most are that bad, but cheap computers are very ubiquitous.
  4. I was making an effort to be concise but thorough. I wanted to give you the benefit of the doubt about happening to misinterpret my somewhat abbreviated points, as that wasn't exactly what I was saying about RAID. However, your (not so) subtle attempt to just attack me suggests you're doing it on purpose. Amber Huffman wasn't exactly the sole entity developing NVMe while at Intel; it was a consortium of companies that had their hands in co-developing the spec. I don't think it's a secret that Intel's most important segment is data center. https://nvmexpress.org/why-nvm-express-in-3-minutes/ , https://unpacked.network/shows/storage-unpacked/206-nvme-2-point-0/ . Client is part of the conversation for sure, every segment is, but the clues do point to data center being the primary driver. I was really having trouble grasping how you could interpret that statement at that time as an absolute, when data center is so clearly favored. Saying "the inventor said it" seems a bit of a cop-out when those words run counter to reality. Taking her words along with the actual execution, one could reasonably reconcile the two as her looking forward to the future; she is an inventor, after all. At that moment in 2012 there were no client systems that would be able to take advantage of NVMe, and there wouldn't be for many years to come. She would obviously have some ideas as to how consumer-class tech would continue to evolve, being at the forefront of it. I could see how it could maybe be open to interpretation if you can ignore that it was used in enterprise first, along with its cost. It's 10 years after the first NVMe devices shipped, and we're finally getting client software solutions starting to tap into their true potential, which still requires the latest top-end client CPUs, the ones that move the fewest units at laughably lower margins than server chips. Fine, I'll concede that this must be what it was all about; data center was a total afterthought. You get your Gen 5 NVMe drive yet? I haven't, but I think I'll buy 2. Maybe throw in some Optanes for good measure. I'm gonna have all the frames. FPS=Yes!
  5. My argument is that it clearly played out differently than what was claimed, as the execution was there to be witnessed. It obviously would benefit client eventually, but it continues to be far better than most can make use of. The first NVMe drives were announced specifically for enterprise; that was where they were most needed and most applicable, and client CPUs were in no way ready to handle them at the time. You are quoting a person from Intel (somewhat out of context) and taking that at face value when the evidence of what they did with it proves otherwise. It's literally actions speaking louder than words. Intel has said and done so many disingenuous things that anyone paying attention to tech long enough would pick up on it. It's not wise to ignore the actual actions of major corporations in general; that goes beyond our little debate. I'm not using technical terms out of nowhere, as it's about providing context. How could one with an understanding of tech, knowing what benefits are provided, come to the conclusion that it is in reality a consumer-oriented product? That "technical jargon" defines what benefit said tech provides in relation to exactly where it could be most immediately applied. Most bleeding-edge high-performance tech is in fact for enterprise first and filters down to consumers. How can her statement be true when no consumer-class systems existed that needed, or could even handle, a single device with performance equivalent to a small array of HDDs? Not for like another 6 years after enterprise had been using them. Are you really not seeing the logical failing there? Even today, the fastest consumer CPU (7950X) tops out at ~2.9 GB/s at general file decompression (a timing sketch follows this list). I use that particular example because most client needs for grabbing data from storage aren't just dumping it to memory or a wire, but actually doing some level of work with it, and not everyone is sitting on a 16-core CPU. The first version of DirectStorage even still had the CPU doing the texture decompression. Just to go back around to NVMe storage not really being the bottleneck in consumer applications: DCS is still handling IO with its own code, not the DS API.
  6. Keep up with what, exactly? If we remove gaming, as the bulk of the PC space isn't that, most end users don't do things that actually benefit from speeds faster than SATA. The random QD1, single-thread performance of most NVMe SSDs is still below the theoretical max of SATA; queue depth 32 on one thread gets slightly above SATA capability (+150-250 MB/s). (The arithmetic is sketched below this list.) That is representative of how most client software behaves, and has been for as long as SSDs have been an option. For gaming, DirectStorage comparisons exist, and even they barely show a difference between NVMe and SATA SSDs. So what she is describing (in 2012) could only be for extremely demanding situations or edge cases, and not the market as a whole. Not everything engineers get excited about is as big as they make it sound at the ground level, because it takes forever for software to catch up. You didn't grasp the context of her statement, and ignored that everyone else speaking was an enterprise pro. SATA wasn't enough for the potential of NAND flash. See, you clearly looked up this clip in an effort to prove me wrong; I actually understand what's being said. They are talking about high parallelism in 2012, when the highest-end client CPUs were quad cores. NVMe wasn't even really an option in client computers until 2015 (which still topped out at 4C/8T CPUs for consumers). There were some M.2 PCIe based SSDs that predate the NVMe protocol; I had a Z97 board from 2014 with an M.2 slot that wouldn't even know what to do with NVMe. See, you'd want to grow interest in things like this in the consumer space because it leads to mass production and cheaper prices. So many low-to-mid-range end-user OEM computers between the inception of NVMe and maybe 2 years ago still had HDDs as standard. You upgrade any one of those with fewer than 12 threads to an NVMe SSD, and on a clean install the CPU is pegged at 80-100% just setting it up and running Windows updates. So that argument doesn't add up for anyone who's been hands-on with thousands of client computers. NVMe drive costs only really started coming down when the real lowest common denominator needed them: consoles. The PS5 having a user-accessible M.2 slot has been the best thing for PC users with high-end desires. So late 2020 created the use case for the drives to start becoming cost-effective enough to be considered in lower-end computers, which will have the knock-on effect of more client software being written to properly address them. Yeah, I guess your 2012 clip makes all the difference.
  7. U.2 is NVMe on a different connector, currently used in servers. The increase in IOPS is what promotes multi-client throughput, which is also something you get from RAID. The benefits of these technologies flow from one space to another. It shouldn't need to be explicitly stated for you to gather that.
  8. A RAID volume is still a single resource, so that doesn't actually promote your point. It's literally centralizing storage intended to be accessed by many clients simultaneously, along with potentially providing redundancy. All those drives are operating as one, not being dedicated to different tasks. SSDs have boosted throughput well beyond most single-user needs; a single Gen 3 NVMe has the throughput of roughly a 12-16 HDD array, but with near-instantaneous access time (the arithmetic is sketched below this list). The only tangible thing you're getting with the $50 is additional storage. The DirectStorage console point still does nothing for you, as those are still single-drive systems. You are conflating the idea of contention with optimization. A single NVMe has untapped potential, which is what DirectStorage is designed to unlock. Every NVMe is marketed on its sequential performance, but that's practically a lie, because neither typical data nor typical software has access patterns that work that way. The random access performance is the real performance, because most software is written with HDDs in mind, as they're the lowest common denominator. Microsoft is changing that with DirectStorage for games, and by making Windows certification in the future require an SSD for basically all other software.
  9. DirectStorage doesn't fix a supposed contention problem, it fixes a utilization problem, which is what I meant about SSDs being underutilized. Most software, and especially games, has been relying on a legacy method of disk access that just isn't optimized for the increasing potential of SSDs. Devs have had to hand-code ways to improve IO, but DS provides an API that removes the need to do so (a serial-versus-parallel read sketch follows this list). NVMe was mainly a server-targeted technology, where hundreds or thousands of people could be making requests of the system. No amount of mostly idle single-user usage is putting so much load on one that it makes the difference in the gaming experience. The only time there would ever be contention from multiple processes is when they're moving large chunks of data into memory. Most of what's in the background on any single-user system is already working from memory, though it will hit a couple of CPU threads from time to time. SSDs in general are already so much better than HDDs because there is no seek time, so the minute things hitting storage complete almost instantly. I've only worked in IT/systems administration for 20 years. I've upgraded a number of low-core-count devices from HDDs, just to have the CPU utilization go through the roof as Windows tries to maximize SSD throughput.
  10. My point is that anything putting enough of a real load on your NVMe drive to be noticed is pegging other resources beyond the drive itself. NVMe SSDs are woefully underutilized in most consumer applications, and DCS itself just isn't so IO-demanding (on an SSD) that a random log file being written disrupts the experience. That's just not where the bottleneck is for DCS. There is always a bottleneck to some degree somewhere; you just mitigate those that would have the worst effects. I was talking about egregious paging, where you're obviously operating more from disk than RAM even when you aren't running anything intensive (a quick swap check is sketched below this list). If you load a hungry enough app (DCS), Windows almost always wants to shift things to swap. 32 GB of RAM is barely enough for some maps and missions; Syria still has a memory leak. Having 64 GB of RAM ends up being a big improvement for DCS, but partly because the software is a bit broken. The single-threaded exe can use 8 threads if you pan the external view really fast, because it will use more CPU to address all the IO requests for the terrain from the SSD, but that level of use is not what's happening in regular gameplay. Outside of VR you can get north of 120 FPS in a flight sim, and it's not getting meaningfully better just because you put it on its own drive.
  11. If you've got a decent NVMe drive, the IOPS it's capable of should make it relatively irrelevant that games are on the same drive as the OS. Being capable of north of 500K input/output operations per second means these drives are meant to do many things at once. Provided you have the RAM to prevent endless paging, the absolute busiest your OS drive will be is during malware scans or updates; outside of those, background activity is going to have an imperceptible effect on game performance. Those things actually hit the CPU harder than anything. Getting max NVMe performance actually demands that many CPU threads be engaged, and those tasks will use 6-8 threads alone. The IO of DCS can also use just as many threads.
  12. 36 FPS is unacceptable to me. I felt the drop from 90/45 to 80/40 going from the Rift to the Rift S.
  13. The 4070 Ti isn't even in the same tier as the 7900 XTX. If they're close to the same price where you are, the XTX makes way more sense.
  14. It's mostly just people being more comfortable with the brand, and probably some mild elitism. I use the 7900 XTX myself, in VR. Besides VR, I have an ultrawide 1440p screen, and it gets more frames than I could possibly ever need in DCS on a monitor. I never really fly outside of VR, though. It's the better product for the money than the 4080, IMO, if you're not completely enamored with RT in other games anyway.
  15. It should be noted that the suggested power supply requirement includes a lot of buffer to accommodate unknown variables. It's more useful to look at the power consumption under load of what you have and what you intend to add. The RX 6400 only uses 53 W at load, and a PCIe slot is required to provide up to 75 W. The i5-9600 non-K will be closer to its 65 W TDP in such a system than a K part, especially in gaming loads; the K model is 120 W at full (synthetic) load, for example. An SSD won't be more than 7-14 W. You'd likely be looking at around 70-80% of power capacity at worst while gaming (the arithmetic is below this list). It's doable.
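For post 1, here is a minimal sketch of the bearing check against the TEWS. The helper function and all the numbers are hypothetical examples, not from any real mission; a BRAA call gives Bearing, Range, Altitude, Aspect from your own position.

```python
# Hypothetical sketch: does an AWACS BRAA call line up with a TEWS spike?

def relative_bearing(own_heading_deg: float, braa_bearing_deg: float) -> float:
    """Degrees off the nose, in [-180, 180); negative means left."""
    return (braa_bearing_deg - own_heading_deg + 180.0) % 360.0 - 180.0

# Say the AWACS answers a Bogey Dope with "BRAA 270 for 35, twenty thousand, hot"
own_heading = 90.0                               # made-up own heading
off_nose = relative_bearing(own_heading, 270.0)  # 270 is the called bearing
print(f"Contact {abs(off_nose):.0f} deg off the nose")   # 180 deg: on your 6
# If the TEWS shows a spike at the same clock position, the call checks out.
```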
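The striping point in post 3 (one logical volume spreading the same work across every member drive) can be shown with a toy model. The stripe size and drive count here are illustrative assumptions, not a claim about any particular array.

```python
# Toy model of striping: one logical request spans every member drive,
# so the array is one shared resource, not drives doing separate jobs.

STRIPE = 128 * 1024  # 128 KiB stripe unit, a common default

def drives_touched(offset: int, length: int, n_drives: int) -> set[int]:
    """Member drives that a read of [offset, offset + length) lands on."""
    first = offset // STRIPE
    last = (offset + length - 1) // STRIPE
    return {s % n_drives for s in range(first, last + 1)}

# A single 1 MiB read on a 4-drive stripe touches all four drives at once:
print(drives_touched(0, 1 << 20, 4))   # {0, 1, 2, 3}
```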
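One way to see the single-core decompression ceiling mentioned in post 5 is to time a decompressor on an in-memory buffer. The buffer contents and loop count below are arbitrary, and results vary hugely by CPU and codec; this only shows how the measurement could be taken, not the ~2.9 GB/s figure itself.

```python
# Time zlib decompression on one core to estimate a CPU-side ceiling.
import os, time, zlib

raw = os.urandom(64) * (4 * 1024 * 1024 // 64)   # ~4 MiB of repetitive data
blob = zlib.compress(raw, level=6)

loops = 50
t0 = time.perf_counter()
for _ in range(loops):
    zlib.decompress(blob)          # single-threaded, like most client paths
secs = time.perf_counter() - t0
print(f"~{loops * len(raw) / 1e9 / secs:.2f} GB/s decompressed on one core")
```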
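The QD1-versus-QD32 point in post 6 is just arithmetic: IOPS times block size. The IOPS figures below are assumed ballpark consumer-NVMe numbers, not measurements of any specific drive.

```python
# Convert 4 KiB random IOPS to MB/s at two queue depths.
BLOCK = 4096  # bytes per random IO

def to_mbps(iops: int) -> float:
    return iops * BLOCK / 1e6

qd1, qd32 = 15_000, 175_000   # assumed drive figures
print(f"QD1:  ~{to_mbps(qd1):.0f} MB/s")    # ~61 MB/s, far below SATA's ~550
print(f"QD32: ~{to_mbps(qd32):.0f} MB/s")   # ~717 MB/s, modestly past SATA
```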
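Post 8's "one Gen 3 NVMe is roughly a 12-16 HDD array" works out as follows, using rough nominal throughput numbers rather than measurements:

```python
# Sequential-throughput equivalence, nominal values only.
nvme_seq = 3500   # MB/s, typical Gen 3 x4 sequential read
hdd_seq = 220     # MB/s, a decent 7200 RPM drive

print(f"~{nvme_seq / hdd_seq:.0f} HDDs to match sequential throughput")  # ~16
# Access time is the bigger gap: roughly 0.05 ms for NAND vs 5-10 ms per
# HDD seek, and striping HDDs does nothing to shrink seek time.
```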
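For post 9, a sketch of the difference between the legacy pattern (one blocking read at a time) and keeping many reads in flight, which is what actually utilizes an NVMe drive. This is Unix-only (it uses os.pread), PATH is a placeholder, and a real benchmark would bypass the OS page cache (e.g. with O_DIRECT), which this simple version does not.

```python
# Serial (QD1-like) vs threaded (QD16-like) reads of the same file.
import concurrent.futures, os, time

PATH = "testfile.bin"     # placeholder: any existing file >= 256 MiB
CHUNK, N = 1 << 20, 256   # 256 reads of 1 MiB each

def read_at(fd: int, off: int) -> int:
    return len(os.pread(fd, CHUNK, off))

fd = os.open(PATH, os.O_RDONLY)
offsets = [i * CHUNK for i in range(N)]

t0 = time.perf_counter()
for off in offsets:                 # serial: effectively queue depth 1
    read_at(fd, off)
serial = time.perf_counter() - t0

t0 = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(16) as pool:   # roughly QD16
    list(pool.map(lambda off: read_at(fd, off), offsets))
threaded = time.perf_counter() - t0

os.close(fd)
print(f"serial: {serial:.2f} s   16 threads: {threaded:.2f} s")
```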
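The swap check for post 10's paging point could look like this, using the third-party psutil package (pip install psutil). The thresholds are arbitrary guesses, not a diagnostic rule.

```python
# Snapshot RAM and swap pressure to spot egregious paging.
import psutil

vm = psutil.virtual_memory()
sw = psutil.swap_memory()
print(f"RAM used: {vm.percent:.0f}%   swap used: {sw.percent:.0f}%")
if vm.percent > 90 and sw.percent > 25:
    print("Likely paging under load; more RAM helps more than a faster drive")
```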
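And the PSU headroom estimate from post 15 as plain arithmetic. The component draws are the load figures cited in the post; the supply size and the "rest of system" number are assumptions for illustration.

```python
# Worst-case gaming draw as a fraction of an assumed supply.
psu = 250    # W, assumed installed supply
gpu = 53     # W, RX 6400 at load
cpu = 65     # W, i5-9600 (non-K) near TDP in gaming loads
ssd = 14     # W, SSD worst case
rest = 55    # W, assumed: board, RAM, fans, conversion losses

total = gpu + cpu + ssd + rest
print(f"{total} W -> {100 * total / psu:.0f}% of a {psu} W unit while gaming")
# 187 W on a 250 W unit is ~75%, in line with the 70-80% worst case above.
```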