blkspade

  1. Linus included MSFS performance with his 5800X3D benches. Might have some relevance here.
  2. There was some collaboration between Nvidia and Micron in the development of GDDR6X, and reportedly Nvidia bought out all of it. There probably wasn't going to be much choice anyway other than super expensive HBM2E. Nvidia's design underperforms at 1080p, and slightly less so at 1440p, relative to how strong it ends up at 4K. That memory bandwidth also happens to be beneficial to mining, a market Nvidia is directly shipping cards to.
  3. While it may never matter for DCS, it's likely to become an issue sooner for other games. Going forward, most other game engines are going to be ramping toward the new consoles' 8C/16T as an eventual target. Amid all the bugs, you can see how CDPR attempted to design their engine for Cyberpunk 2077 pretty much at the limit of the previous-gen consoles. One of their bugs was a code path written with the AMD FX CPUs in mind that effectively disabled SMT on Ryzen. The game needs 7 threads on PC for maximum throughput, so AMD parts with 6 cores or fewer were greatly hampered. Last-gen consoles are 8C/8T, with one core reserved for the OS/UI. Now there is a future where cross-platform game engines will literally be targeting a mildly downclocked R7 3700X, on NVMe storage. Oh, and DCS in its current state does actually hit many threads for IO operations; that's how you maximize throughput from an SSD (see the sketch below). We just need them to get the rendering sorted out with Vulkan.
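A minimal sketch of that IO point in Python, not anything from DCS itself: issuing reads from several threads keeps an SSD's command queue full, which is what gets you anywhere near its rated throughput. The "assets" folder and the .dds pattern are hypothetical stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def read_file(path: Path) -> int:
    # Read one file fully; return the number of bytes read.
    return len(path.read_bytes())

# Hypothetical folder of game assets; any pile of files will do.
files = list(Path("assets").glob("*.dds"))

# Eight concurrent readers keep the SSD's queue populated, where a
# single-threaded loop would leave the drive mostly idle between requests.
with ThreadPoolExecutor(max_workers=8) as pool:
    total = sum(pool.map(read_file, files))

print(f"Read {total / 1e6:.1f} MB from {len(files)} files")
```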
  4. Nvidia's focus seems to be specifically 4K and ray tracing. The 3000 series cards are somehow tuned for 4K in a way that has them scaling poorly at lower resolutions compared to AMD's 6800 cards. The limited VRAM potentially comes into play with the 3080 with supersampling in VR. It's one of the reasons why I'm not interested in stepping down to 10GB from my 1080ti.
  5. I've watched his as well. OK, I get that it should be better, on paper. I just feel it should be way better. The problem is that in many actual, properly multithreaded applications, it's just barely pulling ahead when it manages not to lose. Like I said previously, the 5600X is a lot closer to the i7 in those scenarios than the i7 is to the 5800X. It does make more sense given the closer pricing in your region, I suppose. That still doesn't instill much faith for the gaming transition that will start fully utilizing the Zen 2-based consoles with RDNA. Outside of simulators, consoles get first billing, and AMD has positioned itself to start leveraging that advantage. MSFS, though, will likely need to be heavily optimized for the Xbox SX release. Between DirectStorage (and RTX IO) and Smart Access Memory, PCIe 4 is going to get used for sure. They are clearly features designed specifically to provide symmetry with the incoming generation of consoles, and a clear attempt to pick up the consoles' performance optimizations for free. Quad-core was well past its prime even with the 7700K; Intel just didn't have the incentive to move consumers past it, and Hyper-Threading and brute force extended its relevance in gaming. 12 threads will probably carry that 5600X deep into the lifespan of the consoles. When my 4790K at 4.7GHz was on par with the 7700K, I knew I needed at least double the cores to be happy. The 8700K didn't really wow me: GPU transcoding was subpar, Handbrake made the computer unusable, and I hosted a Plex server. The R7 2700 made the most sense to me. Intel seemingly only bothered offering more cores to stay close in productivity workloads. Now their final chip on the platform will only be 8C/16T, which is also when they'll unlock PCIe 4 on Z490, though it may not become relevant before some other reason to upgrade does.
  6. I'm in the US, so prices are a bit different. I'm still confused about where you claim the Intel parts are better. The 5600X manages to outperform even the 10900K in some games, and the whole series is also better in Photoshop than Intel's entire line-up. I moved from a 4790K to an R7 2700 with no issues. I eventually gave the 2700 to my kids after the Z97 board that had been handed down to them with the 4790K died (I initially thought it was the CPU). This is a very in-depth review, and many others mirror its general results.
  7. I'm a bit curious as to what your use case is. The 5600X seems to beat or tie the 10700K in many if not most games, with similar results in many multithreaded workloads. Even where it loses in multithread, it's closer to the 10700K than the i7 is to the 5800X. That's for 100 watts less power (thus heat), and $180 less, with an included cooler. It would absolutely run circles around whatever 6C/12T you currently have. With a realistic need for the additional threads outside of gaming, the 5800X pulls further away. The only thing that makes me want to move from my 3900X is that there is a noticeable hit to DCS in VR in a VM compared to native, more so than in any other VR game, and a non-issue for flat-screen games. With plans to transition away from Windows as my regular desktop OS, it could get its own die for gaming while all my main Linux stuff runs on the other. I want the 5950X almost solely to have the two 8-core CCDs, but could probably realistically get by with the 5900X. Both would raise the lows above the point of concern for VR.
  8. The AIM-7 and most other SARH missiles will HOJ, and don't actually need to be fired in HOJ mode. Even turning ECM on post-launch will indeed allow the missile to continue guiding if the host radar drops the lock for any reason. The game logic does seem to limit them to only tracking the actual target they were launched at, with one exception. That one exception may have been a bug during a specific update, but I've had an AIM-7 take a steep intercept angle above the target, miss, but loop back and start guiding on me. Yes, it was pulling G right back into my radar, but it just missed me. I was able to repeat it. Might still have the Tacview.
  9. Those 16/20GB cards wouldn't have made any sense in any scenario where capacity wasn't the limiting factor. The only reason the 3080 is 10GB instead of 12 is that the additional VRAM ICs would have widened the bus, bumped up the bandwidth, and neutralized the small advantage the 3090 actually has. So if AMD ends up anywhere between really close to and beating the 3080, Nvidia will basically have to put out a 12GB 3080/3090. 12 x 1GB ICs would outperform 10 x 2GB ICs, since each IC adds 32 bits of bus width (see the math below), making a 20GB card really stupid. Personally, I refuse to step down in capacity from my current 11GB, just as much as I won't pay over $800 for a GPU. I barely wanted to fork out $700 for this 1080ti.
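A quick back-of-the-envelope check on that bus math (assuming the 3080's 19 Gbps GDDR6X data rate; each GDDR6X IC exposes a 32-bit interface, so bus width scales with IC count, not capacity):

```python
def gddr6x_bandwidth_gbs(num_ics: int, gbps_per_pin: float = 19.0) -> float:
    # Bus width scales with the number of ICs (32 bits each),
    # regardless of how much capacity each IC holds.
    bus_width_bits = num_ics * 32
    return bus_width_bits * gbps_per_pin / 8

print(gddr6x_bandwidth_gbs(10))  # 10 x 1GB (the 3080): 760.0 GB/s
print(gddr6x_bandwidth_gbs(12))  # 12 x 1GB: 912.0 GB/s
# 10 x 2GB would be 20GB on the same 320-bit bus: still 760 GB/s.
```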
  10. While I completely agree that the 3090 is more than will likely be needed any time soon for gaming loads, I'm put off by the 3080 only having 10GB with regard to VR. DCS in VR on a Rift S does seem to reach into the 10GB region on my 1080ti with the PD increased. MSFS2020 at the highest settings at 4K can use 12GB of VRAM. So a 16GB GPU is really what I would want.
  11. In that regard, it would've made more sense for the 2080ti to be $800 as opposed to $1200+. There was only a $50 jump from the 980ti to the 1080ti, and the 980ti was $50 less than the 780ti. So it's not unreasonable to expect a performance jump for the same price between generations. Nvidia needed to make up for overproducing Pascal, with new consoles looming and no competition. In no way should anyone really consider it sensible to pay 88% more for 35% more performance (run the numbers below), especially considering how quickly these things become superseded.
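Putting numbers to that, using the post's own figures:

```python
price_ratio = 1.88  # ~88% more money, per the post
perf_ratio = 1.35   # ~35% more performance
print(f"{price_ratio / perf_ratio:.2f}x")  # ~1.39x
# You pay roughly 39% more per unit of performance, generation over generation.
```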
  12. The reason for the disparity is that applications can request a given amount of memory prior to actually populating it. So DCS is preallocating the remainder of the available memory to have more immediate access to space it may need, while Task Manager is only showing how much DCS is actually using at any given time (see the sketch below).
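A minimal sketch of that reserve-now, populate-later pattern, using an anonymous mmap in Python (how each number is reported varies by OS and tool, which is exactly where the disparity comes from):

```python
import mmap

GIB = 1024 ** 3

# Reserve 1 GiB of anonymous memory. This shows up immediately in
# "committed"/"allocated" figures, but most OSes don't back it with
# physical pages until the memory is actually written.
buf = mmap.mmap(-1, GIB)

# Touch only the first 64 MiB. A tool reporting actual usage
# (resident pages) now shows ~64 MiB, while the reservation still
# reads as a full 1 GiB -- same process, two very different numbers.
buf[:64 * 1024 ** 2] = bytes(64 * 1024 ** 2)
```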
  13. That would be a loaded metric. Most will have just bought the full FC3 package, and the rest are biased against the FC3 F-15 for not being full fidelity.
  14. Rarely after the original introduction, in my experience on our server and a couple of others. They bring up the Tomcat to chuck Phoenixes, then might do A2G once the entire opposition has disconnected (because Air Superiority), or they just do carrier traps.
  15. Yet they are making a new A-10C, for a total of 3 A-10s. We went from slow bomb truck, to faster bomb trucks, to a rehashed slow bomb truck. I've been playing this sim since LOMAC, but didn't get into MP until FC2, after a really long absence from the sim. The SP experience has been way too subpar to really be their bread and butter. Between the lack of content, no dynamic campaign, and poor AI (ground and air), they really do need to be focusing on MP. Everything about a combat flight sim thrives on MP. It's not air-Greyhound, where actual people wouldn't be incredibly different from drone traffic. I only buy modules in the context of how I can enjoy them with others. I only cared about the Hornet as a Western SEAD platform, and for the boat. It's nice to have a fully modeled radar, but it doesn't do Eagle things. You can't extend and reset in the Hornet, and you can be killed by a 15-20nm rear-aspect shot. The Viper makes up for that with better performance, but way less fuel and fewer weapons. Strike Eagles are great, but they still get sent up with Charlie Eagles, because bomb truck. I'm otherwise on the fence when it comes to Razbam. MP needs real fighters for CAP, with all the bells and whistles. An air superiority platform from a company that focuses more on SP would only serve to highlight deficiencies in the air AI, I suppose.