Everything posted by BitMaster

  1. There won't be 100% compatibility, that is the only thing that is 100% certain. With the right HW you can get pretty damn close to Fire&Forget when it comes to the basic stuff: CPU, chipset, audio, LAN, WLAN, BT. The GPU is usually not a big issue, but sometimes you have to fiddle with the version of the Nvidia driver you are using to get it working ( I am really happy to have an iGPU just for this reason ). For gaming devices, it all depends on whether the sources are open and whether the community can build a driver or not.
  2. The trend towards 64GB in DCS is slowly arriving with Syria and the newer modules. If I were you, and if you can afford it, go 64GB right away. It may be very tricky or impossible to add another 32GB a year later. I would personally prefer AMD these days, but the above-mentioned HW is fine as well. At least we have choices these days.
  3. I have no idea what language he's talking in that video.
  4. As far as I could read up, that version of Windows never made it to release. I am honestly not a big fan of messing too much with a Windows install. The more functions and apps you have installed, the more dependencies one has, and then it takes only little things to break big things. I would consider trying that without question in one of my VM Win10s. Provide me a link and I will happily spend some time with it. It's actually not about OS stability foremost, that is only a nice extra; the big thing is graphics power combined with very high IPC and multi-core performance as well, all that with much less wattage/heat and very likely for a fraction of the cost. I mean, right now you can get a very capable MBP 16" with some real extras on top for the price of a 3090. With a sober mind and hard-earned cash you will think twice if alternatives arise. I am confident DCS won't move to macOS-ARM before I die, that's not my topic. It is more or less the whole home computing market shifting towards ARM in general, IF Apple can deliver as expected with the M1X; the M1 is already a big hitter even though it's a first attempt. For 90% of the people I know ( and whose little PC issues I fix ) a MacBook Air M1 would be a blessing in every aspect. It roughly pulls equal with an R5 5600X at a fraction of the energy and cooling needed. Now imagine Apple manages to really succeed with the M1X across a variety of their computers and mobile devices as well. It's the same damn die all over; they won't need 5 different dies and sockets etc., they need just ONE. Yield will be much higher, cost significantly lower, and if the rumors are right, tack a few of them together and you get a 40-core CPU. That will hunt in Threadripper territory, a la Mac Pro or high-end iMac class devices, which usually have high-core-count workstation CPUs. If Apple manages to use one and the same core across most of their devices it will be a winning strategy, and the others, whether they like it or not, will have to do "something" about it.
I think the wrong answer would be to follow the x86 road for much longer. imho, the future is ARM: multiple cores split into high-performance vs. high-efficiency ones ( 8+2, 16+4, 32+8 etc. ) combined with a multi-core iGPU, all tied to the same nearby DRAM. Things would need to change; there would be no more empty mainboards to buy, they would likely come as Board+CPU/GPU+RAM is what I could imagine. As it already is nowadays with highly energy-efficient devices, sockets and pins are a thing of the past in this regard. With the green agenda behind it, Net-Zero-America or the equivalent movement in Europe, and tightened regulations, all that points in the same direction. In contrast, Intel's latest Alder-Lake-S insanity with ~230W TDP for an 8-core feels like "Processic Park II, The Revival of the Dead", a real hot movie if you ask me. Whatever comes out of it, it will affect the CPU landscape significantly and thus, through the backdoor, force software companies to again take care of ARM compatibility. It's nothing new, it's already done for billions of ARM-based mobile devices, likely outnumbering x86 devices already.
  5. Don't beat me now, LoL. Looking at the assumed numbers for the coming successor of the M1 chip, the M1X, it looks like a paradigm shift is happening silently. When that new MBP with a 40-core CPU / 32-core GPU plays at 4k+, incl. RT, at a MUCH lower TDP and also a MUCH MUCH lower price, I could see that happening. Actually, I was planning for an AMD Ryzen and maybe a MacBook Air M1; hey, it looks like I will stick to my rig and wait to see what the new MBPro offers. When it comes to quality, I can only say my mid-2012 MBP Retina still runs great, never had an issue in almost a decade; never ever did it let me down or fail to wake from sleep, etc. etc. etc. ... I could list a ton of stuff that my Windows machines sometimes do and that the Mac never did. That alone makes me want Apple and pick Windows only if there is no other way. In the time I have owned the MBP I went through more than a handful of gaming rigs: 2700k, 6700k, 7700k, 8700k, 980GTX, 1080ti, etc., and each one of those gave me more grey hairs. I am really looking forward to this. It can't harm to stir the gaming hardware market up a bit. It will take time to adapt to ARM, but I think the age of x86 is coming to an end within the next 5 years.
  6. I cannot deny the truth above, LoL. So far it runs ok in a virtual machine with nothing to do. On a workhorse machine it may look totally different.
  7. Depending on your CPU cooler you can get somewhat better frametimes if you overclock the CPU towards 5GHz, which many 8700k's are capable of. As a guideline, the upper voltage limit should be around 1.35V for the CPU, and when you do stress tests, keep an eye on the watts ( HWinfo ); you can go well north of 200W and the heat will spike like mad. A good start could be MCE, MultiCoreEnhancement, which will likely put all cores at a static 4.7GHz at around 1.35V. You can use that ( activate it in the BIOS ) to get a feeling for how your CPU clocks. It's a very time-intensive thing if you are new to OC. Take your time, watch YT vids and don't hesitate to ask.
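To see why the watts spike so hard near 5GHz: dynamic CPU power scales roughly with frequency times voltage squared, so pushing both at once compounds quickly. A minimal sketch; the baseline numbers ( 95W at 4.3GHz / 1.20V for an all-core 8700k load ) are assumed purely for illustration, and real AVX loads will draw more still:

```python
def scaled_power(p0_watts, f0_ghz, v0_volts, f_ghz, v_volts):
    """Rough dynamic-power estimate via P ~ C * f * V^2,
    treating switched capacitance C as constant."""
    return p0_watts * (f_ghz / f0_ghz) * (v_volts / v0_volts) ** 2

# Assumed baseline: ~95 W at 4.3 GHz / 1.20 V. Pushing to 5.0 GHz / 1.35 V
# raises estimated draw by roughly half, before AVX or leakage effects.
print(round(scaled_power(95, 4.3, 1.20, 5.0, 1.35), 1))
```

It is only a first-order model ( leakage and temperature effects come on top ), but it explains why a modest clock bump at higher volts can blow well past the stock power budget.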
  8. I wouldn't buy new RAM either unless really forced to. Enjoy
  9. Exactly ! I will not pay 2-3k€ to get a great GPU. Actually, I just got a call today from one of my nephews asking if he could upgrade his GPU etc. ... Guess what I told him: stick with your GPU, and if it all goes south, get a new console. This situation ruins a lot; the damage just doesn't show yet, but it will. I am pretty sure there will be a measurable shift towards consoles and away from PC if prices don't come down for everyday people with normal budgets and family responsibilities. I just can't pay 3k€ and tell my kids it's gonna be noodles and ketchup for the next 12 weeks.
  10. If you have the KEY it should work, but many PCs I serviced didn't have a key sticker, or didn't have it anymore. It will work and activate as long as you keep the same HW, as MS ties that HW checksum to the KEY your Win10 uses; you don't need any MS account for that. Only when you change HW does the hassle start... Things might work differently if you live in a different country with differing laws etc. I can only speak about Germany, and partly about US-owned machines operated in Germany ( like a NATO soldier's private PC, which would actually fall under the SOFA agreement iirc... now it gets complicated... and I am a PC guy, not a lawyer ).
  11. Good move with the 5600X ! I don't think it will disappoint you. It runs circles around my enthusiastically cooled 5GHz 8700k, even without PBO2 engaged and with the stock AMD cooler. It should give you a good boost forward. For the RAM: if you can run it stable at 3200/CL16 without any manual OC, you can try to up the volts to around 1.385-1.40V and lower the CL to 14. It's trial and error and may not be worth it, but it's a good way to spend dozens of hours, and maybe you can get it to 3200-14-14-14-34-1T @ 1.38-1.40V. That's what I would do if it runs XMP/DOCP just fine. If it doesn't, you have to find the correct settings manually or buy new RAM. I am also keeping my RAM, 3600-16-16-16-36-2T 32GB/4x8, for my planned 5900X. Only if it doesn't run properly will I buy new RAM... and if I have to buy, I will go 64GB, likely 2x32GB if I can get those with the timings I want. RAM can be really tricky if it won't work out of the box as intended.
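For what it's worth, the appeal of CL14 at the same 3200MT/s is easy to put in numbers: true CAS latency in nanoseconds is the memory clock period times the CAS cycle count, and DDR transfers twice per clock. A quick sketch:

```python
def cas_latency_ns(transfer_rate_mts, cas_cycles):
    """True CAS latency in nanoseconds.
    DDR transfers twice per clock, so clock (MHz) = MT/s / 2."""
    clock_mhz = transfer_rate_mts / 2
    period_ns = 1000 / clock_mhz
    return cas_cycles * period_ns

print(cas_latency_ns(3200, 16))  # → 10.0 ns, the XMP profile
print(cas_latency_ns(3200, 14))  # → 8.75 ns, the tightened timings
```

So dropping from CL16 to CL14 at 3200 is roughly a 12% cut in first-word latency; whether DCS notices is another question, hence the trial and error.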
  12. IIRC the 5800x runs a bit hotter than the 5900x despite fewer cores. If you want to exploit PBO2 you do need a serious cooler. I am actually aiming for the 5900X myself, just waiting for the "S" boards to arrive, I really want to avoid that Southbridge Fan.
  13. FYI: For a DCS simmer or PC enthusiast in general it may pay off to not only activate Win10 online ( which you must do, obviously ) but also to register that license to your MS account, for a very simple reason: if you do NOT do that, you can easily reinstall Windows 10 on the same rig and it will activate again ( again only online, which is actually not my topic here ), BUT if you change the MB, CPU or many other components and your Win10 decides to deactivate itself, you are locked out of your license. Only if you have registered that license to your account can you tell MS "I have changed my hardware..." with a button, log in and choose the previous PC license you want to use to activate again. Saves real money and headaches. You don't have to keep using that online account if you don't like it afterwards; I don't use it either. Create a new admin account and wipe the online account if you wish, but secure your license across HW changes. I did activate TPM 2.0 on my 8700k through the Asus BIOS and it worked; I can also use that function inside VMware now and have 2 Win11 installs so far, Beta and Dev editions, in VMware. Mind you, for VMware I upgraded two Win10 Pro installs, and that did not need the TPM function; I enabled it a day after I installed Win11. At least for now and in VMware you don't need TPM enabled. It might be needed for a from-scratch install or in later builds, I simply don't know. What I do know is that the 8700k has a built-in TPM 2.0 function via firmware.
  14. I can answer your last question: many use MSI Afterburner, which you can download from Guru3d.com. It has a configurable OSD via the included RivaTuner Statistics Server, RTSS. You can, if needed, also route your HWinfo values into that OSD and literally show each and every value HWinfo can read on your OSD inside DCS. It has many features, the OSD is only one part of it; you can tune and overclock/undervolt your GPU with it too... it's worth spending some time with it to learn how it works, not only for DCS. The first question is really hard to answer: the "standard" benchmark... ehhh... there is the problem. With so many updates in DCS, coupled with new GPU drivers and Windows builds, it is almost impossible to create a benchmark that has a half-life longer than 14 days tbh. Some have done really nice graphs with lots of effort, but that was before 2.7, before 2.x etc., so they have become more or less useless for present setups & versions. The best benchmark imho is your own judgement. Does it stutter ? No = good. Does it have enough fps to satisfy me ? Yes = good. If you are on the other side of those 2 answers then you may need to tinker with the LODs, OC etc., and if all that doesn't bring relief you may need to upgrade hardware.
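If you do want something more objective than gut feeling, RTSS can log frametimes, and from those you can compute average fps and 1%-low fps yourself ( the 1% lows are what stutter feels like ). A small sketch; the function name and the sample numbers are mine, purely illustrative:

```python
def frametime_stats(frametimes_ms):
    """Average FPS and 1%-low FPS from a list of per-frame times in ms.
    1%-low = FPS computed over the slowest 1% of frames."""
    n = len(frametimes_ms)
    avg_fps = 1000 * n / sum(frametimes_ms)
    worst = sorted(frametimes_ms, reverse=True)   # slowest frames first
    slice_len = max(1, n // 100)                  # the worst 1% of frames
    low_fps = 1000 * slice_len / sum(worst[:slice_len])
    return round(avg_fps, 1), round(low_fps, 1)

# 99 smooth 10 ms frames plus one 50 ms hitch: the average looks fine,
# the 1%-low exposes the stutter.
print(frametime_stats([10.0] * 99 + [50.0]))
```

A run that averages 96 fps but has 20 fps lows will feel worse than a flat 60, which is exactly why "does it stutter?" is the right first question.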
  15. Don't panic yet ! It's beta time and beta conditions too ( TPM 2.0 only for now in the beta; the release will work with 1.2 as well ), and most boards do have at least TPM 1.2 in firmware, so no need to add a TPM module iirc. If they really were to exclude the R5-1600X, 7700k etc. from Win11... well, then MS really screwed up this time. Acceptance would be a lot lower if that is the case upon release. No, I cannot believe they will be that rough about it.
  16. RAID-5... I never liked that specific type. It's not bad, but imho if you can go RAID-6 then do that. It's not rare for multiple drives in an array to fail within a short time of each other, and long rebuild times widen that window; global hot spare(s) help too. RAID-6 for critical stuff, RAID-1 or 10 for the OS... and RAID-0 for pure speed, gaming etc., a la "F... data security" LoL
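For anyone weighing those levels, the capacity vs. redundancy trade-off is simple arithmetic. A rough sketch of the standard levels; the drive counts and sizes below are just example numbers, and for RAID-10 I count only the guaranteed failure tolerance ( it can survive more if you are lucky about which mirrors die ):

```python
def usable_capacity(level, drives, size_tb):
    """Usable capacity (TB) and guaranteed tolerated drive failures
    for common RAID levels, assuming equal-size drives."""
    if level == 0:
        return drives * size_tb, 0            # striping only, no redundancy
    if level == 1:
        return size_tb, drives - 1            # all drives mirror one
    if level == 5:
        return (drives - 1) * size_tb, 1      # one drive's worth of parity
    if level == 6:
        return (drives - 2) * size_tb, 2      # two drives' worth of parity
    if level == 10:
        return drives // 2 * size_tb, 1       # mirrored stripes, 1 guaranteed
    raise ValueError(f"unsupported RAID level: {level}")

# Six 4 TB drives: RAID-6 costs one more drive of capacity than RAID-5
# but survives a second failure during the rebuild window.
print(usable_capacity(5, 6, 4))
print(usable_capacity(6, 6, 4))
```

That second tolerated failure is the whole argument for RAID-6 on big arrays, since rebuilds on large drives can take a long time.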
  17. This is partially true. Smaller drives, usually the smallest one or two models of a series, have too few NAND dies to put one on each channel of the controller, so much of the performance cannot be leveraged; e.g. only 4 out of 8 channels are populated. The parts, controller and storage dies, are as fast as on the big TB drives; there are just fewer of them, and that hurts parallel I/O performance. But to be honest, at that specific capacity those are usually the most expensive per GB as well: when you look at the 980 Pro, the 256GB model is very expensive compared to the 500GB and 1TB models.
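The channel argument can be put as a toy model: sequential throughput is roughly bounded by how many controller channels actually have a die behind them. The 8 channels and 700 MB/s per die below are assumed numbers purely for illustration, not specs of any particular drive:

```python
def seq_throughput_mbps(nand_dies, channels, per_die_mbps):
    """Toy model: sequential throughput scales with the number of
    controller channels that actually have a NAND die to talk to."""
    active_channels = min(nand_dies, channels)
    return active_channels * per_die_mbps

# Small drive with 4 dies on an 8-channel controller vs. a bigger
# drive that populates all 8 channels (assumed 700 MB/s per die):
print(seq_throughput_mbps(4, 8, 700))
print(seq_throughput_mbps(8, 8, 700))
```

Real drives complicate this with SLC caching and interleaving more than one die per channel, but it captures why the smallest capacity in a series often posts the worst sequential numbers.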
  18. I run DCS from a triple Samsung 850/860 Pro 256GB RAID-0; read/write is 1.5GB/s either way, but access time and random IO stay at the level of a single drive, usually even a tiny bit less. The reason I raided them was to combine the volume, and the reason it's Pro over Evo is that I run a few VMs from that RAID-0, which is write-heavy. I don't think it makes a noticeable difference. But if I had to buy again I would definitely go NVMe, a 1-2TB Samsung 970 Evo Plus or 980 Pro. The 980 Pro was at an all-time low yesterday btw, 329€ for 2TB or 140€ for 1TB at Amazon. NVMe is so much sleeker: no cables, no nothing, a much cleaner PC with mainly NVMe instead of my 6 drives + 1 NVMe; I wish it were 3 x NVMe and done.
  19. Made some screenshots, SP, P-47, Syria, Instant Action Free Flight, dead simple, look at the RAM before I even hit fly:
  20. That is not actually what's going on. Install the MSI Afterburner OSD and watch your RAM/pagefile there; Syria is using a good part of those 32GB. Task Manager gives you a tainted, misleading view.
  21. If you build new, consider 64GB if it fits the bill. I bought Syria recently and suddenly VMware is not the only app I run that eats through those 32GB.
  22. They become items of desire and wet dreams, maybe we should sell Posters of 6800XT's and 3080ti's so one can put them up next to the Lamborghini Countach Poster. Rarely seen in real life, way too expensive for casual people with real lives, just like that Lambo
  23. To actually get back to your original question: in the majority of cases the default 3.6-4.6GHz behavior will be the faster solution. The system will let all cores go to 4.2-4.4GHz as long as the temps stay OK; in that state your 5600X will draw at most 73-75W when fully stressed. When you employ TPU with its 2 settings, you basically allow Asus' AI chip to auto-OC your rig. You said overclocking wasn't your primary goal for now ( which I understand, run properly before you fiddle with it ). TPU on Asus-Intel is not my pick; I have a few Asus Intel boards under my regime and I don't even bother with TPU, it's too brutal with the volts. With your intention not to overclock too much at first, I think PBO is the much better option vs. the Asus TPU settings. To find out which solution produces which results, try them out and run HWinfo with full sensors; you can see the wattage difference under the same load ( Cinebench R23 for example ) and how much temperature that causes. I can tell you right away, the included cooler does not really allow PBO. It works for an R23 run but gets borderline HOT. If you use the included cooler, leave all of that off if you intend to run things that really use all cores for a serious amount of time ( 90°C for 90 minutes ain't cool ). I am almost sure that TPU 1 will exceed the PBO wattage and thus temps ( TPU 2 for sure will ). Use HWinfo and check for yourself how much your cooling can take in your scenario. The included fan may do DCS with PBO just fine, but not an all-core brutal Handbrake session for hours.