Everything posted by LucShep

  1. As said above, you can see those listed in the column "Application name [claimed]" (click on it to sort the order for easier viewing). As for those Windows OS processes....

Winlogon.exe
This is a critical part of the login process and needs to remain running in the background. It has special hooks into the system and watches to see if you press Ctrl+Alt+Delete. It also ensures you're signing in on a secure desktop where other programs can't monitor the password you're typing, and it monitors your keyboard+mouse activity and is responsible for locking your PC after a period of inactivity.
.....your call, but the only thing I'd ever do to this process is change its CPU priority to "Below Normal", and nothing else. Otherwise, leave it as it is in Process Lasso.

Wininit.exe
This is a Windows start-up application (Windows Initialization) and is used by many programs to perform an action while the computer is still booting. When you boot your computer, wininit.exe is created by smss.exe, and it then creates lsass.exe (Local Security Authority Subsystem), services.exe (Service Control Manager), and lsm.exe (Local Session Manager). It also creates Winlogon, Winsta0, and the temp folder. It is one of the essential processes of your system and should not be stopped or messed with.
.....your call, but better not to mess with it at all and leave it as it is in Process Lasso.

WmiPrvSE.exe
This is the WMI Provider Host process, part of what's known as Windows Management Instrumentation (WMI), and therefore an important part of the Windows operating system itself, so it should be left alone. It runs in the background and allows other applications on your computer to request information about your system. It may occasionally use some CPU when another piece of software or a script on your PC asks for information via WMI, and that's normal. High CPU usage is usually just a sign that another application is requesting data via WMI. If you have a problem with it, you need to identify the process that's causing the WMI Provider Host to use so much CPU, and update, remove, or disable that process instead (see the sketch below for one way to narrow it down).
.....your call, but better not to mess with it at all and leave it as it is in Process Lasso.

Svchost.exe
The purpose of svchost.exe is, as the name implies, to host services. Windows uses it to group services that need access to the same DLLs into one process, helping to reduce their demand on system resources. This requires RAM and CPU power, so it's normal to see increased usage of svchost.exe, mainly when one of the services using Service Host is busy. It's also normal to see more than one instance of it. If Service Host is slowing down your PC, it may be because of Windows Update, in which case you can stop downloading/installing updates, or disable the service entirely (there is a 3rd party freeware tool made for this). Or maybe Disk Defragmenter is defragmenting your hard drive (if you use HDDs), in which case Service Host will use more memory for that task.
.....your call, but better not to mess with it at all and leave it as it is in Process Lasso.
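For that "which process is hammering WMI" hunt, here's a minimal sketch of the idea in Python, assuming Windows and the third-party psutil package (pip install psutil). It just lists the svchost.exe service groups and then the top CPU consumers over a short window; it's an illustration only, not what Process Lasso does internally:

```python
# A minimal sketch, assuming Windows + psutil: list svchost.exe service
# groups, then show the top CPU consumers. If WmiPrvSE.exe ranks high, the
# real culprit is usually whichever other process is busy querying WMI.
import time
import psutil

# Each svchost instance hosts a service group, visible as the "-k" argument
# (reading other processes' command lines may need an elevated prompt).
for p in psutil.process_iter(["pid", "name", "cmdline"]):
    if (p.info["name"] or "").lower() == "svchost.exe":
        print(p.info["pid"], " ".join(p.info["cmdline"] or []))

# Prime the per-process CPU counters, wait, then read them back.
procs = list(psutil.process_iter(["name"]))
for p in procs:
    try:
        p.cpu_percent(None)  # first call only establishes a baseline
    except psutil.Error:
        pass

time.sleep(2.0)

usage = []
for p in procs:
    try:
        usage.append((p.cpu_percent(None), p.info["name"] or "?"))
    except psutil.Error:
        pass  # process may have exited in the meantime

for cpu, name in sorted(usage, reverse=True)[:10]:
    print(f"{cpu:5.1f}%  {name}")
```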
  2. Whatever you do, make sure you can revert back to whatever/however it was, if things become "not so good". You can export your current configuration (as a backup, which you can import back), and/or you can create a different profile to mess around with. You can do that in Process Lasso by clicking "File" (at the top) and accessing those options.

The thing with Process Lasso and operating system processes is that its creator(s) already did all the required work on those for us, or at least that's my conclusion. I remember this same discussion when it came out (early 2010s?) and we were testing exactly what you're thinking there. Back then it was with Intel Xeon and HEDT processors with 6c/12t (most games then only used one or two cores), and we were allocating background tasks to the cores/threads not used by the games, with the games then set on the cores freed from that work. So, basically the very same concept we're talking about here, the difference being that the E-Cores do the work at a fraction of the energy consumption, ideal for this "carry the burden" stuff.

There were immediate improvements in the gaming experience when it was done for 3rd party apps (like already mentioned). However, it was also immediately noticed that many system processes are scheduled in ways that are better left as they were - it seems the creator(s) of Windows, and also the creator(s) of Process Lasso, have done all the investigation and testing right, and already have those at their best. Changing affinity, or priority, on operating system processes would impact overall performance (and still does, as I found when I tested it again on my 12700K). The OS (and its processes) needs to be given access to the best performance - how, when, and if the scheduler requires it - to make everything tip-top. And that makes sense: the operating system processes are the basis for everything that's going on when you're using your computer.
  3. I mean every process that is considered part of the operating system. You can see those listed in the column "Application name [claimed]" (and you can sort the order for easier viewing). Those that immediately pop into my mind are all those labelled there with "Microsoft Windows Operating System" and "Operating System Microsoft Windows". And the directly Windows-related services as well (ok, one may argue that these last ones can be somewhat tweaked, but better not to mess with those). All of these are better left to Process Lasso itself. If the intention was to tweak those, perhaps it's better to start with this first: https://www.oo-software.com/en/shutup10

Also, the GPU related ones (NVIDIA or AMD), and the Intel related ones such as "Management Engine" and "Dynamic Application", are better left to Process Lasso itself as well.

If you have 3rd party processes in the background stealing resources that could instead be "carried" solely by the E-Cores (as previously said - f.ex. Discord, HWINFO, Afterburner, RivaTuner, controller and peripheral software, AntiVirus, Firewall, etc, etc), it's a good idea to try it, as it'll free the P-Cores of any hiccups from those (see the sketch below for the idea in code form). Smooth gaming, free of stuttering.

PS: almost forgot, this is also a good complement to Process Lasso: https://bitsum.com/parkcontrol/
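To make the "background apps on the E-Cores" idea concrete, a minimal sketch in Python with psutil. The process names and the core numbering (P-Core threads on logical CPUs 0-15, E-Cores on 16-19, per the 12700K layout discussed in the next post) are assumptions to adjust for your own system; Process Lasso does this persistently and far better, this only shows the mechanism:

```python
# A one-shot sketch of pinning typical background apps to the E-Cores.
# Assumes Windows, psutil (pip install psutil), and a 12700K, where logical
# CPUs 0-15 are P-Core threads and 16-19 are the E-Cores. App names below
# are just examples; adjust to whatever actually runs on your system.
import psutil

E_CORES = list(range(16, 20))  # assumption: 12700K enumerates E-Cores last
BACKGROUND_APPS = {"discord.exe", "hwinfo64.exe", "msiafterburner.exe", "rtss.exe"}

for p in psutil.process_iter(["name"]):
    if (p.info["name"] or "").lower() in BACKGROUND_APPS:
        try:
            p.cpu_affinity(E_CORES)  # pin this process to the E-Cores only
            print(f"pinned {p.info['name']} (pid {p.pid}) to E-Cores")
        except psutil.Error:
            pass  # some processes need an elevated prompt
```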
  4. Pardon the long post, but I hope this silly dissertation on the matter can help somehow.

Process Lasso is indeed a must-have for Intel 12th, 13th and 14th gen processors. But so is understanding the P-Core and E-Core duality. I would not recommend disabling either Hyper-Threading or the E-Cores, because there's a lot of useful extra performance in those (direct and indirect, I'll explain why next, IMO), and Process Lasso can also play a role here.

Those four puny E-Cores in your i7 12700K (I also have one, excellent CPU) are far weaker than the eight P-Cores, matter of fact, but they are pretty comparable to an i5 6600K. There's been a lot of misunderstanding about the usefulness of the P-Cores and E-Cores, especially what these latter ones "are good for". The E-Cores in a gaming PC are almost to be perceived as a "second processor" assisting the "main processor" (the P-Cores); many haven't understood that yet.

You can get a direct benefit in non-gaming apps (for actual work or hobby) by adding the E-Cores' smaller power and core count to the P-Cores, setting affinity to all cores. You can get an indirect benefit in games - in this case by setting all the extra background app stuff to the E-Cores instead (and excluding such apps from the P-Cores), i.e. so that the E-Cores can "carry that burden" off the P-Cores, making these last ones more "prepared" and even stronger for your games (and then setting games on the P-Cores only). For example, placing every little extra app running in the background (that is, ones not of the Windows OS) only on the E-Cores (f.ex. Discord, HWINFO, Afterburner, RivaTuner, controller and peripheral software, AntiVirus, Firewall, etc, etc) while gaming with all the P-Cores free of the burden or hiccups of those programs. The P-Cores therefore become unaffected by that stuff, clean and lean to run any games set exclusively on them. <--- a major benefit, so often misunderstood. Just remember to always keep the Windows OS processes as Process Lasso already has them (let it do its own magic with those) - this is key, IMO.

So, to sum up, the P-Core and E-Core duality can be the best of both worlds, all depending on the situation. It's about separating or combining tasks (or not at all) in the P-Core/E-Core environment, depending on the purpose, and effectively getting the most out of the system, almost ideally in my opinion. It's all a matter of setting such rules if necessary, and only once, automated from then on - which is why Process Lasso is so good for these processors.

Now back to Process Lasso..... Opinions vary but, for me and for gaming, the things I always do (and they need to be done only once) for all my games are:

Go to "Options / Power / Performance Mode" and include the game's executable (browse to the game's directory and add the game's .EXE to the list). This enables a higher power plan (performance benefits) whenever you run a program (game, etc) added to that list, reverting to whatever it was when you close it.

Run the game for the first time... Alt+Tab... back to Process Lasso, and on the game's executable (listed among all the others), right click and change or tick these options:
- CPU Affinity / "Always" / "Select CPU Affinity" / change to "P-Cores" and click OK
- Induce Performance Mode (checked)
- Exclude from ProBalance (checked)
- More / "Disable Smart Trim" and "Disable Idle Saver" (both checked)
*side note: there's the odd game that likes to run at higher priority -- for that, if necessary, also do "CPU Priority" / "Always" / change to "Above Normal"

After all is set, I select the "Restart" option on the game's executable in Process Lasso (to restart the game in full screen and see how it goes). If all is good... those settings were already saved automatically and will be applied automatically next time. (A rough sketch of what these affinity/priority rules boil down to, in code form, is at the end of this post.)

In my experience, only a few games benefit from (or even require) the use of all the cores (P-Cores + E-Cores). For example, I recall that with most recent open world games by Ubisoft (AC Origins even requires it, or won't even load, IIRC). But all the games I've run so far (apart from those odd ones), DCS included, benefit from having only the P-Cores assigned. Which makes sense, because these are much stronger and the CPU then maintains a steady higher clock and IPC.

BTW, and specifically in DCS' case, there have been occasions (depends on release/update) where I felt (YMMV) that leaving one of the P-Cores unassigned (or unticked in affinity) somehow helps with frametimes in VR. I have no idea why. In this particular case, what I mean is leaving all P-Cores enabled except for the first P-core/thread (CPU0 and CPU1) or the last P-core/thread (CPU14 and CPU15). And again, no E-Cores enabled for it, only P-Cores for DCS.
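And the complement to the earlier sketch: what the "game on P-Cores only, above normal priority if needed" rule boils down to, again in Python with psutil. "DCS.exe" and the 12700K core numbering are assumptions; Process Lasso applies this persistently per rule, this one-shot version is only to illustrate:

```python
# A minimal sketch, assuming Windows, psutil, and 12700K numbering
# (8 P-Cores x 2 threads = logical CPUs 0..15, per the post above).
import psutil

P_CORES = list(range(0, 16))  # all P-Core threads, no E-Cores

for p in psutil.process_iter(["name"]):
    if (p.info["name"] or "").lower() == "dcs.exe":
        p.cpu_affinity(P_CORES)                     # game on P-Cores only
        p.nice(psutil.ABOVE_NORMAL_PRIORITY_CLASS)  # for the odd game that likes it
        print(f"DCS (pid {p.pid}) pinned to P-Cores, priority above normal")
```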
  5. Good to know that the problems seem to have been overcome. Yes, with bigger and heavier GPUs, that is an increasing problem.

The vertical mount, with a good kit of bracket and riser cable, can be a solution instead of the traditional horizontal position (directly on the motherboard). No more sag putting pressure and bowing on the GPU's PCB, or stress on its PCIe connector. That is, so long as it's placed at a distance from the side cover of the PC case, to keep good ventilation. Some vertical mounting solutions put the GPU too close to the side cover, and then it overheats because the GPU fans don't have enough room to pull air in properly.

The Phanteks one he recommends is good, but I'm not sure I'd recommend it for "fat" GPUs like the new 3-slot(+) high-end models (RTX4080/4090, RX7900XT/XTX, etc). I really like the Cooler Master kit V3: https://www.coolermaster.com/en-global/products/vertical-gpu-holder-kit-v3/ The Lian Li one is also good, and there are others. It's really a matter of selecting the right kit for each case.

But... the vertical mount is not always possible or preferred - it depends on the individual case! That's why GPU brace supports, or anti-sag brackets, are still used so much. With bigger and heavier GPUs, and also due to misuse, bowing can still become a problem. For systems with the GPU in the traditional horizontal position, I use the simplest telescopic GPU supports available (pretty much same as these), and I also use one in my own personal gaming system (it's been years now, no issues - "good for you" like he says! LOL).

There is a catch/method though, precisely to avoid bowing - the support should be placed at the middle of the GPU, not at the very end like you'll most often see done. Also, its height adjustment (the screw/unscrew for up/down) should be set so the GPU is exactly level (relative to the PCIe connector and slot), never overdone, so as not to create the opposite effect and cause the bowing he described.
  6. That's one of them, yep. Still beyond my budget... Maybe next year, who knows.
  7. Now that's a more relevant point. And one that can divide opinions (each to his/her own).

I personally prefer a glossy panel to a matte panel, because the matte treatment usually dilutes the color accuracy a bit and almost gives a "milky" tone to things (some may be less sensitive to it, but I really dislike it). At least that's my experience with 5+ year old models; not sure how it is with the latest "higher end" ones. Buuuut... of course, the downside of a very glossy screen is that any sort of light behind you will glare/reflect on the screen immediately. The upsides are substantial though (IMO) - dark-to-light (and vice-versa) color transitions are better, more so in a darker room. And that makes sense on an OLED. That monitor you've chosen is not "mirror glass reflection" like the old ones used to be; these high-end ones now have reflection and glare reduction (it's almost a middle ground).

At least you aren't hooked on the bigger screens like I am... I can almost hear myself making all sorts of funny whining sounds when looking at these latest 48'' OLED monitors... and their price tag. *sigh* LOL

Indeed, OLED is especially expensive but.... aaaaawww ... that crisp image, the zero latency, the beautiful colors, and dark tones that are really dark (as they should be). With a monitor of such specs in addition to OLED tech, you'll be blown away, guaranteed. If that's the size and specs you want, and you can afford it, I'd say "hey, you only live once..."
  8. If text is really that important, then none of them will be a good choice (for that you'd want a 5K Samsung ViewFinity S9 or Apple Studio Display, or equivalents). If you want an OLED 1440P gaming monitor, and your only real objection is the subpixel layout, then I'm afraid you'll encounter a similar issue in all of them. AFAIK, they're all either RWBG, WRGB, or triangular RGB. So, none is "wow, great" on text.

Personally, I don't think it's such a big deal. It's nothing like BGR (aka "inverted RGB", much worse - I know because I've got one here) and you get used to it very quickly. And if you don't, you can also more or less work around the problem with apps such as these, for example:

Better ClearType Tuner: https://github.com/bp2008/BetterClearTypeTuner
MacType: https://www.mactype.net/

If viewing/testing before buying isn't an option, getting it from somewhere with an "easier" return policy (Amazon?) could be a solution.
  9. It was pushed back, but it'll be released in less than 5 months (supposedly before CES 2025, which is in early January). It is also rumoured that, this time around, the RTX 5080 will come out first (maybe with the RTX 5070 as well), with the RTX 5090 only expected later.

I'd wait for the RTX 5080... but I'm not you. If you're not using it for VR, I think the RTX 3060Ti will at least hold on somewhat fine until then (with limitations, I know, I had one) - update DLSS and use it in the game! But if it's for VR, yeah, it's more complicated.
  10. Yep, agreed. As much as I like my big screen with headtracking, once you taste DCS in VR (in good conditions) nothing else feels as good. The meme "once you taste VR you never go back" was real in my case. DCS is one of the sims where VR makes complete sense (you're there, in the cockpit). Buuuuut.... it is very demanding on hardware - too much, IMO.

When things get too heavy or complicated, if you fly mostly singleplayer (not multiplayer) and don't use recently released modules and maps, then I'd suggest trying DCS 2.5.6 (link in my signature). It's a three-plus year older version of DCS with simpler shaders and without the new cloud system. Much lighter on resources (about 30% less GPU usage, with similar savings in RAM and VRAM usage). It made all the difference for me (butter smooth, which it never was since 2.7 and still isn't with the latest 2.9) and, as I fit that "offline player, non-recent modules" profile, I never looked back. Sure, it sucks to miss newer and upcoming modules that do interest me, but at least in VR it all finally works smoothly (I can even crank up details and resolution), I don't need to upgrade the PC, and I don't have to worry about another game update breaking this or affecting that - I finally just enjoy it. And if you have friends using it too, it also runs great online.
  11. You can never know for sure; it may or may not happen. That said, I notice a lot of people overstressing and getting anxiety over this - it's out of proportion, IMO. You have a device (whatever hardware part), so use it and enjoy it - that's its purpose, after all. It has a warranty; use it if needed - if it breaks and it isn't your fault, RMA it for a replacement. Basically what I said in my previous reply, quoting:
  12. Nice numbers there! "Lets goooooo, get to da chopaaa"
  13. That really looks like a superb monitor, but yeah.... the subpixel layout is RWBG (some color fringing around text). Even so, it may be such a small issue that it makes no difference, if that's the monitor you're really looking for. I like the approach and format these guys use in their reviews (best in the biz, IMO) - and the text-clarity section is always important to look at: https://www.rtings.com/monitor/reviews/asus/rog-strix-oled-xg27aqdmg Hope it can help in any way.
  14. I understand, but if this business ALSO depends on hobbyists (and increasingly so), then this connector is a resounding failure. Because it needs to be 100% safe AND idiot proof. And it is neither. ....not funny dealing with this after paying $1000 plus. Perhaps I should leave that sort of opinion to someone who is (I believe) one of the few yutuberzz to be 99.9% correct on whatever PC matters... If you're short on time or patience, skip to the 13:27 mark of the video: ^^ ....I rest my case.
  15. Thread derailed somewhat (again!) but hey Not sure how familiar you are with electronics on motorcycles, but it's very(!) often the case that they go "budget" on the voltage regulators and wire gauge. Those crap out at about 20,000 km or so, and that's on a very large number of motorcycles, higher end models included. And it has always been so, even though you can buy aftermarket equivalent parts that are far better (and reliably long lasting), which should have been there right from the start. If multimillion-budget, renowned manufacturers do this to vehicles and still get away with it (knowing full well it's like that, with decades of bad experiences), then so too do PC hardware manufacturers - believe it. It's the triumph of the bean counters....
  16. The "conspiracy theory" is that how EVGA built their prototype is also how they advised it should be done, period, but it was more expensive to manufacture. They decided on a much bigger cooler to deal with the 450W+ of resulting heat, and EVGA's approach would have gotten in the way of construction costs, when it's already a very expensive GPU. So, silly monumental size won, and "no no, no can do on a connector in that place... eff off". The fact that it's a much bigger problem on the 4090s than on the 4080s (which see not even 10% of the RMAs for that issue, according to my sources) also shows how badly it was planned and done. IMO, it should never have been more than 300W through that 12VHPWR connector (total power includes the PCIe slot), yet they went ahead anyway with a single one even on models getting close to 600W - (IMO) it should have been two connectors, not just one, but I guess even that would get in the way of lucrative returns... lol
  17. If the connector pointed up/down (or had an "L" adapter like those currently sold by 3rd parties), that wouldn't have been a problem.... Being far from the "hot zone" (and usually near where the intake fans of most ATX cases are) would have prevented a pretty big part of the "heat" issues....
  18. Not entirely disagreeing, but my point is - there was absolutely nothing wrong with the PCIe 6+2 connectors. How many issues have you seen with them? (I've never seen any in the 20+ years that connector has been used.)

Also, the location of the 12VHPWR on the GPU itself, for ALL of the RTX4090s on the market, is wrong. It should never have been within/below/above the PCB, but at the end (i.e. at the side) of the PCB. Because the connector, as it is currently placed, is being heated by the fans' exhaust in addition to all the massive electric current already going through that single connector.

If only EVGA had never pulled the "we quit" move and had gone ahead with their own idea of how an RTX4090 should be (my guess is they knew all along), I think half of these issues would never have occurred.... That's their RTX4090 FTW3 prototype, which never went into production (they decided to quit right before the RTX4000 series launch). Picture taken from the video: youtu.be/tYzJf71WUcM
  19. If black screens occur with the undervolt then, yeah, it's a matter of optimizing the curve (a little less clock, perhaps). But if 90% is doing OK, then whatever works best for you.

And yes, the Cultists' PSU Tier List, and especially anything HWBusters publishes, is reliable (as good as the good old JonnyGuru website, IMO). So any recommendations there can be trusted. BTW, just my opinion but, if you ever consider replacing the MSI 1000G (already a great PSU), you might as well go the extra mile and pick a good 1250W+ PSU. Pretty expensive and somewhat overkill, yes, but with a high-end system like yours, I personally think it pays off in the long term (it may even be reused on the next system). https://hwbusters.com/best_picks/best-atxv3-pcie5-ready-psus-picks-hardware-busters/7/

All that said, I do agree with @kksnowbear above. It also looks to me as if the PSU cable has a bit of a tight bend there, right before the GPU connector. Could be it (?). As I read that you've ordered the Thermal Grizzly WireView (nice one!), it will perhaps also help alleviate that common issue.

As a side note, I know it's not everybody having the problem but... if it's enough of an issue for this many people, then it points to a problem with the concept (not with user handling). Not so sure I'm alone when I say that the 12VHPWR connector is among the worst things Nvidia has done in many years...
  20. My experience with AMD in VR is only with older models, the RX5700XT and RX6900XT, but I have to say it was not a good one ("tear" artifacts, bad frametimes, so many darn issues, which back then immediately went away with an RTX3060Ti). Nvidia has been and still is ahead in the VR camp (IMO), noticeably so in my experience (zero issues). But if there's no VR in the plans, and it's for a single 1440P monitor (neither is really meant for 4K res., but they can do it), then I'd say the 7900GRE and 4070 Super are both very good. In which case, and again like I said in my previous post, you have to ask yourself "that" question........ PS: $70 becomes pocket change if it's something you expect to last more than a year in use.
  21. Personally, I don't see the point in paying so much (550,00€ in my area) for the RX 7800 XT, which in practice gives the exact same performance as the older RX 6800 XT. So, for me, it's automatically excluded.

Between the RX 7900 GRE 16GB and the RTX 4070 Super 12GB, now that's not as easy to decide. If it's not for VR, and if it's a 1440P res. monitor, both will work really well, even with DCS. Personally, even after a heavy bias towards ATI/AMD for many years*, my preference is Nvidia. (*And boy, how I stuck with the RX5700XT through that very dark first year until its drivers were sorted!) But these days even FSR3+ is close enough to DLSS, and the Adrenalin drivers are okay.

I think you should ask yourself this question... Do you want a very familiar experience to what you've had with the RTX2070, usage wise, but better and with a LOT more performance? If so, get the RTX 4070 Super 12GB. Do you feel like changing to something different, with no problem quickly adapting to unfamiliar things? If so, the RX 7900 GRE 16GB may be right for you.
  22. Huh what?? Are you crazy? Would I recommend an Intel i7 12700K in 2024 for $200, brand new?

...which works with Z690 and Z790 motherboards, available for both DDR5 and DDR4 RAM? (i.e. you can reuse your older RAM!)
...which is faster most of the time than the newer AMD AM5 Ryzen 7700X, 7800X and 9700X, for considerably less money?
...a 170W Intel "K" 12-core (8P/16T + 4E) that also overclocks like a champ, even with a simple $40.00 dual-tower air cooler? (TR Peerless Assassin, Phantom Spirit, etc)
...and doesn't have any of this recent degradation BS?

Frak yeah, of course I do!!! A million times! Best processor for the money, by far (it's not even close!). This is coming from someone who has repeatedly done top builds with the ultra-hyped 13900K, 13700K, 14700K, 5800X3D and 7800X3D, among others - the i7 12700K is an absolute gem, and the most overlooked and underrated CPU in this "yutuberzz-influencerzz" biased market. It makes the 5800X3D look absolutely atrocious in price/performance. And similar can be said of the i9 12900K (another gem, though that one is still noticeably dearer than the i7 12700K).

So much so that I put my money where my mouth is and bought one last year for myself ($170 from the used market, plus a Z690 TUF D4 for $140). So good, in fact, that I don't really see any point in upgrading yet.
  23. Looks good, but............ there are three items I'd personally recommend changing. One because it's better to avoid (and worth spending just a little more on), and two others for which I think there are as-good or better alternatives at a similar or lower price.

1. GPU (graphics card). You've chosen the RTX 4080 Super (very good choice). But they're not all the same; the quality of the internal components and cooling varies quite a bit between models. The MSI Ventus 3X OC is one of the models to avoid (yes, the price looked good, I know). While it's not "ooh it's really bad", there's much better for just a little more money. For example, three specific models that I've used and found worth recommending are the Gigabyte Aorus Master, Aero OC, and Gaming OC:

Gigabyte GAMING OC GeForce RTX 4080 SUPER 16 GB (black color only, $1050.00): https://pcpartpicker.com/product/mXNYcf/gigabyte-gaming-oc-geforce-rtx-4080-super-16-gb-video-card-gv-n408sgaming-oc-16gd
Gigabyte AERO OC GeForce RTX 4080 SUPER 16 GB (white/silver color only, $1100.00): https://pcpartpicker.com/product/94hv6h/gigabyte-aero-oc-geforce-rtx-4080-super-16-gb-video-card-gv-n408saero-oc-16gd
Gigabyte AORUS MASTER GeForce RTX 4080 SUPER 16 GB (black/silver color only, $1200.00): https://pcpartpicker.com/product/8ppQzy/gigabyte-aorus-master-geforce-rtx-4080-super-16-gb-video-card-gv-n408saorus-m-16gd

2. Storage (NVMe). You picked the Samsung 980 Pro 2TB as the main drive, and the WD SN770 2TB as a complementary storage drive. Nothing wrong with that (I think I even recommended it?). But, since my last reply, I've built a system with a newer Gen4 NVMe (with DRAM) from Solidigm, the P44 Pro, and it impressed me a lot. The performance is absolutely top notch and temperatures are lower (better!) than what I've seen from the top competitors, a huge plus. Prices vary a lot from place to place, but there are really good promos on Amazon. That's why I'd recommend you pick two units of this instead:

SOLIDIGM P44 PRO 2TB (from $139.00): https://pcpartpicker.com/product/X8nypg/solidigm-p44-pro-2-tb-m2-2280-pcie-40-x4-nvme-solid-state-drive-ssdpfkkw020x7x1

3. PC (ATX) case. The Fractal Torrent is a nice case, but a tad overrated and (IMO) a bit too expensive. I also think it's too big for that type of system. There are great alternatives at far more affordable prices, available in black or white, some even with ARGB fans+controller (also available without, if preferred). For example, check the two below, with linked youtube reviews to get an idea of their details, and see if they interest you.

LIAN LI LANCOOL 216 (ARGB fan controller included) -> M.U. REVIEW
- Black ($120.00): https://pcpartpicker.com/product/PG88TW/lian-li-lancool-216-rgb-wcontroller-atx-mid-tower-case-lancool-216rc-x
- White ($125.00): https://pcpartpicker.com/product/JG88TW/lian-li-lancool-216-rgb-wcontroller-atx-mid-tower-case-lancool-216rc-w

MONTECH AIR 903 MAX (ARGB fan controller included) -> M.U. REVIEW
- Black ($70.00): https://pcpartpicker.com/product/2MwmP6/montech-air-903-max-atx-mid-tower-case-air-903-max-b
- White ($80.00): https://pcpartpicker.com/product/bQGhP6/montech-air-903-max-atx-mid-tower-case-air-903-max-w
  24. @nephilimborn Looks like a power delivery issue, either from the PSU or the GPU cable connector, or the cable itself (?). I can't speak to the CableMod 12VHPWR connectors as I don't have experience with those, though I've repeatedly seen that the latest revised models are much better quality. I see people mentioning the Corsair GPU Power Bridge and the Thermal Grizzly WireView GPU as well; they may be good alternatives. Again, I have no experience with those. The problems still being reported with burning connectors are, I think, increasingly related to the design of the connector on the RTX4090 itself. (IMO, it should have been two connectors, not one!)

Meanwhile, try undervolting the RTX4090: you get at least a 20% reduction in power consumption for only a ~2% reduction in performance (a very good trade-off). There are various tutorials on youtube. Among plenty of others, these two for example:
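Side note: a proper undervolt is done on the voltage/frequency curve (MSI Afterburner, as per those tutorials). But if you just want a blunt, reversible power cap from a script, NVML exposes the board power limit. A sketch, assuming the nvidia-ml-py package (pip install nvidia-ml-py) and an elevated prompt; this is a related lever, not a substitute for the curve undervolt:

```python
# Cap the GPU board power limit via NVML. Values are in milliwatts.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

cur = pynvml.nvmlDeviceGetPowerManagementLimit(gpu)
lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)
print(f"current {cur/1000:.0f} W, allowed {lo/1000:.0f}-{hi/1000:.0f} W")

# Example: cap at 80% of the current limit (roughly the "20% less power"
# ballpark from the post), clamped to what the card actually allows.
target = max(lo, min(hi, int(cur * 0.8)))
pynvml.nvmlDeviceSetPowerManagementLimit(gpu, target)
print(f"power limit set to {target/1000:.0f} W (resets on driver reload)")

pynvml.nvmlShutdown()
```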
  25. Buildzoid! He may sound like a rambling nerd, but his videos often show very interesting facts from his experiments (f.ex. the latest oscilloscope videos on Intel K chips).

Once you sort those issues, and if not done already, consider stopping the single/dual-core boost from happening, because of its 1.5V+ voltage spikes (one of the main culprits of the current 13th/14th gen degradation issues). The easiest way to do this is by syncing (locking) all your P-Cores at the same maximum possible clock (close to the out-of-the-box "all P-Cores max clock"). Even better if the CPU core voltage (Vcore) is also limited to lower values, around 1.35V or below. You can set a voltage limit in the BIOS setting "IA VR Voltage Limit" with a value between 1350 and 1400 mV. Or you can manually adjust the CPU core voltage (Vcore), either by "fixed" or by "offset" voltage adjustment (whichever way you prefer).

One way to look at this is like the undervolt that so many also do on high-end GPUs. It prolongs the part's life by lowering voltage and temps. In this particular case, with Intel 13th/14th gen, it's (IMO) a very good procedure to drastically mitigate possible degradation, and it doesn't really affect general performance.