Everything posted by LucShep

  1. You can never know for sure; it may or may not happen. That said, I notice a lot of people overstressing and getting anxiety over this, and it's out of proportion, IMO. You have a device (whatever hardware part), so use it and enjoy it - that's its purpose, after all. It has a warranty; use it if needed - if it breaks and it isn't your fault, the RMA must be honored with a replacement. Basically what I said in my previous reply, and quoting:
  2. Nice numbers there! "Lets goooooo, get to da chopaaa"
  3. That really looks like a superb monitor, but yeah.... the subpixel layout is RWBG (some color fringing around text). Even so, it may be such a small issue that it makes no difference, if that's the monitor you're really looking for. I like the approach and format these guys use in their reviews (best in the biz, IMO) - and the text-clarity section is always important to look at: https://www.rtings.com/monitor/reviews/asus/rog-strix-oled-xg27aqdmg Hope it can help in any way.
  4. I understand, but if this business ALSO depends on hobbyists (and increasingly so), then this connector is a resounding failure. Because it needs to be 100% safe AND idiot-proof. And it is neither. ....not funny dealing with this after paying $1000 plus. Perhaps I should leave that sort of opinion to someone who is (I believe) one of the few yutuberzz to often be 99.9% correct in whatever PC matters... If you're short on time or patience, skip to the 13:27 mark of the video: ^^ ....I rest my case.
  5. Thread derailed somewhat (again!) but hey Not sure how familiar you are with electronics on motorcycles, but it's very(!) often the case that they "budget" the voltage regulators and wire gauge. They crap out at about 20,000 km or so, and that happens on a very large number of motorcycles, higher-end models included. And it has always been so, even though you can buy aftermarket equivalent parts that are far better (and reliably long-lasting), which should have been there right from the start. If renowned manufacturers with multimillion budgets do this with vehicles and still get away with it (knowingly, with decades of bad experiences behind them), then so do PC hardware manufacturers - believe it. It's the triumph of the bean counters....
  6. The "conspiracy theory" is that how EVGA built their prototype is also how they advised it should be done, period, but it was more expensive to manufacture. They decided on a much bigger cooler to deal with the 450W+ of resultant heat, and that got in the way of construction costs, when it's already a very expensive GPU. So, the silly monumental size won and "no no, no can do on a connector in that place... eff off". The fact that it's a much bigger problem on the 4090s than on the 4080s (which see less than 10% of the RMAs for that issue, according to my sources) also underlines how badly it was planned and done. IMO, it should never have been more than 300W through that 12VHPWR connector (total power includes the PCIe slot), yet they went ahead with a single one even on models getting close to 600W - (IMO) it should have been two connectors, not just one, but I guess even that would get in the way of lucrative returns... lol
  7. If the connector was pointing up/down (or had an "L" adapter like those currently sold by 3rd parties) that wouldn't have been a problem.... Being far from the "hot zone" (and usually where the intake fans of most ATX cases are) would have prevented a pretty big part of the "heat" issues....
  8. Not entirely disagreeing, but my point is - there was absolutely nothing wrong with the PCIe 6+2 connectors. How many issues have you seen with them? (I've never seen any in the 20+ years that connector has been used.) Also, the location of the 12VHPWR on the GPU itself, for ALL of the RTX4090s on the market, is wrong. It should never have been within/below/above the PCB, but at the end (i.e., at the side) of the PCB. Because the connector, as it is currently, is being heated by the fans' exhaust in addition to all the massive electric current already going through that single connector. If only EVGA had never pulled the "we quit" move and had gone ahead with their own idea of how an RTX4090 should be (my guess is they knew all along), I think half of the issues would never have occurred.... That's their RTX4090 FTW3 prototype, which never went into production. (they decided to quit right before the RTX4000 series launch) Picture taken from the video: youtu.be/tYzJf71WUcM
  9. If black screens occur with the undervolt then, yeah, it's a matter of optimizing the curve (a little less clock, perhaps). But if 90% is doing ok, then whatever works best for you. And yes, Cultist's PSU Tier List and especially anything HWBusters publishes is reliable (as good as the good old JonnyGuru website, IMO). So, any recommendations there can be trusted. BTW, just my opinion, but if you ever consider replacing the MSI 1000G (already a great PSU), then you might as well go the extra mile and pick a good 1250W+ PSU. Pretty expensive and somewhat overkill, yes, but with a high-end system like yours, I personally think it pays off in the long term (it may even be reused in the next system). https://hwbusters.com/best_picks/best-atxv3-pcie5-ready-psus-picks-hardware-busters/7/ All that said, I do agree with @kksnowbear above. Also, it looks to me as well that the PSU cable has a bit of a tight bend there, right before the GPU connector. Could that be it(?). As I read that you've ordered the Thermal Grizzly WireView (nice one!), it will perhaps also help alleviate that common issue. As a side note, I know it's not everybody having the problem but... if it's enough of an issue for so many, then it shows it's an issue with the concept (not just user handling). Not so sure I'm alone when I say the 12VHPWR connector is among the worst things Nvidia did in many years...
  10. My experience with AMD in VR is only with older models, the RX5700XT and RX6900XT, but I have to say it was not a good one ("tear" artifacts, bad frametimes, so many darn issues, which back then immediately went away with an RTX3060Ti). Nvidia has been and still is ahead in the VR camp (IMO), noticeably so in my experience (zero issues). But if there's no VR in the plans, and it's for a single 1440P monitor (not really meant for 4K, but they can also do it), then I'd say the 7900GRE and 4070Super are both very good. In that case, and again like I said in my previous post, you have to ask yourself "that" question... PS: $70 becomes pocket change if it's something you expect to last more than a year in use.
  11. Personally, I don't see the point in paying so much (550,00€ in my area) for the RX 7800 XT, which in practice gives the exact same performance as the older RX 6800 XT. So, for me, it's automatically excluded. Between the RX 7900 GRE 16GB and the RTX 4070 Super 12GB, now that's not as easy to decide. If it's not for VR, and if it's a 1440P monitor, both will work really well, even with DCS. Personally, even after a heavy bias towards ATI / AMD for many years*, my preference is Nvidia. (*and boy, how I stuck with the RX5700XT through that very dark first year until its drivers were sorted!) But these days even FSR3+ is close enough to DLSS, and the Adrenalin drivers are okay. I think you should ask yourself this question... Do you want an experience very familiar to what you've had with the RTX2070, usage-wise, but better and with a LOT more performance? If so, get the RTX 4070 Super 12GB. Do you feel like changing to something different, with no problem quickly adapting to unfamiliar things? If so, the RX 7900 GRE 16GB may be right for you.
  12. Huh what?? Are you crazy? Would I recommend an Intel i7 12700K in 2024 for $200, brand new? ...which works with Z690 and Z790 motherboards, both available for DDR5 and DDR4 RAM? (i.e., you can reuse your older RAM!) ...which is faster most of the time than the newer AMD AM5 Ryzen 7700X, 7800X and 9700X, for considerably less money? ...a 170W Intel "K" 12-core (8P/16T + 4E) that also overclocks like a champ, even with a simple $40 dual-tower air cooler? (TR Peerless Assassin, Phantom Spirit, etc.) ...and doesn't have any of this recent degradation BS? Frak yeah, of course I do!!! A million times! Best processor for the money, by far (it's not even close!). This is coming from someone who has repeatedly done top builds with the ultra-hyped 13900K, 13700K, 14700K, 5800X3D and 7800X3D, among others - the i7 12700K is an absolute gem, and the most overlooked and underrated CPU in this "yutuberzz-influencerzz" biased market. It makes the 5800X3D look absolutely atrocious in price/performance. And similar can be said of the i9 12900K (another gem, though this one is still noticeably dearer than the i7 12700K). So much so that I put my money where my mouth is, and bought one last year for myself. ($170 from the used market, and a Z690 TUF D4 for $140) So good, in fact, that I don't really see any point in upgrading yet.
  13. Looks good, but............ there are three items I'd personally recommend changing. One because it's better to avoid (and worth spending just a little more on), and two others which I just think have as good or better alternatives for a similar or lower price. 1. GPU (graphics card). For the GPU you've chosen the RTX 4080 Super (very good choice). But they're not all the same; the quality of internal components and cooling varies quite a bit between models. The MSI Ventus 3X OC is one of the models to avoid (yes, the price looked good, I know). While it's not "ooh it's really bad", there's much better for just a little more money. For example, three specific models that I've used and found worth recommending are the Gigabyte Aorus Master, Aero OC, and Gaming OC: Gigabyte GAMING OC GeForce RTX 4080 SUPER 16 GB (black color only, $1050.00): https://pcpartpicker.com/product/mXNYcf/gigabyte-gaming-oc-geforce-rtx-4080-super-16-gb-video-card-gv-n408sgaming-oc-16gd Gigabyte AERO OC GeForce RTX 4080 SUPER 16 GB (white/silver color only, $1100.00): https://pcpartpicker.com/product/94hv6h/gigabyte-aero-oc-geforce-rtx-4080-super-16-gb-video-card-gv-n408saero-oc-16gd Gigabyte AORUS MASTER GeForce RTX 4080 SUPER 16 GB (black/silver color only, $1200.00): https://pcpartpicker.com/product/8ppQzy/gigabyte-aorus-master-geforce-rtx-4080-super-16-gb-video-card-gv-n408saorus-m-16gd 2. Storage (NVMe). You picked the Samsung 980Pro 2TB as the main drive, and the WD SN770 2TB as a complementary storage drive. Nothing wrong with that (I think I even recommended it?). But, since my last reply, I've built a system with a newer gen4 NVMe (with DRAM) from Solidigm, the P44 PRO, and it impressed me a lot. The performance is absolutely top notch and temperatures are lower (better!) than what I've seen from top competitors, a huge plus. Prices vary a lot from place to place, but there are really good promos on Amazon. And that's why I'd recommend you pick two units of this instead: SOLIDIGM P44 PRO 2TB (from $139.00): https://pcpartpicker.com/product/X8nypg/solidigm-p44-pro-2-tb-m2-2280-pcie-40-x4-nvme-solid-state-drive-ssdpfkkw020x7x1 3. PC (ATX) Case. The Fractal Torrent is a nice case, but a tad overrated and (IMO) a bit too expensive. I also think it's too big for that type of system. There are great alternatives at far more affordable prices, available in black or white, some even with ARGB fans+controller (also available without the ARGB fans+controller, if preferred). For example, check the two below, with linked YouTube reviews to get an idea of the details on them; see if it interests you. LIAN LI LANCOOL 216 (ARGB fan controller included) -> M.U. REVIEW - Black ($120.00) : https://pcpartpicker.com/product/PG88TW/lian-li-lancool-216-rgb-wcontroller-atx-mid-tower-case-lancool-216rc-x - White ($125.00) : https://pcpartpicker.com/product/JG88TW/lian-li-lancool-216-rgb-wcontroller-atx-mid-tower-case-lancool-216rc-w MONTECH AIR 903 MAX (ARGB fan controller included) -> M.U. REVIEW - Black ($70.00): https://pcpartpicker.com/product/2MwmP6/montech-air-903-max-atx-mid-tower-case-air-903-max-b - White ($80.00): https://pcpartpicker.com/product/bQGhP6/montech-air-903-max-atx-mid-tower-case-air-903-max-w
  14. @nephilimborn Looks like a power delivery issue, either from the PSU or the GPU cable connector, or the cable itself(?). I can't comment on CableMod 12VHPWR connectors as I don't have experience with those, though I've repeatedly seen that the latest revised models are much better quality. I see people mentioning the Corsair GPU Power Bridge and the Thermal Grizzly WireView GPU as well; they may be good alternatives. Again, I have no experience with those. The problems still reported with burning connectors are, I think, increasingly related to the design of the connector on the RTX4090 itself. (IMO, it should have been two connectors, not one!) Meanwhile, try undervolting the RTX4090; you get at least a 20% reduction in power consumption for only a ~2% reduction in performance (a very good trade-off). Various tutorials on YouTube. Among plenty of others, these two for example (a quick way to verify the effect is sketched just below):
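A minimal sketch of how you could verify the undervolt's effect, assuming the nvidia-ml-py package (pynvml) is installed and the card is GPU index 0 - the undervolt itself is still done through the curve editor in a tool like MSI Afterburner; this only reads the resulting power numbers via NVML:

```python
# Read the GPU's current power draw and power limit via NVML.
# Assumes: pip install nvidia-ml-py, and the RTX4090 is GPU index 0.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumption: first GPU

draw_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0             # mW -> W
limit_w = pynvml.nvmlDeviceGetPowerManagementLimit(handle) / 1000.0  # mW -> W

print(f"Drawing {draw_w:.1f} W of a {limit_w:.1f} W limit")
pynvml.nvmlShutdown()
```

Run it during a game or benchmark, before and after applying the undervolt, and the ~20% drop in draw should be plainly visible.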
  15. Buildzoid! He may sound like a nerd rambling, but his videos often show very interesting facts from his experiments (e.g., the latest oscilloscope videos on Intel K chips). Once you sort those issues, and if not done already, consider stopping the single/dual core boost from happening, because of its 1.5v+ voltage spikes (one of the main culprits of the current 13th/14th gen degradation issues). The easiest way to do this is by syncing (locking) your P-Cores all at the same max possible clock (close to what the out-of-the-box "all P-Cores max clock" is). Even better with the CPU Core Voltage (Vcore) limited to lower values, at around 1.35v (or below). You can set a voltage limit in the BIOS setting "IA VR Voltage Limit", with a value between 1350 and 1400 mV. Or you can manually adjust the CPU Core Voltage (Vcore), either by "fixed" or by "offset" voltage adjustment (whichever way you prefer). One way to look at this is like the undervolt that so many also do on high-end GPUs. It prolongs the chip's life by lowering the voltage and temps. In this particular case, with Intel 13th/14th gen, it's (IMO) a very good procedure to drastically mitigate the possible degradation, and it doesn't really affect general performance.
  16. The CPU degradation can appear in somewhat different ways from one machine to another. It can manifest as Windows closing applications in the background by itself, or BSODs (blue screen crashes), or system lock-ups (freezes), black screens, general instability, etc. "Black Screen, Fans 100 percent, hard-reset or power-cycle is the only option to restart" is one of the reported symptoms. Though it could be so many unrelated things (corrupted Windows and/or application files or drivers, a faulty PSU, motherboard, GPU, drive, etc.). That's why troubleshooting with stress-testing is important, to try to determine what's causing it. I maintain my previous suggestion.
  17. I was reading the OP and thinking the same, that it could be early signs of the now well-known 13th/14th gen CPU degradation.... @nephilimborn have you updated your motherboard BIOS to the latest version, for the new Intel microcode? (if you haven't, you should) Not sure if you're willing to try something with the CPU settings in your motherboard BIOS, just for a test.... If you are, then try to sync (i.e., lock) all your P-Cores, and use a manual clock value that is the same for all P-Cores (5.3 GHz is the stock "all P-Core" maximum clock for the i7 13700K). Repeat the testing; if it still does the same thing, go back to the BIOS and reduce that "all P-Core" clock by 100 MHz (so, now to 5.2 GHz) and try again. ...and so on, repeating.... If at some point the problem stops happening (by lowering the P-Core clock), then you may have a CPU that has started to degrade, no longer able to reach the stock ultra-high "boost" clocks at stock voltages. (note: do not increase the CPU Core Voltage to reach the stock boost clocks, as it just makes things worse!) Also, you mentioned not yet having disabled C-States, which is good, because you should never disable those. What you may do, if intended, is reduce the limit of the C-States (for example, to "C3" instead of "Auto" - which can go to the deepest savings limit, "C10") - Intel C-States explained.
  18. Performance is not a problem with the new microcode. Nor are temperatures. It's the voltages. The frigging 1.5v+ voltage spikes b!tch slapping the poor processor so hard, to the point of slow, unpredictable degradation. They're still there. That's the price to pay for overambitious boosts on 13th/14th gen => stupidly high stock voltages and spikes that slowly eat away at your processor's life. Even the reliable 12th gen i9 and i7, which aren't affected at all by these degradation issues, will also slowly degrade if you force stupidly high voltages on them. People with 13th and 14th gen 65W+ CPUs, even after installing the new microcode, have two choices: Keep things stock on clocks, boosts and voltages, and enjoy it as it is. The new microcode at least ensures slower degradation than before. But it will very likely still occur, sooner or later. (in six months? five years? ...who knows?) If it indeed goes "kaput", well... let's hope it's within the RMA period and a replacement is accepted. Or: lock the P-Cores (i.e., limit them all to the same max possible clock), so that there is no single/dual-core boost, and reduce the CPU core voltage to about 1.35v (or lower). Worst-case scenario, you might have to lower the "all P-Core clock" a hundred MHz below stock. Which won't make any difference in whatever use (unless it's competitive benchmarking?), and the degradation is drastically mitigated. ....as should have been done immediately when this problem appeared (IMO).
  19. The issues with the single/dual core voltage spikes and ~1.55v Vcore can't be addressed, unless it's done by the user. They can't bring the voltages down to "normal" levels, because that wouldn't allow the boosts to go as high as marketed (as in the spec sheets). Just like they can't disable the single/dual core boost (which would immediately resolve most of the problem!). Because that would mean changing a product "after the fact", when it was already presented, marketed, and sold as that. The single/dual core boost and high "guaranteed" clocks are features that they marketed and spec'd for the product. It would mean admitting a grave mistake, almost like "it's a scam product", and accepting defeat (lawsuits and damages would immediately go through the roof!). Unfortunately, the possible solutions/mitigations are something that you'll have to do on your own, for yourself. "Dew it!!"
  20. That's a tough one to know for sure. It'll depend on how the motherboard sensors are being read by each different monitoring software. IMO, HWINFO is usually more accurate than HWMonitor but, again, it'll also depend on the motherboard sensors, not just the software. For some reason I haven't understood, one that usually shows a fairly accurate reading of the Vcore is CPU-Z (even though it's not really monitoring software). You might have noticed (as said before in this thread, with videos showing it and all) that the single/dual core voltage spikes (always going 1.5v+) have not been resolved by the new Intel microcode. The main problem still exists. So, if it's fully back to stock BIOS values, then it'll be reaching ~1.55v when it boosts, regardless of what you see in any monitoring software. And that will continue to cause degradation, albeit at a slower pace than with the previous microcode.
  21. Nice one. Yeah, so long as you don't mind the 60Hz limitation, the LG UR and UT 7000/8000 are among the affordable and decent 4K TVs that can be used for such a purpose (flight sims). I'm pretty sure you immediately understood why the picture size (both vertical and horizontal) provided by a 16:9 big screen makes all the sense for flight-simming purposes. It's just the way everything becomes much more "real", scale-wise. There's no way one goes back to a smaller screen after the experience (the very old 22'' I had around really looked like a tablet in comparison!). At the right (somewhat close) distance and with some headtracking, it's a great alternative to VR (if that's not an option) and far, far easier to run.
  22. Ok, I see now. It was about the backwards philosophy, or take on the approach, and not really particularly focused on the gen architecture innovations per se. My apologies to @Hiob then, as I didn't "get the joke". I agree with all of that. As I said, that's where I give all the credit to AMD for the 3D V-Cache on the X3D chips. It was, IMO, the biggest innovation I've seen in a long, long time. It makes sense and it absolutely works for the purpose (gaming). But then, as always, it's AMD with missed opportunities. And it's why I think they never were, aren't, and probably never will be as proficient as Intel (or Nvidia, for that matter). By not immediately extending their X3D lineup to their lower-end products (sub-$220) on AM5, after its previous success with AM4's 5800X3D, they missed a huge opportunity. For example, for the 6-core Ryzens (so, something like 7500X3D and 7600X3D, 9500X3D and 9600X3D, etc.), as that's the segment where the bulk of the gaming communities focus (so, much bigger sales numbers). This is where the X3D approach would make all the sense to be, too. You don't "have to". But you should, to get the most out of them. I'll agree that needing extra software like Process Lasso (or others like it, so specialized that it really works) isn't ideal. Maybe Intel should provide a very basic version of it, with some very simple guidelines for most regular users. My take on this is, perhaps, a bit different than most, as I personally don't mind messing with this stuff myself (and it's just done once, after all). But yeah.... I'm not seeing a computer-illiterate person going about this the way I do. If I only used my PC for gaming, perhaps I'd have gotten an AMD X3D processor. But since I require more than a mere gaming platform from my computer, a processor with E-Cores actually became useful, almost like a "second processor" assisting the "main processor" (the P-Cores). I can do all my stuff (work or hobby) in any other non-gaming apps, by having the added power and core count of the E-Cores on top of the P-Cores (so, using them all for it). Or.... I can also use the E-Cores for all the extra background app stuff (and exclude all of that from the P-Cores), so that it never interferes with the games (which I set on the P-Cores only). So, the best of both worlds, all depending on the situation. And it's all a matter of setting such rules once; from then on it's automated. It's separating tasks or not (in the P-Cores/E-Cores environment) depending on the purpose and, effectively, getting the most out of the system almost ideally, in my opinion. But that's not something the system can really guess or decide for me, and it's why I like it this way (or don't mind having it this way). Anyways, sorry all for the off-topic, but this subject is interesting to me and I can easily get carried away.
  23. That's because you haven't had direct experience with Intel for ages (a 4790K? ...that launched in Q2 2014, over a decade ago!). Maybe if you'd built and tested systems recently with both Intel and AMD, side by side, you'd see the differences more clearly, and where you're talking nonsense. Their (Intel's) IPC didn't improve? The jump in IPC was friggin monstrous when Alder Lake came out in late 2021 (not 10, not 7 or 8 years ago), immediately rendering everything (including AMD AM4) obsolete. The 5800X3D chip was really the saving grace of AMD. After messing with both Intel and AMD, and if it wasn't for this degradation BS, I'd honestly consider the Intel "K" chips from 12th, 13th and 14th gen to be the better chips today. Better than even the very latest AMD equivalents (yes, including the just-released and, it seems, underwhelming 9000 series). Far better memory compatibility (don't even get me started on the memory speed with 4 sticks of RAM on AM5), far better behaviour across different motherboard manufacturers' models (much easier to predict when building a new system), far less fuss, headaches (AM4's USB issues) or "fafo", once you've dealt with the outrageous (yes, they all are) stock BIOS settings. All the E-Cores that are ignorantly criticized, which are easily dealt with via Process Lasso, and then become a benefit for gaming (as I described above). The list goes on once you start handling both old and newer apps and games. Power hogs (yes, the latest ones are), space heaters (yes, the latest ones are), call it whatever you want... they've been, performance- and stability-wise, the better "total package", IMHO. Until this degradation crap started. Yes, the AM4 5800X3D and AM5 7800X3D are better gaming chips (though not always the best chip there), but they're a one-trick pony that is nothing special for anything other than gaming. And, sure, I expect the 9800X3D to be the next success. But the rest of this 9000 series? Honestly, it doesn't really look all that good to me.... AMD is "winning" huge market share this very day, not because they're really better, but because their competitor scored a ridiculous own goal... (nice one, Intel!!)
  24. I'll give credit to AMD for placing effort on something that has "gaming usage win" plastered all over it, which is the 3D V-Cache of the AM4 5X00X3D and AM5 7XX0X3D chips. A really phenomenal idea to do it on chips with a single CCD (the only really good ones for gaming, as the latency issues between CCDs are inherent to the design). I've had innumerable Intel and AMD processors during the last 25 years (overclocked most of them, I think) and I can surely tell there were times when Intel was really dragging their feet (again, credit to AMD for spicing things up and making them work). And Intel surely deserves the current public backlash over this degradation crap. But to call modern Intel processors "10 year old turds" is right on the verge of blatant ignorance. I had a 10700K before the current 12700K I own. That older one was no slouch, but the jump in performance with Alder Lake was huge. Depending on the application, ~60% in MT and ~40% in ST. Even more once tuned and with a mild overclock (another extra ~7% performance, for free). That was launched in late 2021 and was, in fact, an all-new design. People can trash talk all they want about E-Cores being useless for gaming but, for me, they're a godsend. I can put every little extra app running in the background on the E-Cores (Discord, HWINFO, peripherals software, etc., even the AV!) while I'm gaming with all 8 P-Cores unaffected by those programs, with the games set on them (Process Lasso FTW! - a rough sketch of the idea is below). Smooth as butter, stutter free, ZERO (AM)Dip in all my games.
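For anyone curious what Process Lasso is automating under the hood, here's a minimal sketch, assuming the psutil package and an i7 12700K-style layout where logical CPUs 0-15 are the P-Cores (8C/16T) and 16-19 are the four E-Cores (the process name is just an example):

```python
# Pin a background app to the E-Cores, keeping the P-Cores free for games.
# Assumes: pip install psutil, and an i7 12700K layout where logical CPUs
# 0-15 are the P-Cores (8C/16T) and 16-19 are the four E-Cores.
import psutil

E_CORES = list(range(16, 20))  # assumption: E-Core logical CPU indices

for proc in psutil.process_iter(["name"]):
    # "Discord.exe" is only an example of a background app to move over.
    if proc.info["name"] == "Discord.exe":
        proc.cpu_affinity(E_CORES)  # restrict the process to the E-Cores
        print(f"Pinned PID {proc.pid} to E-Cores {E_CORES}")
```

Process Lasso just makes rules like this persistent and reapplies them automatically; a script like the one above would have to be run again after each reboot.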
  25. Dude, sometimes I read these threads with such issues and I just feel like jumping to the other side of the screen: "LET ME MESS WITH THAT COMPUTER!" LOL It pains me to watch people paying for a new system, and then having it far slower than it should be. I know it's your PC and your call but... an i7 14700KF at 4.3GHz is not "ok" at all, IMO. I'm pretty certain at this point that you cannot adjust the CPU Core Ratio (B760 mobo....). But, IIRC, you've disabled the Intel Turbo Boost and Boost Technology rows in the BIOS, right? (as a workaround to lock the cores - which it did, but then castrated it). The black screens and freezes are due to the inability of the motherboard to feed the CPU (when boosting) - the B760 can't handle that i7 14700K going full tilt. That's why it now works "ok" with boosts disabled (less effort for the motherboard VRM), at the cost of a very(!) sedated 14700KF. How about getting those enabled again, and playing around with the voltage offset (to bring the CPU Core Voltage down)?