Everything posted by Aapje
-
He found an American who would receive them, and then sent them on to Georgia. As for the other comment that only people in Russia are at risk: we've had assassinations in London, Berlin, and Vienna, so if they care enough, you are not safe in the West.
-
AMD Ryzen 9 7900X3D vs AMD Ryzen 7 9800X3D
Aapje replied to MrReynolds's topic in PC Hardware and Related Software
You need to keep in mind that the AMD chips with more than 8 cores have two compute chiplets, with high latency between them. Because of that latency, you generally want to run a game on just one die. This is even more true with the 3D cache, which is only on one of the two chiplets: you then want to run the game only on the chiplet with X3D.

The 9800X3D has one 8-core chiplet with X3D. The 7900X3D has a 6-core chiplet with X3D, plus another 6-core chiplet that is generally not much use in gaming. So from a gaming perspective, you can argue that it actually has fewer cores. Furthermore, the 9800X3D can boost higher because AMD moved the X3D cache below the compute die, so the cores suffer less from the heat and can thus clock higher.

So my answer is the 9800X3D, if it's just for gaming.
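If you want to pin a game to the X3D chiplet yourself, here is a minimal Python sketch using psutil. The CPU range and the process name are assumptions: on a 7900X3D the X3D CCD is usually logical CPUs 0-11 (6 physical cores with SMT), but verify the layout on your own system first.

```python
import psutil

# Assumption: the X3D CCD is CCD0, i.e. logical CPUs 0-11 on a 7900X3D
# (6 physical cores with SMT). Check your own topology before using this.
X3D_LOGICAL_CPUS = list(range(12))

def pin_to_x3d(process_name: str) -> None:
    """Restrict every matching process to the X3D chiplet's logical CPUs."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            proc.cpu_affinity(X3D_LOGICAL_CPUS)
            print(f"Pinned PID {proc.pid} to CPUs {X3D_LOGICAL_CPUS}")

pin_to_x3d("DCS.exe")  # hypothetical target process name
```
-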
Indie games are probably not any harder to produce, since devs can now use UE and Unity to keep up with the increased demands. A single guy made VTOL VR, a solid arcade flight sim. And there is plenty of passion, because there are a lot of people who make games with very little chance of, or expectation of, earning a good income from them. Even one of the Game of the Year candidates, Balatro, was made by a developer who made it purely for himself and had no expectation that others would like it.
-
Except the haves will be in trouble too, because games are very expensive to make. Unless games switch to a whale model, where they expect the few haves to pay way more, the game companies will still have to appeal to the have-nots, and the games will be held back by whatever those people can afford. Arguably the games industry is already using temporal upscaling and frame generation as a crutch to keep people with lower-end hardware buying the newer games.
-
Often these sites reserve the card as soon as someone enters the checkout process, so all that is needed is for 12 people to start the procedure, or however many cards were available online, for the stock to show as gone. This also means that the cards can come back in stock if people back out of the transaction.
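Here is a hypothetical sketch of that reservation pattern; the class, the names, and the timeout are all assumptions, just to illustrate how started checkouts empty the visible stock and how backing out returns a card:

```python
import time

RESERVATION_TTL = 600  # assumed 10-minute hold per started checkout

class Stock:
    def __init__(self, units: int):
        self.available = units
        self.reservations: dict[str, float] = {}  # session id -> expiry time

    def start_checkout(self, session_id: str) -> bool:
        self._expire_stale()
        if self.available == 0:
            return False  # shows "out of stock" even if nothing was sold yet
        self.available -= 1
        self.reservations[session_id] = time.time() + RESERVATION_TTL
        return True

    def back_out(self, session_id: str) -> None:
        if self.reservations.pop(session_id, None) is not None:
            self.available += 1  # the card comes back in stock

    def _expire_stale(self) -> None:
        now = time.time()
        for sid, expiry in list(self.reservations.items()):
            if expiry < now:
                self.back_out(sid)

stock = Stock(12)
print(all(stock.start_checkout(f"user{i}") for i in range(12)))  # True
print(stock.start_checkout("user13"))  # False: all 12 reserved, none sold
stock.back_out("user3")
print(stock.start_checkout("user13"))  # True: one came back in stock
```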
-
I've addressed the 3080. The GA102 chip is smaller than GB202 and was made on a far cheaper process. There is also no reason for them to sell the GB202 for so 'little' when they can sell it for much more.

The 2080 Ti was made on the last TSMC process (12 nm) before prices started going up enormously. Nvidia then dodged the cost of TSMC 7 nm by using Samsung for the 30-series, but Samsung's process was too poor, so for the 40-series they went back to TSMC, using 5 nm. With those prices, they are never going to give you 600-800 mm² of die for $1k. You can complain about them being greedy, but they are obviously not going to cut their own profits rather than pass those increased TSMC prices on to us.
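To make the economics concrete, here's a back-of-the-envelope calculation using the standard dies-per-wafer approximation. The wafer prices are rough public ballpark estimates, not official figures, and yield losses are ignored:

```python
import math

# Assumed wafer prices in USD (ballpark figures reported in the press).
WAFER_PRICES = {"TSMC 12 nm": 4_000, "TSMC 5 nm": 16_000}
WAFER_DIAMETER_MM = 300

def dies_per_wafer(die_area_mm2: float) -> int:
    """Standard approximation: wafer area / die area, minus edge losses."""
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

for node, wafer_price in WAFER_PRICES.items():
    for area in (628, 750):  # ~GA102-class and ~GB202-class die sizes
        n = dies_per_wafer(area)
        print(f"{node}: {area} mm2 -> {n} dies/wafer, "
              f"~${wafer_price / n:.0f} per die at perfect yield")
```

Even ignoring yield, the same big die costs roughly four times as much on 5 nm as it did on 12 nm.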
-
The situation is not the same, because back then they announced a nearly full AD103-based 4080 and also a very much cut down 4080 based on the same chip. That made absolutely no sense, as a cut down version of a chip is normally sold as a lower tier, which in this case would be the 4070 Ti, and that is of course what it became after Nvidia unannounced that cut down 4080.

But the 5080 does seem to be almost the full chip. The AD103 and GB203 chips are almost identical in die size, and the CUDA core count is very similar, at 10,240 for the 4080 Super and 10,752 for the 5080. Given that the process node is almost the same, I don't see how the GB203 chip could have a significantly larger number of cores that are disabled. So I'm pretty sure that Nvidia doesn't have the option to unlock a whole lot more power from the GB203. The chip simply is not all that much faster than the generation before it.

The only thing that Nvidia could have done, once they finished the design of the chips, is to put a cut down GB202 chip in the 5080, just like the 3080 had a cut down GA102. However, they are never ever going to do that, for a bunch of reasons, like the GB202 being way more expensive to produce than the GA102 was. I don't even see them selling a cut down GB202 as a 5080 Ti, because they can just sell the partially broken chips as 5090D cards to China.

I think that we simply have to accept that this is it, for now. I do wonder what Nvidia is going to do if the 5080 sells extremely poorly, which I think it will. I don't think that they even have the option to release a Super with significantly more performance, similar to how the 4080 Super was almost identical in speed to the 4080. But I also doubt that they will be willing to do a price cut for a 5080 Super, after Jensen already had to eat crow by doing a significant price cut for the 4080. Perhaps they'll just accept really poor sales for this tier.
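A quick sanity check of those core counts, using the public spec numbers quoted above:

```python
# Public CUDA core counts quoted above.
ad103_4080_super = 10_240  # 4080 Super, nearly full AD103
gb203_5080 = 10_752        # 5080, nearly full GB203
print(f"GB203 has {gb203_5080 / ad103_4080_super - 1:.1%} more cores")  # 5.0%
```

A 5% core-count bump on a nearly identical node leaves little room for a meaningfully faster Super refresh from the same silicon.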
-
Time to Roll Back to playable version?
Aapje replied to AvgWhiteGuy's topic in Game Performance Bugs
@Panzerlang lives in Japan, so he is probably your best bet when it comes to advice on your local PC market. But your price indication seems very high. Here I specced a high-end system without GPU for $1400: https://pcpartpicker.com/list/Fdpdxg Even with local taxes and a $1000 4080/5080, I'm still a long way off from $4200. And if you just want to run this game on a flat screen at 1440p, then the system I specced is overkill, and so is a 4080/5080; you can save a decent chunk of cash by going for a cheaper CPU and GPU (both of which are upgradable, and if you pick a smart moment to do so, you can probably gain a large boost in the future for a relatively low price).
-
It's pretty clear that DeepSeek has lied about its training costs, and there has been a lot of misinformation about how easy it is to run the model. If you want to run the R1 model yourself, then the 32 GB of the 5090 is not going to cut it if you want to get any serious speed out of it. There are some people who have made a cut-down version of R1 that runs on 20 GB cards, but even then it runs way slower than on the commercial H100 systems. Of course, it may be true that even at these much slower speeds, very many people will want to run it on desktop GPUs, but I have my doubts.
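For a rough sense of the scale, here is a weights-only VRAM estimate. The overhead factor is an assumption; the 671B parameter count is DeepSeek's published figure for R1:

```python
def vram_gb(params_billion: float, bits_per_weight: float,
            overhead: float = 1.2) -> float:
    """Weights-only estimate: parameters * bytes per weight, plus an assumed
    20% overhead for KV cache and runtime buffers."""
    return params_billion * (bits_per_weight / 8) * overhead

for bits in (16, 8, 4, 1.58):
    print(f"R1 (671B) at {bits}-bit: ~{vram_gb(671, bits):,.0f} GB")
# Even an aggressive ~1.58-bit quantization needs roughly 160 GB,
# far beyond the 5090's 32 GB.
```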
-
Do keep in mind that there are already reports of PCIe 5 issues with the FE models. Their design effectively builds a riser cable into the card, which is known to cause issues, so this may be another case of Nvidia rushing new tech without testing it properly (like with the power connectors). Then again, the performance loss of switching to PCIe 4 is minimal. But of course you are free to do what you want.
-
Unfortunately, I found only two sources that did somewhat serious VR reviews, Maraksot78 and Babeltechreviews, but the guy who did those reviews on the latter site sold it and retired. So no more VR reviews over there, and they didn't do DCS reviews anyway. So I think that the best bet is to wait for the reviews on this forum. Since you have a 4080 Super, I would suggest not rushing into things. Even in the worst case, where you cannot get your hands on one (or can only get one at an absurd price), you still have a very strong card.
-
You claimed that there is no benefit to having a faster GPU once you hit the max frame rate of the monitor. I have shown that this is not necessarily true, and that there can be a benefit to having a faster GPU in that situation. Moving the goalposts to a different claim, one that I never addressed in the first place, is pointless.
-
You were the one who claimed that your suggestions would work, without ever including the caveats that you are now suddenly introducing. If you had added those in the first place, I would not have objected the way I did, and it is unfair of you to act as if you had argued those things all along.
-
If you compare the gains from the previous generation to those of this gen, then you should probably skip even more than one generation to get the same gains that you used to get by skipping one gen. I think that in hindsight, this will be seen as one of the worst generations to upgrade to, if not the worst (other than the 30-series cards at mining-boom prices).

There is a decent chance that next year's refresh will be much better, especially if they put 3 GB memory modules on the cards. Even if they don't, it is hard to imagine that the 5000 Super cards will have a smaller price/performance gain than these cards. And the 3 GB modules will surely be on the 60-series, and that gen should use a truly new node, so its performance improvement is surely going to be better.

The only redeeming feature of this generation that I see is the small price drop for the 5070 and 5070 Ti. Otherwise there is very little to recommend it:
- The price/performance improvement is tiny
- No VRAM increase other than on the 5090
- The new multi-frame generation is the weakest of the headline features they've released in years past
- A tiny efficiency improvement
-
Seriously? You talked about this as well, so apparently you weren't concerned then about it being off-topic. Yet once you are shown to be wrong, you suddenly discover that it is off-topic and you sadly can't respond to my comment. How convenient. Also, you are again making a false claim of being the victim of a personal attack, even though I merely said that you have an inaccurate understanding, which is not a personal attack at all. If one is not allowed to say that someone else is wrong, then a friendly argument becomes impossible. You might want to read up on what a personal attack actually is; it does not include disagreement with someone's beliefs: https://en.wikipedia.org/wiki/Ad_hominem
-
This statement betrays that you have a very simplistic mental model of the situation, and do not in fact 'understand plenty.' I'll give you one example to show why you are wrong. The key is to understand that thinking in FPS is deceptive; the actual time to render a frame is what matters.

If the GPU takes the entire 1/120th of a second to render a frame, and can thus barely produce 120 FPS, then the world state (which includes, but is not limited to, your controller input) that the rendered image is based on is going to be at least 1/120th of a second old. But if the GPU takes 1/240th of a second to render a frame, then it can render two frames for every frame that is shown on the screen. Obviously both cannot be shown to the user, since the monitor is not fast enough for that. So the older frame is dropped and the newer frame is shown. Now the shown frame is based on world state that is at least 1/240th of a second old, which means that you will have lower latency (up to twice as low, although that also depends on the CPU) compared to the situation with the slower GPU.

Of course, this is rather wasteful, since you are rendering frames that are never displayed. This is what Nvidia Reflex is made for: it delays the moment that the CPU processes the latest world state and asks the GPU to render a frame until the last possible moment. So if it only takes the GPU 1/240th of a second to render a frame, but you have the Nvidia Reflex frame-rate limit at 120 FPS, then the game is asked to render the world state at such a moment that there is only 1/240th of a second left before the monitor refreshes.

Of course, DCS does not support Reflex, which requires the game to cooperate, but the more wasteful method does work in DCS to lower the delay between what happens in the world state and what is shown on the screen. So there is a real benefit, although that doesn't mean that it is necessarily worth the downsides.
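To make the Reflex-style idea concrete, here is a minimal Python sketch. The refresh rate, render time, and loop structure are illustrative assumptions, not how Reflex is actually implemented:

```python
import time

REFRESH_INTERVAL = 1 / 120  # assumed 120 Hz monitor
RENDER_TIME = 1 / 240       # assumed GPU render time per frame

def frame_loop(num_frames: int = 3) -> None:
    """Delay sampling the world state until just before each refresh,
    so the displayed frame is based on the freshest possible input."""
    next_refresh = time.perf_counter() + REFRESH_INTERVAL
    for _ in range(num_frames):
        # Wait until only RENDER_TIME is left before the refresh deadline.
        sleep_for = (next_refresh - time.perf_counter()) - RENDER_TIME
        if sleep_for > 0:
            time.sleep(sleep_for)
        sampled_at = time.perf_counter()  # world state (input) sampled here
        time.sleep(RENDER_TIME)           # stand-in for the GPU rendering
        latency = time.perf_counter() - sampled_at
        print(f"input-to-display latency ~ {latency * 1000:.1f} ms")
        next_refresh += REFRESH_INTERVAL

frame_loop()
```

With the fast GPU the shown frame is only ~4 ms old instead of ~8 ms; the brute-force version without a limiter gets a similar result by rendering twice as many frames and throwing half of them away.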
-
You are ignoring his FPS lows & averages and just looking at his FPS highs, which is what the capping is based on. Also, there is actually a benefit to rendering more frames than the monitor can display: https://www.quora.com/Why-does-higher-FPS-than-my-monitor-can-support-make-games-smoother Not saying that it is necessarily worth the money, but you might want to learn a bit more about how the rendering pipelines and such work, so you understand that your simplistic model of 'card FPS should match monitor FPS' is not necessarily correct.
-
Yes, and that supports my point. You act as if it is trivial to just create good software and make lots of money, while the reality is that it is an art, and just because a company can make hardware doesn't mean that it can make great software. How much money did Sony lose on Concord again? And look at Pico to see what happens if your hardware doesn't sell enough: they have a store with very few games. Again, your entire narrative is based on assumptions and wishful thinking, with zero recognition that your assumptions can be wrong and that you keep ignoring all the counterevidence.
-
Yeah, this is the way if you want to game for cheap. You can use older hardware, and buy games with huge discounts. And in many genres, the graphics improvements are not as much as they used to be, so you don't necessarily miss all that much. However, there are a bunch of caveats, like VR being hard to do on older hardware, some multiplayer titles having dead servers if they are old, etc.
-
And here's a picture of an ED developer collecting information:
-
That is actually precisely what Meta is doing with Meta Horizon. Let me check how much they made with it... Billions! That's wonderful. @Dragon1-1 is a genius! Wait, what do you say, billions in losses? Nevermind then. PS. The actual money is in running a store and then skimming off 30% or so of each sale. However, that requires a popular product in the first place, not just in sales, but also in use. Even with a fairly high quality product, Meta has a pretty big retention problem, where a lot of headsets get little use.