MAXsenna Posted October 28
54 minutes ago, Dragon1-1 said: Like Trondheim, which registered 30 degree heat last summer?
More like Spitsbergen during summer. You can dig, though. Saw a guy in Kentucky or somewhere who dug up a small field, a couple of metres down, and he's all set.
57 minutes ago, Dragon1-1 said: IMO, efficiency, both in GPUs and in software, is the way to go.
Totally agree!
Zebra1-1 Posted October 28 (edited)
21 hours ago, zerO_crash said: What is enough and isn't is very individual. I'm currently waiting for the Bigscreen Beyond 2e, and meanwhile am using the Meta Quest 3. To get anything close to the clarity of a 2D monitor, I have to render at 1.7x the native resolution. Even if I stayed at the native resolution, and kept many settings on medium or close to it (view range, clouds, shadows, mirror resolution, +++), ASW still drops me down to 45 fps. In particular, the Ka-50 BS3 cockpit is a monster on the fps (vanilla high-res textures), which brings the 5090 to its knees. We are very far away from "enough" tbh.
Yikes. Well, that makes me even happier about not upgrading to the 5090 from my 3090. I have a Reverb G2, which is supposed to be comparable to the Q3, and it too is nowhere close to the clarity of my monitor. But again, that would be the lenses. I'm waiting for the Steam Frame to come out and will probably upgrade to that headset. I don't even think the Pimax Crystal has monitor clarity, though, and that's currently the best headset on the market in terms of clarity for simmers. I just don't think the tech is there yet in terms of what you're looking for.
Like others have said, even with multi-GPU support, you're going to run into a CPU bottleneck. And then there's the DCS core, which is limited until we get Vulkan and that gets optimized... I just hope Vulkan doesn't take another year. I know there is a lot to it, the dev teams have been busy, and good things take time... but this has been planned for release since 2021ish. https://www.digitalcombatsimulator.com/en/newsletters/newsletter08012021-txeuwna3q8uxe2dqmg6xelf4w2tfy3ek.html
I know it will likely break RB modules (which I have 3 of), but that is inevitable at this point and may as well let it be.
Edited October 28 by Zebra1-1
zerO_crash (Author) Posted October 28 (edited)
4 hours ago, Zebra1-1 said: Yikes. Well, that makes me even happier about not upgrading to the 5090 from my 3090. …
Much could be written here, but remember that writing software is in general both a time- and planning-intensive endeavour. It occurs to me that those who claim "Currently the CPU is the main bottleneck (you run into both as bottlenecks, even with top-end hardware), and for GPU performance, ED should focus on improving their software altogether" have never really served in an administrative capacity. If ED focuses on CPU improvements alone, then by the time the GPU becomes the main problem, it will take years to start any meaningful work on it. Those developments have to progress in parallel, and there is no point in waiting too long with certain decisions either, as they drive further R&D and its schedule. Many projects that ED releases now are actually far along in terms of work begun, and even further along in terms of planning. I haven't even touched on what the competition is doing (I imagine that what is implemented in DCS is likely an easy transfer to MCS), and rest assured, this is a central subject in this niche.
As to the notion of where the improvements lie, ED knows that best. It is wrong for certain users to come and talk about improvements to the engine, and stipulate performance gains, when they've never opened and looked at the code themselves. At best, we know that with perfectly balanced settings you can utilize both the CPU and GPU to the maximum. How efficient those processes are, however, is only for a programmer to deduce. You cannot even compare DCS in this department to anything else on the market, because frankly, the level of simulation surpasses anything commercial by at least an order of magnitude, if not more. Whereas a helicopter blade will "MAYBE" be simulated individually in other "wannabe simulators", here we have individual sections of each blade simulated in real time as you fly. All those hidden-to-the-user features cost immense calculation power. Hence, with everything happening under the hood, DCS is really well optimized for what it is - we really don't give it enough credit for that.
While I am not a programmer myself, I'll tell you that the way it works is that a programmer, using his or her competence, gives the company a couple of options for solving an issue, and the decision is typically made at the administrative level, because this is a question of time invested, money spent, and return on the investment. Not going too deep on this, but if, for example, further optimizing DCS will cost 80% of the time for a 10% gain in performance, then what I present here, judging from the research paper, is the exact opposite - 10% of the time spent on implementation, and massive, massive potential gains in performance. All to the benefit of the end user. That's also why I'm focusing on the information regarding the gains and the positives/negatives of the tech, but not so much on the argument of this vs. optimization, because we simply don't know. Nobody outside the company knows.
One final quick note: while I don't remember who (an ED official) wrote/said it, or when (relatively recently) or where, it has been stated that the implementation of Vulkan will actually not yield as big a gain in performance as many hope. It's worth considering that statement.
EDIT: Corrections.
Edited October 28 by zerO_crash
Zebra1-1 Posted October 28
25 minutes ago, zerO_crash said: Much could be written here, but remember that writing software is in general both a time- and planning-intensive endeavour. …
Totally agreed. Yes, the initial implementation won't yield any performance improvements; that will happen over time.
What I was more implying was that even if you could run 2 or 3 or however many GPUs, you're going to hit bottlenecks elsewhere, whether in hardware or software. At some point it becomes pointless to keep throwing money at the issue. An RTX 6000 PRO or multiple GPUs would be that point, IMO.
Right now I'm just glad Samsung has joined the VR space, as it should help development and competition between headset manufacturers. I feel like this has been the biggest issue with VR since HP discontinued the Reverb. It's really just the Quest or an expensive Pimax now. Yes, the Index is still around (and a good headset), but I don't want to deal with base stations.
zerO_crash (Author) Posted October 28
3 minutes ago, Zebra1-1 said: Totally agreed. Yes, the initial implementation won't yield any performance improvements; that will happen over time. …
True as well.
quantum97 Posted October 29
I checked the Vulkan documentation, and it's technically possible: https://docs.vulkan.org/guide/latest/extensions/device_groups.html
There are two options:
Alternate Frame Rendering (AFR): one GPU computes all the odd video frames, the other renders the even frames (i.e. time division).
Split Frame Rendering (SFR): one GPU renders the top half of each video frame, the other does the bottom (i.e. plane division).
The catch is that with SLI, the driver automatically handled work distribution, so code written for a single GPU often worked without changes. Without SLI, you need to create separate command buffers for each GPU, manage data flow and synchronization, and combine the results (e.g. the left and right halves of the screen), which would require a lot of work.
Regarding the CPU bottleneck, we are not limited by technology but by implementation. There are already processors, such as AMD Epyc, that can, for example, compile a program in 5 seconds where the processors we run DCS on would take 5 minutes. The key is simply efficient use of the cores of these processors, and an Epyc can have up to 128 cores. Presumably, if you ran DCS on such an Epyc, it would use maybe 8 to 16 cores, but it would perform much worse, because these processors are designed to execute computations simultaneously across almost all cores, yet they have lower clock speeds, around 3 GHz.
Everything is doable; the question is just how much time and money you are willing to spend on it. For example, would you buy two RTX 5090s just for DCS? Or, from ED's perspective, is it worth implementing multi-GPU support if maybe only 0.5% of players would use it? The same goes for multithreading that uses 128 cores.
Nvidia RTX 3060, Intel® i3-12100F 3.30GHz, 4.30GHz Turbo, NF-A14x25G2 PWM, RAM 16GB DDR4, Gigabyte B660M DS3H
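For readers who want to see what the device-group path linked above actually looks like in code, here is a minimal sketch (not from the thread, and certainly not anything ED has shown): it assumes Vulkan 1.1, where device groups are core, enumerates the physical device groups the driver exposes, and creates one logical VkDevice spanning every GPU in a group. The choice of queue family 0 and of the first multi-GPU group are simplifying assumptions.

```cpp
// Sketch: enumerate linked GPU groups and create a logical device spanning one group.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    // Instance targeting Vulkan 1.1 (device groups are core from 1.1 onward).
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    // GPUs the driver has linked together show up in the same group.
    uint32_t groupCount = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);
    std::vector<VkPhysicalDeviceGroupProperties> groups(
        groupCount, {VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES});
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());

    // Pick the first group that contains more than one GPU (fall back to group 0).
    const VkPhysicalDeviceGroupProperties* chosen = groups.empty() ? nullptr : &groups[0];
    for (const auto& g : groups)
        if (g.physicalDeviceCount > 1) { chosen = &g; break; }
    if (!chosen) { vkDestroyInstance(instance, nullptr); return 1; }

    // Chain VkDeviceGroupDeviceCreateInfo so the logical device addresses all GPUs in the group.
    VkDeviceGroupDeviceCreateInfo groupInfo{VK_STRUCTURE_TYPE_DEVICE_GROUP_DEVICE_CREATE_INFO};
    groupInfo.physicalDeviceCount = chosen->physicalDeviceCount;
    groupInfo.pPhysicalDevices    = chosen->physicalDevices;

    float priority = 1.0f;
    VkDeviceQueueCreateInfo qci{VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO};
    qci.queueFamilyIndex = 0;   // assumption: family 0 supports graphics on this hardware
    qci.queueCount       = 1;
    qci.pQueuePriorities = &priority;

    VkDeviceCreateInfo dci{VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO};
    dci.pNext                = &groupInfo;
    dci.queueCreateInfoCount = 1;
    dci.pQueueCreateInfos    = &qci;

    VkDevice device;
    if (vkCreateDevice(chosen->physicalDevices[0], &dci, nullptr, &device) != VK_SUCCESS) return 1;

    std::printf("logical device spans %u GPU(s)\n", chosen->physicalDeviceCount);
    vkDestroyDevice(device, nullptr);
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

Creating the device is the easy part; everything SLI used to do implicitly starts from here, and it is then the application, not the driver, that decides how each frame is divided across the GPUs in the group.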
scommander2 Posted October 29
1 hour ago, quantum97 said: The catch is that with SLI, the driver automatically handled work distribution, so code written for a single GPU often worked without changes.
Yup! SLI bypassed the Vulkan device-group handling; now the tasks have to be handled by the application at the API layer.
Dell XPS 9730, i9-13900H, DDR5 64GB, Discrete GPU: NVIDIA GeForce RTX 4080, 1+2TB M.2 SSD | Thrustmaster Warthog HOTAS + TPR | TKIR5/TrackClipPro | Total Controls Multi-Function Button Box | Dell 32 4K UHD Gaming Monitor G3223Q | Win 11 Pro
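To make the "handled at the API layer" point concrete, here is a hedged sketch of the per-command-buffer device masks that SFR or AFR would require once a device group exists (again an illustration, not ED's code): the recordSplitFrame helper name, the two-GPU mask, and the 3840x2160 render target split are all hypothetical.

```cpp
// Sketch: the explicit split-frame work the SLI driver used to hide.
// Assumes a VkDevice created over a two-GPU device group, plus an existing
// command buffer, render pass, and framebuffer.
#include <vulkan/vulkan.h>

void recordSplitFrame(VkCommandBuffer cmd, VkRenderPass renderPass, VkFramebuffer framebuffer) {
    // Mark the command buffer as valid on both GPUs in the group (bits 0 and 1).
    VkDeviceGroupCommandBufferBeginInfo groupBegin{
        VK_STRUCTURE_TYPE_DEVICE_GROUP_COMMAND_BUFFER_BEGIN_INFO};
    groupBegin.deviceMask = 0b11;

    VkCommandBufferBeginInfo begin{VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO};
    begin.pNext = &groupBegin;
    vkBeginCommandBuffer(cmd, &begin);

    // Split-frame rendering: GPU 0 draws the top half, GPU 1 the bottom half.
    VkRect2D areas[2] = {
        {{0, 0},    {3840, 1080}},   // device 0
        {{0, 1080}, {3840, 1080}},   // device 1
    };
    VkDeviceGroupRenderPassBeginInfo groupRp{
        VK_STRUCTURE_TYPE_DEVICE_GROUP_RENDER_PASS_BEGIN_INFO};
    groupRp.deviceMask            = 0b11;
    groupRp.deviceRenderAreaCount = 2;
    groupRp.pDeviceRenderAreas    = areas;

    VkRenderPassBeginInfo rp{VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO};
    rp.pNext       = &groupRp;
    rp.renderPass  = renderPass;
    rp.framebuffer = framebuffer;
    rp.renderArea  = {{0, 0}, {3840, 2160}};
    vkCmdBeginRenderPass(cmd, &rp, VK_SUBPASS_CONTENTS_INLINE);

    // ... bind pipelines and draw as usual; both GPUs replay the same commands,
    //     each clipped to its own render area ...

    vkCmdEndRenderPass(cmd);

    // Alternate-frame rendering would instead direct whole frames to one GPU at a time,
    // e.g. vkCmdSetDeviceMask(cmd, frameIndex % 2 ? 0b10 : 0b01);

    vkEndCommandBuffer(cmd);
}
```

On top of this, the application still has to copy or present the finished halves from both GPUs and keep any shared resources synchronized, which is exactly the work quantum97 described above.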