The only thing that might not be immediately available is PPI, but the number of HMDs is low enough that it should be trivial to keep track of that as a database, or at least get a suitable ballpark figure. It's not like with monitors, where the combinations of size, resolution, and distance are pretty much infinitely variable, even though sizes only really ever come in a couple of standard flavours.
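If it helps to picture it, a minimal sketch of such a lookup (the headset names and numbers below are made-up placeholders, not real specs) could be as simple as:

```python
# Hypothetical sketch: per-HMD panel data with a ballpark fallback.
# Entries are illustrative placeholders, not verified specifications.
HMD_PPI = {
    "example_hmd_a": 615,
    "example_hmd_b": 820,
    "example_hmd_c": 1200,
}

BALLPARK_PPI = 800  # rough figure to fall back on for unknown headsets

def ppi_for(hmd_name: str) -> int:
    """Return the panel PPI for a known HMD, or a ballpark figure otherwise."""
    return HMD_PPI.get(hmd_name, BALLPARK_PPI)
```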
-
It's the other way around that is the problem, and where the normalisation needs to happen. It is unacceptable for a target to be shown at different sizes on different displays: the goal with the dots is to eliminate those cases, and to make sure the transition from “smallest possible” to “invisible” happens the same on all of them.
-
It's what they're trying to do, but because of that very reason, “a pixel” can no longer be just a pixel. It needs to be a normalised dot size that may be anywhere from 1px to somewhere in the region of 3×3 pixels ± aliasing. That's the fundamental cause behind the “huge black blobs” complaint (which was never actually any of those three things): what suddenly looked huge on one display was how it always looked on another. VR, of course, was something of a problem since its resolution was treated a bit naively, assuming that pixel sizes were roughly the same as on the equivalent pancake display and accidentally forgetting how much closer those pixels were to the user's eyeball. But ultimately, that's just another scaling factor to be dialled in to determine how large that normalised dot should be. It's really no different from any other display, other than that it has to operate on different assumptions about how large the optical dot will actually turn out to be.
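A minimal sketch of what that normalisation could look like, assuming a clamp between 1 px and a roughly 3-pixel-wide blob and treating the HMD optics as a plain scale factor (all of it illustrative, not anyone's actual implementation):

```python
import math

def normalised_dot_px(target_mrad: float, fov_deg: float, horizontal_px: int,
                      optical_scale: float = 1.0) -> float:
    """Size of the spotting dot in pixels for a given display/FoV combination.

    target_mrad   - angular size the dot is supposed to represent (milliradians)
    fov_deg       - current horizontal field of view
    horizontal_px - horizontal resolution of the display
    optical_scale - extra factor for HMDs, where the optics and eye distance
                    change how large a pixel actually appears (assumed)
    """
    fov_mrad = math.radians(fov_deg) * 1000.0
    px_per_mrad = horizontal_px / fov_mrad
    raw = target_mrad * px_per_mrad * optical_scale
    # never smaller than a single pixel, never larger than roughly a 3x3 blob
    return max(1.0, min(raw, 3.0))
```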
-
But this isn't real life. We are not using real eyes. We are using simulated eyes with capabilities that should show up equally on all hardware given the same simulated situation. The counter-point is basically that something as irrelevant and arbitrary as hardware should not dictate that difference. Something in the simulation should, if it's there at all; otherwise the simulation has fundamentally failed. And if it's there, then it would need to be selectable and enforceable in the client so that you, as the player, can choose to have good or bad vision, same as you can choose to fly a good or a bad plane. Or the mission-maker dictates one or the other to apply equally to everyone. And then we end up with exactly what we have now: something that tries to equalise the perception across all hardware (spotting dots), with the option to also have something that is completely player-customisable (labels). And of those, labels are a luxury, whereas spotting dots must exist and need to remove hardware differences to the greatest degree possible. Sure, ultimately there will be differences because of external factors, but the simulation should do its utmost to nullify that difference.
-
It's not really a question: what would happen is well-established and known. And no. That would contravene the entire purpose of having 3D models to begin with, much less having different levels of detail on them. The reason you don't remember this is that what you're describing never happened. It would be a pointless, backwards, and pretty darn stupid solution since, aside from being dots, the two are nothing alike, and it would in fact make every problem you're complaining about even worse. Not only would you be able to see them at much longer ranges, but their visibility would be entirely a matter of resolution, where higher resolutions make them more difficult to spot; they'd be trivially easy to manipulate to give yourself even more unfair advantages; and on top of that, one is a UI element rendered on top of the world, whereas the other is a world object and part of the simulation. Every bad thing imaginable at once.

If you believe dot labels are a better solution, just turn them on. Problem solved. Or, well… problem caused, really, since the outcome is the exact opposite of what you want (assuming you do not dearly desire having an unfair and unrealistic advantage).
-
And that's the whole issue: it does. And the distances are too long. And it creates P2W. It sounds like they're trying to improve how the system works in VR.
-
What's wrong is the assumption that they'd vanish from view “like they should”.
-
Our current one. I have only shown it about a bajillion times, but fine, I'll demonstrate it again. If I zoom in to a 20° FoV, the frustum covers 349.1 mils. On my display, that angle is rendered using 3440 pixels. Each pixel thus covers 0.1 mils. For the wingspan of an F-16 (just under 10 m) to be less than 0.1 mils and therefore only cover one pixel, that plane would have to be 100 km (54 nm) out. That is quite a lot more than “a few miles” and an order of magnitude beyond what most semi-qualified guesstimates suggest should be reasonable. It's even more than what the old spotting dot system allowed for, and that was already ridiculous.

And that's on my modest hardware. For others, it could be even farther. Or much, much shorter. All three cases are bad because none of them are the same. Even if, by pure accident, someone gets a proper and sane spotting distance limit on their system, that limit only applies to them: their accidental realism puts them at a severe disadvantage, and others throwing money at the problem gain a severe advantage.

No. Unified spotting dots show up for less than half of that. If dot labels bother you, redefine them or turn them off. The spotting dots also need to be dialled back (I have never stated otherwise), but they are massively better than relying on the unlimited range that pure trigonometry and the 3D model would inherently provide. We can also discuss what the proper size for the dot should be, but then we immediately have to figure out what should be the benchmark, the lowest common denominator, that can reliably and equitably be the target for all displays irrespective of their resolutions. It is quite obvious that the VR one is bonkers, but that doesn't really help us answer the question of which one isn't. Which one should be picked as the standard?

If you want to argue that we should do away with zoom, then I wish you the best of luck on that debate. If you want to argue that I can't see that single pixel anyway on my display, then sure. With my current setup you're actually correct. There's just one problem: I can move the display closer, or just lean in, and the problem immediately comes back. And if you want to argue that we should skip all that trigonometry noise and just put in a rendering cap, then fine. That would indeed work for limiting how far out you can see other planes. But realise that this means they will pop in at, oh, let's say 5 nm and be about 1 mil, or in my case 10 pixels wide. It won't exactly be subtle.

I'm talking about what would happen if we didn't use a dot system to put a hard cap on how far out a contact would be rendered on modern hardware. You've never seen it because it's not done that way, and for very good reasons. You didn't have to worry about it in the olden days because the hardware couldn't display it anyway, in many cases there was a hard cap on rendering contacts, and the same hardware limitations made sure you couldn't see the pop-in. Much.

Would you prefer it were three times that number? Because if we remove the cap and the dot, and only rely on the model, that's what you'll get. Well, not in VR, obviously, but that is sort of the problem. You wouldn't get it, but others would. You would have to rely on radar, AWACS and RWR, but others wouldn't.
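For anyone who wants to check the arithmetic, here is the same calculation spelled out (the numbers are for my 3440-pixel-wide display; swap in your own):

```python
import math

fov_deg = 20.0       # zoomed-in field of view
screen_px = 3440     # horizontal resolution of the display
wingspan_m = 9.96    # F-16 wingspan, just under 10 m

fov_mrad = math.radians(fov_deg) * 1000.0   # ~349.1 mils (milliradians)
mrad_per_px = fov_mrad / screen_px          # ~0.1015 mils per pixel

# distance at which the wingspan subtends exactly one pixel
one_px_range_m = wingspan_m / (mrad_per_px / 1000.0)
print(f"{one_px_range_m / 1000:.0f} km, about {one_px_range_m / 1852:.0f} nm")
# -> roughly 98 km / 53 nm; rounding to 0.1 mils per pixel gives the
#    100 km / 54 nm figure quoted above

# the "rendering cap" alternative: a contact popping in at 5 nm
pop_in_m = 5 * 1852
pop_in_mrad = wingspan_m / pop_in_m * 1000.0
print(f"{pop_in_mrad:.1f} mils, about {pop_in_mrad / mrad_per_px:.0f} px wide")
# -> about 1 mil, i.e. roughly ten pixels wide on this setup
```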
-
That's because the current spotting system ensures that it doesn't happen. If you just let the 3D model do its thing, like you suggest, there would be no limit to how far out aircraft could or would be rendered, so the range at which you'd be able to spot those planes would be completely disconnected from any sense or semblance of realism. It would only be your graphics setup, rather than any part of the simulation, that said that, actually, planes can be seen just fine from across the entire map. Or worse, you wouldn't be able to spot them at those ranges, but they would be able to spot you just fine. Essentially the problem we had with the old system, except your suggestion just changes the unequal dot for an unequal 3D model. The end result is the same, and we'd be right back where we started, where the spotting system has to be revamped.

This is a VR discussion, yes, but what is rendered in VR relates to and has to consider what is shown on monitors, or VR will end up with some serious disadvantages (or advantages, which is just as bad). And without dots, there is no way to control that point, equalise it across hardware, and make that single dot equitably visible.

We are already at the point where our displays can (and will) render details that the pilot should not be able to see. That needs to go. In addition, this ability differs with settings and hardware. That also needs to go. If realism is a core goal, what the hardware is able to generate is no longer a viable limit. We must introduce and enforce artificial ones. If equitability is a core goal, we must not let higher-resolution displays show targets before they can show up on lower-resolution ones. We must introduce and enforce artificial sizes. In both cases, we end up having to make sure a single dot is rendered at some given distance, in many cases long before it is actually 1px large on the screen, and we have to make sure that single dot fades into the background in a controllable way.

What better way is there to display a single dot with its size and colour tied to strict in-game parameters than to have a dot-based system that uses in-game parameters to dictate its colour and size rather than some arbitrary out-of-game factors? Are you serious with this question?!
-
Which is why you need to put something in its place, and why you cannot allow the LoD to run free. If you do, you get the situation we had before, where you saw aircraft at potentially infinite range, because that's what the rendering will do if you don't tell it otherwise. So, back to the same question: if we don't use dots to hide the transition from a hard cap on rendering distance to the point where the 3D model can start to be rendered at an appropriate size for the distance, what else should we use? Should the 3D model just pop in? Should we add klaxons to try to divert the player's attention from the obvious jarring effect that appears on the screen?
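To make the question concrete, here is a hypothetical sketch of the transition the dots are there to cover, with the range thresholds and the fade entirely made up for illustration:

```python
def contact_representation(range_m: float, hard_cap_m: float = 30_000.0,
                           model_range_m: float = 10_000.0) -> str:
    """Decide how a contact is drawn at a given range.

    hard_cap_m    - beyond this, nothing is rendered at all (assumed value)
    model_range_m - inside this, the 3D model / LoD chain takes over (assumed value)
    The dot bridges the gap in between, fading out towards the hard cap so
    there is no sudden pop-in.
    """
    if range_m > hard_cap_m:
        return "nothing"
    if range_m > model_range_m:
        # fade from fully visible at model_range_m to invisible at hard_cap_m
        fade = (hard_cap_m - range_m) / (hard_cap_m - model_range_m)
        return f"dot (alpha {fade:.2f})"
    return "3D model"
```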
-
Good thing that they're not doing that, then. In fact, the system you were so much in favour of was doing exactly that, and now you are upset that it's gone. But more to the point, the game should also not favour high-spec hardware. If you want P2W, go play War Thunder.
-
Have you tried reading the many reasons given and responding to them? How do you make sure the 3D model is only visible at realistic ranges? How do you avoid pop-in? How do you make sure it is shown equitably across different display types? This is exactly the problem that relying on the 3D model alone causes and that the dots solve.
-
They're also completely separate and largely irrelevant to the thing that dots are meant to represent in simulation. If you want to get rid of the dot system, you first need to come up with a better idea for how to represent what they're meant to show. Getting rid of dots without any replacement would have the exact opposite effect to what you're asking for. If you want the level of unrealism a dotless spotting system would create, go play Afterburner Climax.
-
That should worry you. The rest are talking about it; they just don't want to acknowledge the consequence of what they're asking for. Yup. It's the only way to equitably compensate for different resolutions. And it works. Success, until someone can figure out something better. Do you have any suggestions?
-
Good to hear. Yay for Bohr bugs.