Everything posted by Tippis

  1. How old are we speaking? Because the old method let you see things 50 miles away, and the old old method was only slightly better than that but not better than what we have now.
  2. There's a very good reason to have them larger, and it's the reason we have arrived at where we are. In fact, there are several. Some of this you'll know, obviously, but I like to be thorough. The first is to make them equitable and resolution-independent. If someone sees the target at a given size at a given distance, everyone should see the target at that size at that distance. If it shrinks to the point where it can no longer be represented as an entire single pixel, it needs to fade out and at some point, it just needs to go away, again for everyone. The problem here becomes one of lowest common denominator. Arguably, yes, at higher resolution “the smallest” thing should still be a single pixel that fades out over distance, but the problem is that this visibility needs to be replicated the same at the lower resolution. So we need to figure out a rock-solid way of representing a single high-res pixel as an equally visible low-res subpixel. On top of this, on the high-res end, as that single pixel approaches the limit of visibility, it also needs to go into subpixel territory, effectively fading it into the background… and then we still need to figure out a way to reliably translate that back to the low-res realm, where it is now a sub-subpixel, double-faded into the background. The results will start to diverge very quickly, and that's no good. It is probably possible to experimentally figure out a good parametrisation for a fading curve based on resolution and range and maybe throw in some of that lighting and glinting and variable target size as well. That would be really neat, but it also risks being completely wasted effort. Because on top of that, there's the problem that we now have high-res displays that can show finer detail than the eye can resolve.
It's a problem because this means that the single pixel on the high-res screen might actually represent a level of detail that our simulated pilot should not be able to see, and not just what the player can see. In the other big thread on the topic, I have some calculations to show how shockingly quickly and easily we get to that point on modern systems. So realistically, when we are at the very edge of visibility, the target may actually still have to be rendered as larger than a single pixel on some displays. If we go into the single-pixel or sub-pixel domain, we are already breaking the simulation and offering unrealistic visibility. At those resolutions, the target needs to start fading into the background when it's still an entire block of pixels on the screen. So having this neat function that translates a high-res pixel into a low-res subpixel doesn't matter — the high-res screen should still show the target as a block, and that block is very likely to be about single-pixel sized on the low-res display. The translation only needs to happen at a point where neither display should be showing anything anyway. We still need to figure out what size high-res dot is equivalent to a lonely low-res pixel, but probably not the other way around. And the less said about what happens when we throw variable zoom into the mix, the better. A second/third/howeveryouwanttocountthem problem is one of physical setup and screen distance. This one is trickier to generalise, but is essentially why we have these big VR blocks. It is the out-of-game version of where the display can show too fine detail: how large a single pixel or a blob of pixels is to me as a player will obviously vary not just with the resolution but with the distance from the screen. If I completely rearrange my desk and move some of these darned sim peripherals out of the way, I can pixel-peep single pixels all day while barely looking (indeed, that's part of what I do all day).
But if I move it back to give more room to all the toys, the exact same screen is now “retina”, to use that loathsome advertising term — the individual pixels are below what my eye can resolve. VR goes in the opposite direction: since the screen is right up against the eye, the pixels are inherently huge… well, relative to the screen resolution at least, and from a perceptual standpoint. To combat this you add more resolution and/or supersampling (i.e. virtually more resolution) and get into the whole subpixel detail discussion again. The question then becomes one of, should this be our reference point? That the goal is to make it 1px on a “normal” VR screen, and then we extrapolate the equivalent sizes in pancake mode from there? Or do we try to translate from some kind of normalised pancake target size into the VR realm and hope we get it right? Maybe that parametrisation function will come in handy after all…? What we certainly shouldn't do — and probably the reason why this thread exists — is to assume that a VR resolution is the same as a pancake resolution, and so we apply the same dot size on both. There's a pretty significant difference in having 2k vertical pixels an inch away from your eye, and having them 40 inches away. And of course, then we could get into a whole technical debate about the feasibility of also adjusting for monitor distance, but omg, the headache of trying to figure out, not just how far away the player sits (without cheating) but also adjusting for “should I sit at 90cm or 75? You know what, 84.3 seems about perfect… oh, and then I adjust my zoom curve”. And then (I've lost count now) there's the issue of what should happen at that very edge of visibility. And I'm talking about what the pilot is capable of seeing here, not rendering visibility. At some point, the target needs to start blending into the background. Much sooner than many would expect, but at the same time much later than some would suggest.
And let me just be clear here: I'm not saying that the current dots are good at this — targets are still far too visible, far too far out — but it's a massive improvement over the old system and for that reason alone any notion of going back is laughable. But as mentioned, this may still be at a point where the target is — or at least could be, given its size on the screen — still rendered as a full 3D model. How do you fade out a 3D model in a good way? We don't want pop-in (or pop-out, which is arguably even more distracting). We want something that can be faded out easily, reliably, and equitably on all displays at all resolutions. Something that covers up the transition from no perceivable colour difference from the background, to a clearly visible but unidentifiable blob, to a fully rendered 3D model, and which can (at least in some kind of fantastical dream world) be colour-matched to how that 3D model will appear when it takes over. And that something will be a dot – maybe even a dot that isn't just a single pixel. Preferably, yes, that dot should be coloured in a way to represent… well… colour, and aspect and size and lighting and a whole bunch of other things, but that's unfortunately a luxury compared to the basic functionality of covering up the range segment where the target becomes visible, but doesn't suddenly pop in because its “minimum visible size” actually turns out to be a whole bunch of pixels. …and then, of course, there's another transition that needs to happen that dots most definitely aren't a solution for, but that's a separate discussion. This is just to illustrate that they are indeed needed as a solution, and that they may indeed have to be much larger than a single pixel. Oh, and that we shouldn't naively think that we can treat VR as if it were pancake, because duh. But everyone agrees on that last point. Oh, don't worry, they're not aimed at you.
There is a very specific segment of wishy-washy posters who are adamantly against any improvement to the game, and especially to spotting, and especially especially when they realise that those spotting improvements would make them lose their artificial advantages. Once upon a time, they said that spotting should not be addressed because there was no problem — after all, they could see targets at 50nm and therefore, any complaint about the spotting system making targets hard to see was invalid and any and all improvements were unnecessary. Then they realised that others had a different advantage: that closer in, they could see the target much more easily because of how the spotting system interacted with lower resolution. Suddenly, the previously perfectly working system was broken beyond repair and needed to go. Then they realised that the new system didn't do away with the other guy's advantage — it just made it universal so everyone saw larger targets, and their old absurd-distance advantage had been removed. Now they shifted foot again and suddenly the old system had to make a return, or better yet, an even newer one had to be made that would return that advantage to them, but with unlimited range. All retention of their advantage and removal of the other guy's equal advantage in the name of “realism”, of course. There's a reason why I can only laugh at the utter lack of logical consistency and blatant desire to cheat emanating from this particular segment. You can actually articulate a rational argument, even if I don't fully agree with where it leads you.
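The resolution-versus-viewing-distance point above can be made concrete with a little trigonometry. This is an illustrative sketch only — the display sizes, distances, and the rough one-arcminute acuity figure are my assumptions, not anything from DCS:

```python
import math

# Illustrative sketch (not DCS code): the angle one pixel subtends
# at the eye, compared against a rough ~1 arcminute limit for human
# visual acuity. All concrete numbers here are assumptions.

ARCMIN = math.radians(1 / 60)  # ~0.00029 rad

def pixel_angle(screen_height_m, vertical_pixels, view_distance_m):
    """Angle subtended by a single pixel, in radians."""
    pixel_pitch = screen_height_m / vertical_pixels
    return 2 * math.atan(pixel_pitch / (2 * view_distance_m))

# A ~27" 4k panel (~0.34 m tall), leaned in close for pixel-peeping:
close = pixel_angle(0.34, 2160, 0.5)
# The same panel pushed back behind the sim peripherals:
far = pixel_angle(0.34, 2160, 0.9)

print(close > ARCMIN)  # True: individual pixels still resolvable
print(far > ARCMIN)    # False: the same screen is now "retina"
```

The same function shows why a per-eye 2160-pixel VR panel millimetres from the eye can't be treated like a 4k monitor: the per-pixel angle differs by orders of magnitude even though the pixel counts match.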
  3. The thing is, the compensation method is what makes it realistic. Same with the radar, except it's not there to compensate for anything. It's its own system. Granted, to hear some speak, you'd almost expect them to suddenly want their radars to be granted supernatural powers because they bought extra hardware for that purpose. And if there was a server option to force anything, it would have to be to force the new system on rather than let players fall back on the nonsensical and unrealistic old system, or worse yet, one where there is no restriction on target spotting range at all. Ultimately, we'll all end up with the same system anyway so providing a server option is a bit of a waste.
  4. Just witness the upset when the old dots went away, and with them the absurd ranges at which planes could be spotted previously, causing people to demand that the old dots were retained as an option. Not to mention those who are now clamouring for the removal of dots entirely, to allow for infinite range, limited only by video resolution and the game's max simulation distance. Coincidentally, no, that's not a problem with old games, and as such can't have convinced players of anything. Dots are a pretty new entry since up until recently, this kind of long distance spotting wasn't a problem — the hardware wasn't there to make it one. Now that we have to put hard caps on how far out the hardware renders things, and to create a method for letting contacts fade in, dots have arisen as pretty much the only viable solution to that.
  5. What is the scripting equivalent to SET COMMAND? As for the commands and devices themselves, as the manual states: It's a bit “red string on a cork board” to connect the definitions with each other, but it's all there if you want to read and write via the X trigger commands.
  6. Yeah, I think that's where the aggravation sets in: sure, higher res = more detail, but dots are dots. They should have no detail. They should just be seen or not (or some gradient inbetween where they fade into the background). And they should go away the same no matter what. The intuition for what better graphics should give you breaks down at that point. Once the transition to 3D model happens, we have a whole different ballgame. And there's also the tricky bit where settings may have to adjust where that happens so you don't get what happens now for many people, where you go from clear dot to indistinct 3D model mush. There is only one solution. Stop hardware progress. But seriously, yes, one of the big hurdles right now seems to be that so many solutions are showing up — especially in VR — to let the player dial in their perfect preferred balance between quality and performance, and almost all of those need some kind of special-case handling for spotting. I almost sympathise with ED for what a mess it must be to keep up and try to figure out what will be stable and long-lasting enough a solution to warrant trying to support.
  7. As in, adjusting the plane's cockpit settings on the fly? No, it's only barely possible to read those kinds of settings, and even then, only for “Player” units where they're exposed (so no “Client” in a multi-aircraft or MP setting). I think the Huey might be a bit too new, but some of the properly ancient modules support the “Prepare mission” functionality where you can set up a plane and then have all those settings saved in a file that gets included in the mission. But that's obviously a static once-at-mission-start kind of thing, even if/when it works. You might be able to circumvent it a bit with various SET COMMAND actions, and I don't think there's a scripting equivalent to that, but it is also quite limited in what it can do and how well it can be controlled. See here for some further breadcrumbs:
  8. Quite. And that works in the other direction as well: spotting shouldn't be gimped to give a handicap (in the golf sense) to people who have more expensive hardware in the name of deliberate PvP unfairness. P2W schemes may plague other games, but it has absolutely no place here. The goal should always be that you can spot targets equally well at equal distances no matter any external factors. The pilot's physical limitations should be simulated the same as the plane's limitations — imagine the furore if my hardware choice let my missiles fly farther, track better, and be more resistant to countermeasures. There are some naive assumptions with the new dots as far as size as a function of resolution goes, and those need to be tweaked, but on the whole, that old business of making in-world limits vary with out-of-game settings was just nonsense and had to go. Just because some people are losing their precious artificial advantages under this new scheme is no reason to get rid of it. Quite the opposite. It's exactly why it should be kept and further evolved. Yes. It's really these transition points and the expected visibility when it happens that need to be drilled into to get both the dot fade and the 3D model LoD and scaling parameters adjusted to where it all meshes together as seamlessly as possible. They really can't. They can see you as a 1px dot at lower distances. But others can also see them as a 1px dot at much longer distances. The problem is that those two distances differ, where higher resolution sees farther, and that the physical size of that single pixel also differs, where lower resolutions make it easier to see. The new dot system removes both of those issues. Rather than each side getting their own particular brand of nonsensical advantage, neither gets any. Of course, people who want to be able to see farther than they should will be against their own advantage being removed but no-one cares about that nonsensical opinion.
  9. That is no justification for deliberately making it worse. Good news: no-one is arguing for that (except you), and the end goal is to get away from exactly that situation. Just because you would benefit from making the game more unfair doesn't mean the game should be made more unfair. Quite the opposite. Especially since the benefits you imagine yourself reaping would be used against you so you'd actually suffer from getting what you wish… You know, like what happened with the old dots?
  10. I.e. perception. A cognitive process. No. The better the device, the higher the resolution, the more detailed the image. Period. If it affects spotting — i.e. the pilot's vision — then it is introducing a meta-game component into the game that shouldn't be there because you are letting hardware influence the simulated world. You might as well suggest that your weapon effectiveness should be FPS-dependent because higher FPS means it can show more fragments, and more fragments means more damage. But that is obviously nonsense. The damage should be the damage should be the damage — if your hardware is suddenly a factor, you have long since ceased to simulate the damage effect. You're letting an irrelevant and disconnected out-of-game variable affect the in-game world and how it is being simulated. Same with spotting. What you can see should be what you can see should be what you can see. There is a limit to how small a detail you can see. That limit should be simulated and graphics hardware should ultimately not be a factor. It might let you squeeze more pixels out of the same observable area, but the limit is the limit and it categorically must be the same for everyone. It can under no circumstances be allowed to change so that you can see things that others can't (or not see something others can) just because you fiddle with some settings, because then we have ceased to simulate the perception of the pilot — in this case their vision — and instead let a wholly irrelevant out-of-game variable change the in-game behaviour of the world. If you can come up with any other solution that removes hardware as a factor — and it must be removed as a factor for the simulation to be correct and be realistic — that isn't relying on a normalised dot, then I'd be glad to hear it. But no-one has ever been able to figure one out, so good luck. It is the way to cheat, yes.
That is why some are so in favour of it: because it lets them have the game show a different world to them than it does to others. That is no longer a simulation.
  11. No and no, in that order. No, it's not a failed idea because no, higher resolution is absolutely and categorically not supposed to give you better spotting. If it does, the spotting is fundamentally broken. Spotting should as far as possible be wholly hardware-agnostic and, as far as it ever can, yield the exact same result regardless of resolution and display system. If it doesn't do that, then it has truly failed. The only way to achieve this reliably in the edge-of-visibility realm is with a resolution-countering dot system. Now, as it happened, the previous dot system was so poorly implemented that it gave rise to that kind of failure: different contact sizes depending on hardware, even though its foundation was one where that could be solved. The new system attempts — and to some degree actually succeeds — to give everyone the same target size. It is just a bit… ehm… naive, let's call it, about what display systems are used and how that translates pixel size into visual size. So yes, higher resolution should indeed see larger dots. No, that is not a failed idea. In fact, it is the only viable idea. No, that doesn't mean dots are inherently correct, but correct dots are more correct than any other solution can even hope for. We are not there yet, so yes, more tweaking is needed, but no, that doesn't mean we have to discard the whole idea. Indeed, it's just a matter of tweaking. And providing constructive feedback for those tweaks. E.g. at what distance should the dots be fully faded into the background. At what distance should the dots be overwritten by 3D models. On what display type should those cross-over points translate into what size dot. And how do we solve the middle-distance problem where we should be able to identify aspect to a much higher degree? No, it's not just a solution, but a complementary one, and the best one available. The whole point and purpose of scaling is that, counter-intuitively, it makes it more realistic.
Cognition is not pure trigonometry, as it turns out, and to make it realistic, it is cognition that needs to be simulated.
  12. Yes? That's exactly what this dot system is trying to achieve: a minimum-size dot that fades into the background at extreme ranges, and a 3D model that takes over at closer ones. It tries to be resolution-agnostic so that the size is the same across resolutions, which is why higher resolutions get larger dots (in terms of pixel count). Whether it achieves this goal is a slightly different matter, but that's why the feedback thread exists one forum over, and the part where it transitions from dot to 3D model is particularly… iffy. But that's a matter of implementation, not of the idea being wrong. The problem is that there needs to be a third state that bridges the gap between the two, but a very vocal contingent of forum posters have argued ferociously against ED ever implementing the known and working solution to this, and have unfortunately convinced ED to go along. Until they change their minds, the gap is likely to remain. The reasons you list are exactly why having a dot system is a necessity. Well, except for the last one which is objectively false. We have dots because it's the only way to reliably set a universal cap on how far out you can see aircraft. Making it a matter of pure trigonometry means they will show up at absurd ranges depending on your settings, while also making it trivial to exploit by players online. Dots being dots is just a tautology. They're not meant to portray aspects, so their inability to do so is somewhere between wholly irrelevant and proof that they're doing exactly what they should. At the ranges where you just see “a contact”, that is all the information you are meant to get. If they could portray aspect, they would have failed at the one thing they're meant to do. The system is vastly less exploitable than the old one. I suppose this is why some want the old one to return… The only commonality between spotting dots and dot labels is that, by default (but not by necessity), they are dots. That is all.
There is no duplication between the two in how they work and what can be done with them. ED can solve this if given useful feedback rather than foot-stomping and wishes to go back to a previous more exploitable state. But accept that dots is the way forward because there is literally no other solution to the problem that's being addressed. Also accept that they aren't, and indeed can't be, the solution to a different set of spotting problems that ED could also solve if people didn't get all up in arms over the game being made more realistic. It is a WIP feature that you can turn off. There is no “supposed” about it, and a “true off” would be an even worse solution than going back to the old flawed one. If you want it to progress further, provide constructive feedback. “Do not want” is not constructive.
  13. The point is that the second one isn't a dot and isn't subject to resolution normalisation, whereas the first one is and thus isn't subject to LOD — just range attenuation. This is why there is no single solution to the spotting problem, and a multitude of methods are needed to cover the different range bands, each with its own normalisation and compensation schemes. But more than that, the point is that, bit by bit, we're moving in the right direction.
  14. Precisely. Hence the quotation marks. If anything, you'd have to go through a lot of effort to detect that your standard file system calls are handled through some kind of links, and then deliberately put something in to fail the call if this is the case. So it's not really a case of DCS supporting these kinds of neat redirects, but rather that if it didn't do so, it would pretty much be a case of ED wilfully sabotaging their own code. Short of that, it's just inherently supported by virtue of being a functional Windows program.
  15. To be fair, if DCS doesn't “support” directory links and — especially — junctions, it is just flat out incompatible with Windows and needs to be removed from the platform until that programming error is fixed.
  16. That would reintroduce the cheat that this change has gotten rid of where targets can be seen at utterly ridiculous distances. That cheat is gone for good reason and should never be allowed back in.
  17. It's pure speculation, but by all accounts, this seems to be the biggest problem and snag at the moment. It feels like the game naively just looks at the rendering resolution — possibly just the display height — and scales the dot based on that without any awareness of or compensation for what kind of display it is targeting. So a VR display with a 2k vertical resolution for each eye gets fed the same dot size as a 4k display since, hey, it also has a 2160 pixel height, so it makes sense to let it be the same size. Or something. …except that, of course, the 4k display will be viewed from a meter or so away, whereas the VR display distance is better measured in millimetres. Ultimately, there probably needs to be a method of differentiating the two, like you suggest, and to try to compensate for the up- and downscaling. The latter could conceivably be handled by just having a different scale factor for “near-eye displays”, the logic being that the up- or downscaling of the dot size is inherently counteracted by the down- and upscaling of the rendering resolution so it all comes out in the wash. As for the slider idea, it's probably a reasonable one irrespective of the exploitation potential, even though that's a legit concern. You can largely do that anyway by adjusting the resolution and, more practically, by just leaning forward. There's no way to get around that so why not offer it as an option for the much larger audience where you want to dial in the size for your particular physical setup, and if some numpties want to cheat in MP, there's always the screenshot, kick and ban functions. Numpties will numpty and will undoubtedly figure out a way to tell the game to feed them the VR scale factor in pancake mode or some such so restrictions won't help much anyway. The benefits would probably outweigh the disadvantages. Some of it could be mitigated by having some fairly tight restrictions on how much you can adjust it.
If it's just a ±1 pixel tweak, and you're mostly restricted by range as far as how early the dots show up, the worst excesses can probably be handled.
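To show what the suspected behaviour versus the suggested fix would look like, here's a hypothetical sketch. None of this is ED's actual code; the function names, the 1080-line baseline, and the near-eye factor are all made up for illustration:

```python
# Hypothetical sketch (not ED's implementation): a naive scaler that
# only looks at vertical resolution, versus one that also applies a
# separate factor for near-eye (VR) displays, as suggested above.

def naive_dot_size(vertical_pixels, base_px=1.0, base_res=1080):
    # Scales purely with resolution: a 2160px-per-eye VR headset and
    # a 4k monitor get the same dot, which is the suspected snag.
    return base_px * (vertical_pixels / base_res)

def display_aware_dot_size(vertical_pixels, near_eye=False,
                           base_px=1.0, base_res=1080,
                           near_eye_factor=0.5):
    # near_eye_factor is a made-up tuning constant standing in for
    # "pixels are perceptually huge right next to the eye".
    size = base_px * (vertical_pixels / base_res)
    return size * near_eye_factor if near_eye else size

print(naive_dot_size(2160))                         # 2.0 for both displays
print(display_aware_dot_size(2160, near_eye=True))  # 1.0 in VR
print(display_aware_dot_size(2160))                 # 2.0 in pancake
```

The single extra factor is the "comes out in the wash" logic: the dot scales up with resolution and back down for near-eye viewing, instead of treating the two display types identically.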
  18. If we're going to add transformation tools, let's also not forget the classics: align and distribute. Especially if we're imagining that the selection would also expand into the manipulation of flight plan waypoints and maybe even group nav points.
  19. Or go in the opposite direction to get a very similar result: Block 40 with all the extra systems and features that aren't available in later models, with the added benefit that it then also offers very different gameplay purposes since that's what those extra features were all meant to play into. Basically, yes — it's probably better to go big (in terms of difference) than to just add a little thing here and there that the player might not even notice because it's just one more thing in an already extensive drop-down list. Not that those extra bits and bobs wouldn't be interesting, but I can definitely see it being a hard sell, both store-wise and dev-wise. A wider-scope block pack going in a whole bunch of directions at once would be an enticing idea, though…
  20. So what? It's the tool that keeps the game alive — every improvement to it is worth any effort put into it. And it is not even remotely impossible regardless. All you'd want to do with that kind of mixed selection is move it around — i.e. change coordinates. So why would it require significant rewriting to implement what is essentially “find every conceivable thing with a coordinate within this box; add those coordinates to a stack; if dragged, apply translation equally to all items, possibly checking for illegal placements when the drag is completed”? It's shocking that something as simple and obvious as that isn't the default behaviour as it is, really. Especially when all the parts are already there. And that's if you do it the complex way with a clear intended use-case and not just by iterating through potential candidates and checking “can this be added to the list” as if it were automated shift-clicking. So no, the point is that the notion that something as trivial as that is somehow a monumental feat of programming is… very questionable. I have to wonder what the source for this claim is. Still utterly trivial. Everything we'd want to select has coordinates. That's all that really matters with a wide selection. The game already handles the issue of objects not being exactly the same by… well… simply not dealing with it. It just shows whatever is selected last. The game can already do multi-move for specific selections. The game can already mix and match to a high degree. It's a matter of list management — adding and removing and iterating through referenced items in that list. As horrible as Lua is, that's actually something it can do pretty well. It most certainly could be, and even if you aimed higher, it would still be an “intern's first day” kind of task.
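The quoted pseudocode above really is about as complicated as it gets. A minimal sketch in Python (the Mission Editor itself is Lua, but the logic is identical; object names and fields here are invented for illustration):

```python
# Minimal sketch of the box-select-and-drag behaviour described
# above: collect everything with a coordinate inside the box, then
# apply one translation equally to all of it.

def box_select(objects, x0, y0, x1, y1):
    """Return every object whose (x, y) lies within the box."""
    return [o for o in objects
            if x0 <= o["x"] <= x1 and y0 <= o["y"] <= y1]

def translate(selection, dx, dy):
    """Apply the same drag delta to every selected item."""
    for o in selection:
        o["x"] += dx
        o["y"] += dy

# Invented example items; a real editor would mix groups, waypoints,
# trigger zones, etc. -- all that matters is that they have coords.
things = [
    {"name": "group",        "x": 10, "y": 10},
    {"name": "waypoint",     "x": 20, "y": 15},
    {"name": "trigger_zone", "x": 90, "y": 90},  # outside the box
]
picked = box_select(things, 0, 0, 50, 50)
translate(picked, 5, -5)
print([t["name"] for t in picked])  # ['group', 'waypoint']
```

An illegal-placement check would just be one more pass over the same list when the drag completes, which is why the "monumental feat" framing is hard to credit.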
  21. Not only is it not impossible — it's demonstrably possible by virtue of the fact that you already can do group selection, just in the most cumbersome way imaginable.
  22. I'd argue for both. Sometimes you want timing or challenges that require it to go one way or another, and letting the player choose removes some of that. Or just, the mission has a kajillion aircraft and you just want to have a switch to flip them all over to the preferred start. Other times, it should definitely be more of a player choice, and again you might want the default to be one start type if the player has no preference — but you still want it to be easy to switch universally — and they still have the option to override that. So +2 for a combo.
  23. Without having seen the mission, what it sounds like is that you're bumping up against the general behaviour of the Follow task, and possibly some misconfigured enroute tasking. Going back to the next waypoint when a “Follow” is completed is as expected, so the tricky part is figuring out why they consider WP1 to be the next waypoint. Especially if you've already managed to make them swap to two different intervening tasks. There's also the curious quirks of its stop conditions, where there is a built-in “Last Wpt”, but which only really works reliably when you use it to make AI follow or escort another AI. There's also the obvious difference between Follow and Escort, mainly in terms of what they can suddenly decide to do on their own. So… are you using Follow or Escort? Are you using this task as a waypoint task or as a triggered task? Do you have any stop conditions for the task or are you relying on Last Wpt? This is just off the top of my head, and I'd obviously have to experiment, but the way I'd set up what you're describing (at least if the escort is AI and they're meant to protect the player) is: WP1 – orbit over airfield, Flag 1A as a stop condition (and maybe some sensible duration). WP2 – orbit over the target area, maybe some search and engage in zone to keep the skies clear, but that might make things too messy. Flag 2A as a stop condition (and again, maybe some sensible duration) for everything going on there. WP3 – set up for landing (reset stuff like ROE, reaction to threat, jettison and AB restrictions etc). WP4 – land. As triggered actions: Escort player, Flag 1B as stop condition. Go to WP2. Escort player, Flag 2B as stop condition. Go to WP3. A bunch of fence in actions (ROE, reaction to threat etc). A couple of trigger zones: A small one around the starting airfield -> when the player leaves, trigger Flag 1A; push the first escort task and all the fence-in stuff. Target area or IP -> when the player enters, trigger Flag 1B; push go to WP2.
Orbit area for the escort -> when the player enters (or when enough ground targets are dead), trigger Flag 2A; push the second escort task. A bigger zone around the landing field -> when the player enters, trigger Flag 2B; push go to WP3. The logic that should ensue for the AI is: Orbit the airfield until the player takes off. The orbit task ends; the escort task gets pushed to the top of an empty queue. The player enters the target area. The escort task ends, “go to WP2” (and orbit there) gets pushed to the top of an empty queue. The player finishes off the targets and/or rejoins the escort. The WP2 orbit task ends; the second escort task gets pushed on top of an empty queue. The player gets close enough to the landing field. The second escort task ends; WP3 gets pushed to the top of an empty queue. WP3 tasks get done; proceeds to WP4. Land at WP4. What I'm trying to achieve here is that the task queue is almost always empty so even if something breaks, the only thing for the AI to do is to proceed to the next waypoint. Worst case, the player is too darn slow and the escort leaves without them. Keep the waypoint tasks as lean and clean as possible; since the nature of the Escort task is that they activate once, and then strictly go to the next in the queue, you have to equally strictly control that queue and ensure that there are no tasks left to be done way back where the escorting started — hence all the stop conditions. …and if nothing else works, Explode Unit (size 1) should do the trick, for certain values of “land”
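The "keep the queue almost always empty" principle can be shown as a toy model. To be clear, this is not how DCS's tasking engine is actually implemented — it's just an illustration of why an empty queue is the safe failure mode:

```python
# Toy model of the queue discipline described above: when a stop
# condition (flag) ends the current task, the AI either runs
# whatever a trigger pushed to the top of the queue, or, if the
# queue is empty, safely falls through to the next waypoint.

def next_action(queue):
    """What the AI does when the current task's stop condition fires."""
    return queue.pop(0) if queue else "proceed to next waypoint"

queue = []
queue.insert(0, "escort player (stop: Flag 1B)")  # pushed by a trigger zone
print(next_action(queue))  # runs the pushed escort task
print(next_action(queue))  # nothing queued -> safe fallback
```

If stale tasks were left sitting in the queue, that second call would resume something from way back where the escorting started instead of falling through — which is exactly the WP1-revisiting behaviour being debugged.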
  24. I'm not entirely convinced we “know” that. What your photo is showing isn't an Mk84 explosion — it's the aftermath of an Mk84 exploding in a built-up area, creating and throwing dirt, debris, and dust into the air but most likely also smoke rising from the house collapsing and/or being on fire. The bomb isn't doing all the work, and even if it's a huge contributing factor, it wouldn't look anywhere near that big without the buildings providing all the material to create that cloud. That doesn't mean that your sense of lack-lustre explosion effects is wrong, just that it might not be the bomb that is to blame. Rather, it's that targets don't really blow up all that spectacularly in the game, with a rare few exceptions. You might get a small fire effect and a friendly block vape party, but buildings don't generally contribute to the spectacle as much as they probably should. Compare what happens if you drop even a tiny firecracker on a “warehouse” building, and if you drop a MiG-21 nuke on some generic houses. One creates a spectacle because the building generates its own secondary-explosion effects; the other just creates instant ruins with little to no extra decoration. Basically, yes, DCS explosions could probably use a bit more oomph, but it's quite likely that a better way of going about it is by addressing the object destruction part of the equation rather than the actual bomb effect. Same end result — different ways of going about it.
  25. Oh yes, please! That and/or some kind of “interactable highlight” to indicate that you're mousing over some kind of control that might be hidden from the camera but which is actually fully accessible. That one would also help with some of the warbirds where you have controls hidden behind panels that should be easy to find… by touch. Which we don't have. The mouse cursor changing is a bit of a help, but that could be expanded so that the camera isn't the deciding factor in what can and can't be reached.