Tippis

Members
  • Posts: 2528
  • Days Won: 9
  • 1 Follower

About Tippis
  • Birthday: 01/01/1870

Recent Profile Visitors
  • 10917 profile views
  1. In a flash of… lesser sanity, I decided to set up a simple RNG collection thing, Random.miz, just to see how it behaves when you ask for a lot of random numbers. Basically, it sets five flags at mission start and then reads back their values 2 seconds in. And I think that, while on the whole it's probably OK, there's still some oddness going on that may be related to a kind of sample bias more than anything. Usually, for mission design purposes, you use maybe a handful of random values and their spread is low, because it's a lot of effort to design umpteen different outcomes for each test. The thing is, the DCS random generator seems to love to repeat numbers in small series. That's not technically wrong from a statistical standpoint, but it's perhaps not what the designer actually wants.

     I ran two tests: one where it rolled three numbers in a row, and one where it rolled five (this is the version linked above). Then I ran those until I had 100 rolls in total, and looking at that total, it looks… entirely sane. But looking at each set of rolls, it gives results that create annoying outcomes in the mission. (In spoiler tags because they're long lists of numbers and would look horrid if just shown as continuous text.) The average result for the first test was 2.88 with a standard deviation of 1.53. The average result for the second test was 2.98 with a standard deviation of 1.50. Pretty darn close to the expected outcome for the entire population. But then we look at what each set of rolls produces: a whole bunch of sets with low deviation, including two sets in the 3-roll variant where the same number was repeated for all rolls. Again, not technically wrong, but annoying when you're trying to create a feeling of variation and randomness.

     Some of this could probably be fixed by scripting up what is more often the desired outcome: a specific set of numbers, but in random order, or at least some kind of weighted function where previous results are less likely to reappear. That is, not random at all in the literal sense, but more appealing to the intuitive sense of what a random pattern should look like. A sketch of the shuffle approach is below.
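A minimal Lua sketch of that shuffle idea, assuming a stock mission-scripting environment; the outcome pool and the flag numbers are placeholders, not anything from the mission above. It deals a fixed set of outcomes out in random order, so each outcome appears exactly once per pass and immediate repeats can't happen:

```lua
-- A fixed pool of outcomes, dealt out in random order rather than
-- rolled independently, so no value can repeat within one pass.
local outcomes = { 1, 2, 3, 4, 5 }

-- Fisher-Yates shuffle: walk the table backwards and swap each slot
-- with a randomly chosen slot at or before it.
local function shuffle(t)
  for i = #t, 2, -1 do
    local j = math.random(i)
    t[i], t[j] = t[j], t[i]
  end
  return t
end

shuffle(outcomes)

-- Hand the shuffled values to the mission via user flags 1..5
-- (trigger.action.setUserFlag is the stock DCS scripting call for flags).
for flag, value in ipairs(outcomes) do
  trigger.action.setUserFlag(flag, value)
end
```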
  2. I have this sneaking suspicion — and absolutely nothing to support it, just to be clear — that this is the heart of the matter: the randomisation happens once, some time during the start, and then you're just given a fixed stream of pseudo-random numbers that depends on that first seed. That it's not calling some kind of system randomisation that pulls from a shared random pool at the system level, but rather that your mission pulls its string of numbers from its own separate stream, so what matters most for getting different outcomes is how far into the mission you've gone and how many other randomisations have happened before that. So if that initial random seed doesn't rely on a lot of entropy, you get very similar results early on. But that's the annoying part about randomisation: good luck proving it! After all, [1 1 1 1 1 1 1 1 1 1] is a perfectly reasonable string of random numbers.
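To illustrate that suspicion in the abstract (this is just how any seeded PRNG behaves, not a claim about what DCS actually does): the same seed always produces the same stream, so what you get depends only on the seed and on how far into the stream you've already drawn. A minimal Lua sketch:

```lua
-- Same seed, same stream: two runs seeded identically produce identical
-- "random" rolls, so everything hinges on the initial seed and on how
-- many numbers have already been drawn before you ask.
local function rollsWithSeed(seed, count)
  math.randomseed(seed)
  local rolls = {}
  for i = 1, count do
    rolls[i] = math.random(1, 5)
  end
  return table.concat(rolls, " ")
end

print(rollsWithSeed(1234, 5))  -- some sequence of five rolls
print(rollsWithSeed(1234, 5))  -- the exact same five rolls again
print(rollsWithSeed(5678, 5))  -- a different seed gives a different stream
```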
  3. I don't remember seeing it changed at any point, but the conventional wisdom and interpretation of the stats is that it's basically a die roll for each countermeasure activation, with different missiles having different sensitivity factors applied to that die roll. Hence the issues a while back with “pulsing” ECM, for instance. For the most part, that whole “different sensitivity” is by such small margins that it requires a lot of controlled shots to notice any difference, whereas what really makes a difference — as you hint at — is just volume.

     It doesn't matter much if a missile is twice as good at ignoring countermeasures if it faces 20 die rolls compared to a missile that is half as good but only faces one or two countermeasure launches. Even with the former being more resistant, it's facing an almost 100% chance of failure simply because of how many opportunities it has to fail, whereas the latter may only yield a 50% failure rate after all is said and done (a quick worked example below). An AI that dumps everything it has will thus be impervious to even the most advanced missile, whereas a player who runs some lazy-pace release program or who manually taps away one or two at a time can still quite easily be hit.
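A quick worked example of that volume effect; the per-decoy probabilities here are made-up placeholders, not ED's actual numbers. With a per-decoy success chance p and n decoys, the chance that at least one of them takes the missile is 1 - (1 - p)^n:

```lua
-- Chance that at least one of n decoys defeats the missile, given a
-- per-decoy success probability p: P(decoyed) = 1 - (1 - p)^n
local function decoyChance(p, n)
  return 1 - (1 - p) ^ n
end

-- Made-up numbers purely for illustration: a "resistant" missile facing
-- a 20-decoy dump versus a "susceptible" missile facing only two decoys.
print(string.format("resistant missile, 20 decoys:  %.0f%%", 100 * decoyChance(0.15, 20)))  -- ~96%
print(string.format("susceptible missile, 2 decoys: %.0f%%", 100 * decoyChance(0.30, 2)))   -- ~51%
```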
  4. …so they still see the model when they eject.
  5. No, they released it for testing. They've since asked for feedback. That's why we have this thread. They've also offered up a means to go back to the old and infamously broken system as a way to compare how old and new yield different results (and because some preferred the broken one for some reason or another). So this is a pretty rich and hypocritical statement from someone who has kept bemoaning how the beta test client was being used…

     Yes there is. You're just asking it to solve a part of the spotting problem that it is not meant to cover. That doesn't make it a bad solution for the part that it does handle. That's like complaining that there is no good solution for how the gear-up bind doesn't open your canopy. It's not meant to. The ways in which a dot is the only solution for parts of the spotting equation have been described extensively: to cover the just-within-WVR range bracket where you can't use a 3D model because it would show too much and would be inequitable between resolutions. Just because it doesn't address the middle-WVR identification issue doesn't mean it doesn't work for the range bracket where its functionality is needed.

     …and yet, here you are, asking for it to be massively increased to wholly unrealistic levels by effectively arguing for unlimited spotting range rather than the cap that the current solution sets on things. Or at the very least you're asking for a return to the old system where that cap was vastly higher than it is now. Why is that? And before that, you were adamantly in favour of the old system because it was also somehow realistic, and you went on the same tirade against “gamers” who wanted to see spotting improved, when in actuality you were arguing in favour of the most gamey and least realistic implementation on the market. You also strenuously argued against the other solutions to the spotting issues that would solve the WVR parts of the problem and make them more in line with real-world experiments and data, on the grounds that this would somehow also be unrealistic.

     You have no idea what ED wants. Other than that they have explicitly stated for years now that they're looking for an improved spotting solution. And now we have one. Also, you once again need to realise that if there even was something to “fix” with a mission setting, it would be the exact opposite of what you're asking for: a flag to force the new dots on, since that means everyone sees the same thing, as opposed to the old system where player settings massively changed what was visible to whom, causing the very problem you're saying you want to avoid. But ultimately, there is nothing to fix — eventually the new system will be tweaked, the old system will just go away, and the supposed problem will no longer exist.
  6. Quite the opposite. By not letting it be turned off, it has been left in a workable, non-exploitable state. If it could be turned off, we'd be right back to the old situation where you'd be able to spot planes at essentially maximum simulation distance — an order of magnitude farther out than would be reasonable — but wholly dependent on your graphics settings. So whoever set their graphics options “right” would have a ridiculous and thoroughly exploitable advantage over anyone who didn't know better (or who simply couldn't). It would be worse than ever, and the old dots were already much worse than the new ones in that regard. By no intelligent, sane, or rational measure could that ever be conceived of as an “improvement”. Sorry, you won't get your 50nm “works as intended” targets back, no matter how much you preferred them to the vastly more realistic outcome we get now. It's time you give it up and stop trying to make people not give the feedback ED is asking for. Fortunately, exactly nothing of this is true. It's quite impressive really.
  7. Eh, no. Well, yes, everything is based on pixels on a pixel-based display system and no-one is really rocking the vector displays or line plotters of old. But no, the dots are based on resolution, where higher resolutions get larger dots in terms of pixel count. Cripes. The end goal is to make the apparent size as resolution-independent and equal as possible.
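A toy illustration of that principle (not ED's actual formula, just the general idea): if the dot's pixel count grows with vertical resolution, its apparent size stays roughly constant across displays instead of shrinking on high-resolution screens.

```lua
-- Scale a 1-pixel-at-1080p dot with vertical resolution so it covers
-- roughly the same fraction of the screen on every display.
local function dotSizePixels(screenHeight)
  local basePixels = 1          -- nominal dot size at the reference resolution
  local referenceHeight = 1080  -- reference vertical resolution
  return math.max(1, math.floor(basePixels * screenHeight / referenceHeight + 0.5))
end

print(dotSizePixels(1080))  -- 1 pixel
print(dotSizePixels(2160))  -- 2 pixels: more pixels, same apparent size
print(dotSizePixels(4320))  -- 4 pixels
```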
  8. Note the trick to SUNTSAG's mission: only the first embark is done via the Embark waypoint task, because that's the only way that task works. Similarly with the first disembark. Everything else has to be pushed onto the units, and thus relies on having timing triggers and making sure everyone involved is in the right place at the right time so that it can actually be resolved, or the units will just freeze up and not do anything. Hence the need for delaying actions and stop conditions and timing the waypoints out, so that you know for certain that the task to be stopped happens in the right order in the queue. If and when you introduce players into the mix, that will of course create its own problems, since they don't really respond to AI commands and conditions. And heaven help you if you mix up "TASK PUSH" and "TASK SET" (the difference is sketched below). The scripting solution basically works the same, directly setting commands for the AI units, only it does so in a much more readable way: you can create a single list of conditions for what should happen when, and to which units, rather than having to bounce between groups and N different trigger setups.
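For reference, a minimal scripting sketch of that push/set distinction. The group name and the task table are placeholders; the Controller calls themselves (setTask, pushTask) and timer.scheduleFunction are stock DCS scripting API:

```lua
-- Grab the controller of some AI group (the name is a placeholder).
local group = Group.getByName("Transport Helo 1")
local controller = group:getController()

-- Placeholder for a normal DCS task table (e.g. a Mission or Embarking
-- task); the exact contents are omitted here.
local someTask = { id = "SomeTask", params = {} }

-- setTask wipes the current task queue and replaces it outright...
controller:setTask(someTask)

-- ...whereas pushTask puts the new task on top of the queue, and the
-- previous tasking resumes once the pushed task completes or is stopped.
controller:pushTask(someTask)

-- Timing can then live in one place, e.g. push the task two minutes
-- after mission start instead of juggling trigger conditions:
timer.scheduleFunction(function()
  controller:pushTask(someTask)
end, nil, timer.getTime() + 120)
```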
  9. Short answer: you're asking too much. The built-in tasks work OK… ish… for two specific scenarios:

     • You want some background/decorative activity on an airbase while the player gets started up in their aircraft.
     • You want to run a very brief helo transport mission where troops run up to you and jump in, and then you dump them somewhere — end of mission.

     The problem is that both the helo's load and the ground units' embark tasks only work on the first waypoint. As in, it's the first thing they do as soon as they go active. They can unload/disembark at later stages (although, for the ground units, that will always be waypoint 2, since they obviously can't do anything while loaded in the helo). But that's it. Like many special waypoint actions, they were made once upon a time to do a single thing, and have never been revisited as the game has grown and added more units, more helos, or more situations that you can create with other functions added to the game (see, for example, everything involving AI JTAC).

     What you want can be done, but you'll need to go the scripting route and find a framework or package that is tailored to your needs, with the scripting telling the units what to do and when, rather than the built-in waypoint actions (a rough sketch of what that boils down to is below). I've long since lost track of what works best, since others have been doing the heavy lifting on the helo mission creation side, but back in my day, CTLD was the go-to option for most of that stuff. No guarantees as to whether it still works or is sensible any more.
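For flavour, a very rough sketch of what the scripted route (and frameworks like CTLD, under the hood) boils down to: poll the helicopter, and once it has landed close enough to the troops, despawn the ground group and mark them as "aboard" with a flag the rest of the mission can react to. All names and distances are placeholders, and a real framework does far more bookkeeping than this:

```lua
-- Crude scripted "embark": check every few seconds whether the helo is
-- on the ground within pickup range of the troops; if so, remove the
-- troops and flip a flag for later triggers to use.
local PICKUP_RANGE = 50  -- metres; placeholder value

local function checkPickup()
  local helo = Unit.getByName("Player Helo")        -- placeholder unit name
  local troops = Group.getByName("Infantry Squad")  -- placeholder group name
  if helo and troops and troops:getSize() > 0 and not helo:inAir() then
    local hp = helo:getPoint()
    local tp = troops:getUnit(1):getPoint()
    local dx, dz = hp.x - tp.x, hp.z - tp.z
    if math.sqrt(dx * dx + dz * dz) < PICKUP_RANGE then
      troops:destroy()                              -- "load" the troops
      trigger.action.setUserFlag("TroopsAboard", 1)
      return nil                                    -- stop the polling loop
    end
  end
  return timer.getTime() + 5                        -- check again in 5 seconds
end

timer.scheduleFunction(checkPickup, nil, timer.getTime() + 5)
```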
  10. That's also true for Viggen landings on land. It's kind of part of how it achieves its STOL characteristics. Except land is most likely a much harder surface, hit at a higher (vertical) speed than you'd get from a carrier landing. And land isn't moving away from you to let you touch down at a lower relative speed. There is no reason why a Viggen's landing gear should break on a carrier landing, much less on a carrier that is doing-nothing-at-all-and-just-sitting-there.
  11. …and if you were to slam a Viggen into the deck of a carrier at a 5m/s sink rate, it would also not break.
  12. It's wholly irrelevant whether it was intended to or not. There is no reason for the landing gear to crack just because it comes into contact with a carrier deck. If it does, it means something in the carrier code or the Viggen code is broken and needs to be fixed. Any argument along the lines of “don't” or “it shouldn't be there” is just a lazy excuse for not fixing bugs. Doubly so with a plane with such a ridiculously sturdy landing gear…
  13. Right, and in combination with the somewhat opaque and occasionally broken ways to adjust your cockpit camera position, you have a good recipe for a never-ending stream of “this cockpit feels wrong”, even in cases where it's really very accurate but you're just viewing it from the wrong position or angle or (in some cases, with suitably wrong settings) the wrong FoV. So you adjust things to make it feel right again, without actually fixing the underlying problem, and suddenly the whole world around the plane is wrong instead.
  14. Nah, that's just Sharpe in general. When reality goes against his assumptions (and reality always goes against his assumptions), he always either goes for the ad hominem, or insists that his assumptions about reality are true and everything else — especially real-world data and science — is unrealistic. Just keep slapping him around with irrefutable facts and show the contradictions in his increasingly convoluted logic, and you'll end up on his block list because he has no other way of combatting the unrelenting nature of reality.

      It's not fixed, as such — you can adjust the in-game distance. But it's also not adaptive in the sense that it could conceivably read the settings off of the headset and adjust itself accordingly. And you might not even find the correct setting, so it might as well have been fixed. If anything, there is an ongoing debate as to whether it should be more fixed, so that a single setting will apply accurately to all aircraft, since there is a perception that the scaling is different from one module to the next. But as pointed out, that scaling isn't something the game does… at least not in DCS — it's how the brain interprets any deviation between in-game camera distance and real-world IPD (on top of a couple of other “familiarity cues”). It's also something the brain flat out assumes is equal in all directions (and it's actually right, since no scaling happens: everything does indeed stay at the same relative positions in all three dimensions simply because nothing changes). So a 10% error along that single axis between the eyes translates into heights and distances feeling 10% off as well. When you adjust “world scale” in many VR games, what you're actually doing is adjusting that in-game distance, and possibly the camera height as well, to match what your brain expects (a small worked example below).
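A tiny worked example of that mismatch; the numbers are arbitrary, and this is just the geometry the brain is doing, not anything DCS-specific. Stereo depth comes from disparity, which scales with eye separation over distance, so if the in-game camera separation differs from your real IPD, all perceived distances scale by the ratio between the two:

```lua
-- If the in-game camera separation differs from your real IPD, the brain
-- still assumes your real IPD, so perceived distances scale by the ratio.
local function perceivedScale(realIPD, cameraSeparation)
  return realIPD / cameraSeparation
end

-- Real IPD of 63 mm, in-game separation set 10% too wide (69.3 mm):
local scale = perceivedScale(63, 69.3)
print(string.format("perceived scale: %.2f", scale))                   -- ~0.91, everything feels ~10% too small
print(string.format("a 300 m runway feels like %.0f m", 300 * scale))  -- ~273 m
```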