Everything posted by Tippis
-
Nope. Well, maybe it “feels” like it, but it isn't one and doesn't behave like one. Other than one being dots by necessity, and the other having the option to also be dot-like, they are in every single way different from each other. One is a UI element and behaves like a UI element — user-controllable, sits on top of and apart from the world, is rendered in its own layer. The other is a simulation element and behaves like a simulation element — exists in the world, reacts to other in-world features, and is rendered as a world object (admittedly with backwards z-culling at the moment, but that just reinforces the point). No amount of feelings changes this simple fact. Of course it can be removed. But doing so is a bug. That should give you a hint as to why they don't want to make it an option. The whole point of the exercise is to get away from the state of affairs where something as critical and universal as spotting can be turned on its head and abused for all kinds of unintended ends.
-
Ok. Let's go over the basics then. Let's see where your expectations clash with how DCS does things.

The reason we have the dots today is that without them, things are actually much, much worse. It's been a long process, but ED have finally realised that their old spotting mechanics were worst-in-class and very, very dumb. They are now trying to remedy that fact. DCS' rendering of 3D models is very simplistic. As long as something exists to render and might show up on the screen, it is rendered. This means there is no sensible upper bound for how far you can see stuff like airplanes, only a lower bound on how small they can be drawn before there is no point in drawing them at all. Only then does DCS cull them from the rendering. This has a couple of consequences:

- At a high enough resolution, planes are rendered out to maximum simulation distance. If you know where and how to look, you can see planes at 80nm. Eighty. Eight, zero. That is at the very least 10× farther than you should. Some would argue 20×. A more normal scenario still lets you spot them at 50nm if there is something to guide your eye — still an order of magnitude farther than you should.
- At a low enough resolution, planes are rendered very big at longer distances. You won't be able to see them as far out as in the high-res scenario (maybe “only” 30nm if you really push it and have everything in your favour), but you will see them very clearly because of how big your pixels are. The plane can't be rendered any smaller, and it hasn't crossed the “don't render” threshold.
- At the very edge of rendering, the target may dip in and out of max range and flicker back and forth between visibility states. This makes it still faint, but very obvious, at the distance where it should be the least obvious imaginable. This may be hidden by the use of anti-aliasing, but that's yet another client graphics setting that you don't know and can't control.
- You can use variable FoV to adjust those limits on the fly and cause target-revealing artefacting to occur on command.
- There is no telling what the other guy sees and how you appear to them. If you spot someone at 20nm, they may have had you clearly on their screen since you were 40nm away from them, or alternatively, they might not be able to see you for another 10nm. Not because of how good/clever/eagle-eyed either of you is, but because that's what the game decides to show you. You have no way of overcoming this and no way of controlling this inequity.

Enter spotting dots.

First of all, spotting dots are not labels. Anyone who says they are is objectively wrong and ignorant, and you can safely ignore everything they say because it will be equally clueless. Labels are UI elements. They may be shaped as dots or they may be a screen-filling detailed info dump on the target, but they are still just that: UI elements that put information on the screen. They're also fully user-configurable rather than a fixed thing that everyone sees, so they don't even look the same to begin with, no matter how much some people desperately try to confuse the two. Spotting dots, on the other hand, are part of the simulation — specifically of the cognitive system of perception.

Spotting dots let us address the above issues with raw geometry rendering in a number of ways:

- You can set a controllable hard cap on how far out they are rendered, and set a controllable curve on how they fade into complete transparency. This is tied to an in-game parameter — distance — rather than out-of-game circumstances such as pixel count. If you want to be fancy, you can add in additional parameters like size and aspect. As such, the same rules will apply to everyone.
- You can fully replace the 3D model beyond a set and controllable distance, so that no matter what the client is set to, there is no geometric artefact left to be rendered and to cause all the above issues. This will also apply to everyone.
- You can control how and when the dots transition into full 3D models, and give that a good safety margin and fade zone so there is no flickering in and out of visibility. Again, same for everyone.
- Since it is always the same for everyone, you know that if you can see them, they can see you, and if you can't, then neither can they. Well… subject to differences in size and aspect, if those parameters are included in the process.
- You can set a fixed size for what is supposed to be the smallest thing the pilot can see. How this translates into pixel count will still be a function of display type and resolution, but that is at least controllable, rather than having the client decide if they want to see targets as tiny dots out to low-earth orbit, or as huge dots out to beyond-BFR-missile range.
- Since we suddenly have all that controllability, we can adjust the visibility of far-off targets to match real-world data on what the eye can actually see. There is no concession for FoV — the size is the size, no matter how much you try to zoom in or out, and you won't see targets farther out just because you are roleplaying as a secret bird-eye-transplant cyborg in this flight simulator. Your cognitive process is weird, and rather than letting pure geometry and out-of-game circumstances dictate what you can and can't see, we can try to simulate how small things are simply lost in the noise, and how some noise can be processed into more information than the eye alone can pick up.

Dots are more controllable and more realistic than pure geometry for the purpose of rendering targets at the very far edge of visibility. They let us quickly and easily simulate things that 3D models can't (unless we introduce distance scaling, which is politically verboten in this game for no sane or sensible reason).

As a realism setting, they could possibly be considered comparable to g-onset effects — a bodily system and limitation that works non-linearly and weirdly, and we have a setting to just turn that noise off if we don't want to bother with it. But the difference that makes this comparison flawed is that g-effects don't change depending on how the client is set up. If they're on, they work in one specific way and you get the same outcome for everyone who has them on. If they're off, they also work in one specific way, and that is also the same for everyone who has turned the effects off. The raw geometry rendering you'd get if you turn dots off does not work like that. It's almost defined by how variable and different the results are for everyone.

Note, this is what dots provide, not where they currently are in DCS. We still have the problem that they are rendered at far too great distances, and that the dot size isn't fully fixed and equitable across displays. The transition from dot to model also needs a second (third, fourth…) pass to make sense. In addition, as we have discovered, there are some very curious bugs that cause them to only be selectively rendered, which rather defeats the point of having everyone see the same thing.
If you, as a single person, get different results, then it's clearly not ready for prime time as far as making everyone else get the same results. Again, ED are trying to remedy this positively antique flaw in the game, and it is already better than the old system, but we are… ehm… not fully there yet, let's say. Ultimately, spotting dots will just be part of how the game simulates the pilot (hopefully well), and there's no reason to maintain obsolete and inherently broken code as an option.

For the above reasons, it would have to be the other way around. For PvP, if the server owner wants an equitable scenario, they need to be able to force dots on. Otherwise, the client gets to choose how visible targets are to them, which is not going to fly in a PvP setting. If you can't force it on, you can't maintain all of those “same for everyone” points. The same goes for PvE servers, but from a different angle. If the PvE server owner wants a realistic (or just controllable) scenario, they need to be able to force dots on. Otherwise, they are removing a part of the simulation and getting wildly nonsensical and unrealistic outcomes on the client end. If you can't force it on, you can't maintain all those hard caps and tweaks to match real-world performance, and you have no idea what the player will see in your otherwise carefully crafted mission setup.

If you want to make dots optional, and offer the ability to let players set up their own visibility rules, then sure, let's do that. But realise that that's what you're doing, and what the consequences of allowing that freedom are. If you want controllability, the necessary “force” option needs to be in the direction of forcing them on, because you can't leave that control to the player. In a small way, it's like letting the client decide g-effects, but with infinitely more variable outcomes depending on how the client is set up.
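To make the “controllable hard cap plus fade curve” point above concrete, here's a minimal sketch (Python, purely illustrative — the distances and the smoothstep shape are my assumptions, not how ED actually implement it) of a dot opacity that depends only on in-game distance, so every client gets the same answer regardless of resolution, zoom, or anti-aliasing:

```python
# Hypothetical sketch, not DCS code: a distance-driven dot visibility curve.
# fade_start_m and hard_cap_m are made-up, server-controlled parameters.

def dot_alpha(distance_m: float, fade_start_m: float = 15_000.0, hard_cap_m: float = 22_000.0) -> float:
    """Return dot opacity in [0, 1] as a pure function of target distance.

    Inside fade_start_m the dot is fully opaque; between fade_start_m and
    hard_cap_m it fades smoothly; beyond hard_cap_m nothing is drawn at all.
    """
    if distance_m >= hard_cap_m:
        return 0.0
    if distance_m <= fade_start_m:
        return 1.0
    t = (hard_cap_m - distance_m) / (hard_cap_m - fade_start_m)
    return t * t * (3 - 2 * t)  # smoothstep: no abrupt pop at either end

if __name__ == "__main__":
    for d in (5_000, 15_000, 18_000, 21_000, 25_000):
        print(f"{d:>6} m -> alpha {dot_alpha(d):.2f}")
```

The point isn't the exact numbers; it's that the inputs are all in-game quantities, which is what makes the “same for everyone” guarantee possible in the first place.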
-
And again, since they are part of the simulation and not a UI element, they can't be treated the same as labels. That's ultimately why it shouldn't be an option even for pancake users. Now, granted, there are some sim options that can be toggled, like wake turbulence and g-effects, but the difference there is that when you force them one way or the other, the effect is universal and not dependent on the client's settings — after all, that's the whole point of forcing them: so the client has no say. The spotting dots are so dependent on the graphics and display settings, and display type on top of that, that no such universality can be assured. Quite the opposite. The reason we are getting these new dots is because of that very issue, where there is no telling how it will turn out if you just let the geometry do its thing. The spotting dots are exhibiting similar issues, but unlike the raw geometry option, they're actually fixable. And… that also explains why: because it's just an interim thing that should ultimately go away for everyone. VR is in a sense just ahead of the curve.
-
Because the consequences aren't very desirable. Will you accept the pop-in? The scintillation as a target hovers at the edge of visibility? The complete uncertainty as to how visible you are to others relative to how well you can see them?
-
The problem is that this would have the exact opposite effect: the reason we're getting new dots is that the old ones are notorious for how completely uneven and random they are, ensuring that there is no chance of ever getting equitable results that are suitable for PvP. Unlike labels (which are a UI layer that the player can pretty much arbitrarily redesign unless you jump through significant hoops to stop it), the whole point of spotting dots is to try to make as equitable a solution as possible. It might not be fully there yet for a number of reasons, but ultimately, there shouldn't be any setting at all. Since the two are nothing alike, you can't treat them the same. It would be like disabling explosions on the logic that you can disable the BDA window — if you abstract away absolutely everything, you may come to the very nonsensical conclusion that they do the same thing. They don't, obviously, and that's why one can be a setting and the other one shouldn't be.
-
Across its entire field of view, maybe. If we completely misconstrue what a “pixel” is. What we're actually talking about here is a roughly 170×170° FoV that can resolve 1 MoA details, for a total of 10,200 minutes along each axis, or 104,040,000 MoA² on the face of it (but in practice less, since the angular resolution isn't nearly as good in the peripheries). That is also subject to contrast differences that may further ruin or enhance what we can notice, if not necessarily distinguish. But that's not what we're talking about. We're talking about what we can see through the frustum of the display we're using and whether that can adequately present details that are as small as, or smaller than, what the eye can make out. And it can. Because, again, it's a matter of distance and dot pitch, and whether those two combine to create a detail that is smaller than that 0.2909 mil angle. So no, it's not categorically false. It's just basic maths. But you're right in that zoom exists to compensate for the lacking pixel count of the display — it's just that it works in the opposite direction. What zoom does is let you zoom out to see a wider FoV than what's natural for the display. It's not strictly necessary for showing details, and zooming in could potentially be removed entirely.
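A quick back-of-the-envelope check of that “distance and dot pitch” claim (a sketch only — the 27″ 4K panel and 70 cm viewing distance are example numbers, swap in your own):

```python
# Illustrative arithmetic only; monitor size, resolution, and viewing distance
# are assumptions for the example, not a claim about anyone's actual setup.
import math

def pixel_subtense_moa(diagonal_in: float, h_px: int, v_px: int, view_dist_m: float) -> float:
    """Angle subtended by one pixel, in minutes of arc."""
    diag_m = diagonal_in * 0.0254
    pitch_m = diag_m / math.hypot(h_px, v_px)      # physical dot pitch
    return math.degrees(math.atan2(pitch_m, view_dist_m)) * 60

# Example: 27" 4K monitor viewed from 70 cm.
moa = pixel_subtense_moa(27, 3840, 2160, 0.70)
print(f"one pixel subtends ~{moa:.2f} MoA")        # comes out under 1 MoA,
                                                   # i.e. the display out-resolves the eye
```

In other words, at a perfectly ordinary desk setup the individual pixel is already smaller than the 1 MoA detail the eye can resolve, which is the whole point being argued.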
-
Not categorically stated like that, no. Also, what kind of weird setup is that? A 4K display is fully capable of rendering smaller details than the eye can perceive. Technically, so can a 1080p display — it's just going to be a bit annoying to play on. It's not a function of pixel count but of distance and dot pitch. It's a matter of ergonomics and workspace, not technical capability. Funny thing is, the ones who need to have it explained to them are the ones who can't even judge how wide their field of view is, or count the number of pixels in a 4K display. And most ironically, they're the ones who refuse to read the explanations provided to them of why their assumptions about resolution are wrong. And then they blame the players rather than their own misapprehension of basic maths.
-
So they seem to be preferentially drawn, back to front? The farther the target, the higher the priority to dot it? That would explain… I hesitate to say “a lot”, but definitely many things regardless. I wonder if this may be part of how and why people report seeing dots at much longer ranges than simple tests show: the simple tests don't have enough units to trigger any culling or preferential treatment, whereas “live” tests will have more units to choose from and end up picking the ones farthest away and making those much more obvious than the closer ones. It would certainly explain some of the disappearing-dot problems.
-
Yes, at this point, I'm very uncertain and/or confused by the connection between ground and air dots. I see the issue appearing in your OP tracks and missions; I see something completely different in the test setup I created, where ground units make no difference but air units make it trivial to trigger. Could it perhaps be type-specific? That some planes suffer more from it than others? Is it tied to target size — I used A-10As in my test, which are pretty chunky, and maybe that affects how (and when) the dot is rendered? Is it the same for ground units, where some types make it trigger more widely than others? Is it a “heavy processing” vs. “simple processing” distinction? Again, the ones I used are FC aircraft with their simple flight mode — simplified further by their being AI aircraft — plus the simplest trucks available, with literally no AI actions to speak of and very little in the way of graphical flourishes. And all of them were set to do absolutely nothing but exist in the world, oblivious to and ignorant of everything around them. Is the dot rendering part of the general unit processing, or does it happen solely in the renderer? So many questions…
-
Good news. It's worse. Four different tests:

Test 1: 200 air + 1200 ground units (.miz). Some of the front-line air units are missing their dots but exhibit scintillation, which could at this stage be conceived as being a result of their having shifted to full 3D models without any dots.

Test 2: 200 air units (.miz). Same as above — compared with test 1, the presence of those extra 1200 ground units doesn't seem to make any difference in how many planes are missing their dots. The scintillation is still there, and it's difficult to tell whether it's a numbers issue, a range issue, a transition issue, or a “missing dots” issue.

Test 3: 2×100 air units + 6×200 ground units, spawned dynamically (.miz). All units get their dots as they are spawned. The presence of all those ground units does not make the air unit dots go away (nor vice versa, as far as I can tell). The extraordinary thing is what happens when we spawn in the second group of 100 airplanes — suddenly the first 100 spawned lose their dots and go into scintillation mode, same as was seen in tests 1 and 2. Potentially, at this point, the ground units are actually too close and don't get dots, so it's the air units alone that cause the issue.

Test 4: 2×100 air units + 6×200 ground units, spawned dynamically; air groups are split into left and right (.miz). The issue goes away for all units. The ground groups are farther away and are definitely at dot distance (same as the planes). The planes on the right show their dots and don't lose them when the group on the left is spawned in. There are no disappearing dots as the view is zoomed in and out, but a smooth(ish… as close as it gets) transition from dot to model.

So… It can't be the total amount of dots, because then test 2 (only 200 units) should not exhibit the issue. It can't be the ground units (alone), because then tests 1 and 2 should not show the same dot issue, and/or tests 1 and 4 should behave differently. It might be the total amount on screen (compare tests 1 and 4), but why is the first group of planes affected and not the second? Also, it happens with 200 units on screen, but not with 1400, which rather suggests that it's not a unit total. It might be a unit-order thing, in that the first airplane group is the one that consistently loses its dots when that happens, and those are placed first in the mission and defined first in the file structure.

What is the scintillation? These were recorded without any anti-aliasing — is this the 3D model popping in and out of render distance at the given zoom level, which is only visible when the dot is not hiding it?
-
You should probably actually read the post in question to learn why they don't. And why that's a problem. Not really, no — that's a legacy from many, many years ago that lives on because we're used to having something so inherently unrealistic be part of our simulation. If it were implemented today, it would be to make up for the fact that your viewport is limited and variable (in 2D). In VR, resolution is to some extent a limiting factor, but that's rapidly going away. In time, the only need for zoom will be to zoom out.
-
That seems very likely, unfortunately. That, on top of confusion with labels, the inherent differences in resolution and physical setup, and of course the whole VR-vs-pancake distinction, means we're looking at so many parameters determining what you see (or, more importantly, don't see) that it requires some pretty darn rigorous testing to show anything of value. Just running some random mission that isn't specifically designed for it won't necessarily show what you expect it to. This is an important next step: to try to set up different views and scenarios.

I also wonder if it's a matter of the game having a “dot buffer” that can get filled up if you ask it to deal with too many units, and the reason we're seeing what you're illustrating is that ground units are processed first and thus fill this hypothesised buffer before it gets to the aircraft (there's a sketch of that idea below). So a few cases to test:

1. Does this happen when there are lots of ground units in the mission, but they're behind the camera?
2. Does this happen when there are only a few ground units, but they're all in (theoretical) view of the camera?
3. Does this happen when there are no ground units, just an equal number of air units?
4. If so, the same question again: does it only happen when those air units are in front of the camera, or can they be anywhere in the world and still make the game “run out” of dots?
5. Can it be triggered on the fly by adding/removing units?
6. Can it be triggered on the fly by changing how many units are in view?
7. While they don't have dots, can it be triggered by just adding a crapton of static units?

Case 3 is interesting for the whole notion of there being a maximum number of dots in total, and the game running out of them. If that's what's happening, it should be possible to make it happen with just air units — the only difference is that ground groups tend to be (much) larger than air groups, so it's more natural that those fill out the allotment sooner. Cases 5–7 are mostly a sanity check along the logic of: is there some kind of draw limit we're nudging up against, and if so, can that limit be reached without any active units at all? Is it a case of the rendering iterating through a unit list and putting dots (or not) on top of them, but just quitting after a while — and if so, what is it that affects the construction of that list? Is it units in view? Stuff in general in view? Or just the full list of what's placed in the ME? A particularly worrying scenario would be if it's a limit that depends not just on how many units it needs to highlight, but also on whether stuff like statics or even ground decorations count. It would be an absolute mess if it varied with terrain, for instance, so that high-detail maps (or map areas) would be more susceptible to triggering this error than others.
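Going back to the “dot buffer” hypothesis: purely as an illustration of the suspected failure mode (this is guesswork about engine internals, not actual DCS code — the cap and the ground-before-air ordering are both assumptions), it would look something like this:

```python
# Hypothetical illustration of the suspected failure mode, not DCS code.
# MAX_DOTS and the ground-before-air ordering are assumptions for the sketch.
MAX_DOTS = 100

def assign_dots(units):
    """Walk the unit list in order and hand out dots until the buffer is full."""
    dots = []
    for unit in units:                   # ground groups tend to come first…
        if len(dots) >= MAX_DOTS:        # …so they exhaust the budget
            break                        # and later air units get no dot at all
        if unit["needs_dot"]:
            dots.append(unit["name"])
    return dots

ground = [{"name": f"truck-{i}", "needs_dot": True} for i in range(120)]
air = [{"name": f"a10-{i}", "needs_dot": True} for i in range(20)]
print(len(assign_dots(ground + air)), "dots assigned; every aircraft missed out")
```

If something even vaguely like this is going on, then the test cases above (especially 3 and 5–7) should be able to expose it by varying what fills the list and in what order.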
-
Quoting this just to highlight it. A pretty darn important complication if it happens universally — massively important if it only happens selectively.
-
That's probably pretty hard to get around in quadview if the dot size is a direct function of the (local) rendering resolution. The dot in particular would have to be drawn at different sizes depending on where on the screen it sits, and its being a blurred smudge is almost a feature rather than a fault, so it would rather come down to how QV is implemented and what can be adjusted on the fly as objects move in and out of the focus area.
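As a rough sketch of what “adjusted on the fly” might mean here (assumed numbers throughout — the per-region pixel densities and the target angular size are invented for the example, not taken from any actual quadview implementation):

```python
# Hypothetical sketch: keep a dot at a constant *angular* size when the
# foveated (quadview) centre region is rendered at a higher pixel density
# than the periphery. The densities and target size below are assumptions.

TARGET_MIL = 0.6                  # desired apparent dot size, in milliradians

REGION_PX_PER_MIL = {
    "focus": 3.4,                 # centre region: more pixels per milliradian
    "periphery": 1.2,             # outer region: fewer pixels per milliradian
}

def dot_size_px(region: str) -> float:
    """Pixel size needed in a given region to keep the same angular size."""
    return TARGET_MIL * REGION_PX_PER_MIL[region]

for region in REGION_PX_PER_MIL:
    print(f"{region:>10}: draw the dot ~{dot_size_px(region):.1f} px across")
```

The awkward bit is exactly the hand-over: a dot sized for the periphery that drifts into the focus region (or vice versa) needs to be re-sized as it crosses the boundary, which is why how QV is implemented matters so much.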
-
It's relevant enough. The problem you're having is that your maths is horribly incorrect, as shown previously. The FoV is wrong, the resolution is wrong, your distance is wrong. The monitor is fully capable of providing accurate acuity — you just don't let it, because you've set it up to let you see targets that you're not supposed to see. If you actively do everything you can to circumvent what the game and hardware are capable of, you forfeit any right to suggest that it's not possible.

Wow. Every part of this is wrong. That is almost impressive. Zoom does not address the limitation he's discussing. In fact, zoom just makes that limitation worse where it exists. But more importantly, if you want to counter his argument, you just need to demonstrate that no, actually, monitors are entirely capable of providing that level of detail. In fact, the problem we're having at the moment (and the reason why you're clamouring for the removal of the measure to fix that problem) is that they can show more detail than the eye should be capable of. Zoom isn't even there to solve that problem, but to accommodate the fact that you as a player are forced into a more restricted FoV than you should have. Zoom doesn't scale anything up. It alters your field of view and creates awkward foreshortening. You'd know this if you had any clue about the topic at hand, and in particular if you had any experience with the VR side of things. Try zooming in there and tell me it's “less awkward”. Even in pancake, that foreshortening is no less awkward than if you were to employ scaling, since it will only be applied to targets that are too small for you to even notice how they look in relation to anything else. The only time zoom should make things larger is when you zoom out, counter-intuitively enough.

I wouldn't be surprised if this is a limitation in how “things” in general are rendered. As in, any active unit gets one treatment, whereas statics and decorations get another — all of those just happen to count as active in some sense and thus get the full spotting process applied to them.
-
That's a fault of implementation, not of technique. It's no different from how the old system equally incorrectly let you spot airplanes at 40nm. The only way to eliminate unrealistic far view limits is to not use pure trigonometry and perspective on the 3D model. Now, you can take your pick on how you want to cap it: do you want it to blink in and out of existence, and hope that no-one ever notices the very obvious pop-in? Do you want to employ a scaling factor to shrink it down to zero size before the model itself actually reaches that state? Do you want to replace it with a dot that can be faded in a controllable manner? Do you want to scale it down to dot size, and then use the controllability of the dot to take care of the fading? Those are your options. It's not a question of hiding the 3D model behind visual effects — it's a question of how not to render it at all. If you render it, it can be seen.

The bad thing about that model is that it's not controllable, that it is not equitable, and that it is subject to zoom. And above all, it is the wrong size. Atmospheric attenuation is needed regardless, and will not on its own create the limitations that need to be in place for a realistic solution. Having attenuation on top of a corrected size provides a more realistic outcome than applying it to something that is inherently wrong. None of the things you are asking for are left out — they're still as needed, but they need to be applied to a correct base object rather than as an attempt to compensate for incorrectness. That's just it: they don't. It hits very differently depending on the display system and resolution. It is also entirely possible to cut out the unrealistic uses — dots and scaling do that inherently. The best solution is to not convey any advantage to anyone.
-
A lot of focus is on the VR dot thread now, and this is more for the pancake solution since that's easier to calculate, but as a reference and illustration for the problem we're having, I made this:

The graph shows three lines: blue is the naive geometric solution for what kind of footprint a head-on F-16 should have (or, more accurately, its bounding box — a fair percentage of that should be empty, but that's a refinement that can be added later). The orange line is what is usually the case in DCS, as people scan around at full zoom. The green line is some pseudo-ideal solution of what we probably should be seeing to make the whole thing properly realistic. Next to the area axis are illustrations of what a dot of that same footprint would look like. Note that at about 20 pixels, it has transitioned in this graph from a dot to an actual “model”, with all the fuzziness this entails — in an actual implementation, this transition needs to happen much sooner than that, at maybe 6 or 8 px². This all assumes that we've tweaked our setup so that 1px = 0.3 mils, i.e. the smallest thing the naked eye can see. Ultimately, with the way the aspect should be a lot less visible than the full bounding box would suggest, this graph actually overstates how visible the plane would be, and obviously, you can always “cheat” by sitting too close so as to make the individual pixels more clearly visible. But in a way, this overstatement actually drives the point home further…

What the graph shows is why we can't have dotless solutions: because the 3D model reacts to zoom, and because it actually becomes more visible than the data suggests that the brain (rather than the eye) can handle. Ideally, a plane of this size should be all but invisible at 10nm. The raw 3D model solution would still render as a very visible 2px blob at this point. This means we can now zoom in, and suddenly, that 2px dot grows to a massive 12 pixels. Once seen, it can fairly trivially be tracked out to 20+ nautical miles. The unzoomed model suffers a similar problem but in the opposite direction. At ranges where a pilot should be able to track the target and determine its general aspect, the resulting rendering is still too small and dot-like to convey that information. And of course, then there's the issue of what zoom as a function is there to provide: the ability to focus on details in and outside the cockpit, but also give a full wide FoV on the outside world, and to let you do both at once without any substantial loss of information. It's a player convenience to compensate for how you can't do everything at once on the screen, even when you should be able to.

So a couple of things are needed to make spotting work properly. One is to almost constantly and dynamically counteract zoom. Zooming in should not allow you to see farther. Zooming out should not make you lose “obvious” contacts. That orange curve needs to be massively flattened on this end, but almost inverted on the bit that falls outside the top of the graph. There should be a maximum range at which a target of a given size will be rendered at all. Definitely for the orange (zoomed) curve, but as we can see, the same actually applies to the blue “1:1” curve — it, too, needs to be hard clamped much closer in than the trigonometry suggests. To achieve the green line, we need two facilities: a dot to take over before we even get to 10km, so it can be forcibly hidden at a controllable range (and be controllably faded out to that point).
A scaling function to counteract zoom as we transition from dot to 3D model, but also to provide the detail we still should be seeing — the detail zoom would normally provide, but toned down significantly from that extreme. …and then, of course, all of that needs to be scaled and faded appropriately to match other resolutions and screen distances (i.e. anything from VR to a good old 1080p monitor at arm's length). Unfortunately, only through the use of scaling can the rendering be good and realistic. Without it, the model will be both massively too large and far too small, depending on the range segment. We can't rely on trigonometry alone to make the full range of spotting sensible.
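To put rough numbers on those curves (a sketch — the bounding-box dimensions, the zoom factor, and the 1px = 0.3 mil calibration are assumptions in the same spirit as the graph, not measured DCS output):

```python
# Sketch of the graph's arithmetic; all inputs are assumptions for illustration.
NM = 1852.0                      # metres per nautical mile
MIL_PER_PX = 0.3                 # calibration: 1 px = smallest thing the eye can see
SPAN_M, HEIGHT_M = 9.96, 4.88    # rough head-on bounding box of an F-16
ZOOM = 3.0                       # assumed linear magnification at full zoom-in

def footprint_px(range_m: float, zoom: float = 1.0) -> tuple[float, float]:
    """Width/height of the bounding box in screen pixels at a given range."""
    w_mil = SPAN_M / range_m * 1000 * zoom
    h_mil = HEIGHT_M / range_m * 1000 * zoom
    return w_mil / MIL_PER_PX, h_mil / MIL_PER_PX

for nm in (5, 10, 20):
    w, h = footprint_px(nm * NM)
    wz, hz = footprint_px(nm * NM, ZOOM)
    print(f"{nm:>2} nm: ~{w:.1f}x{h:.1f} px unzoomed, ~{wz:.1f}x{hz:.1f} px zoomed in")
```

Even with conservative numbers, the zoomed footprint at 10nm is several times the unzoomed one, which is exactly the “blue vs. orange” gap the graph is illustrating.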
-
The reason you are wondering this is that you don't want to read the explanations of why it is the only way it will work. There are at least three more states that you are wilfully ignoring:

- If the 3D model is too big for how visible it should be at that range, the dot can replace it and be set to be appropriately visible.
- If the 3D model is too small for how visible it should be at that range, the dot can replace it and be set to be appropriately visible.
- If the 3D model is at such a low LoD that out-of-game parameters such as resolution start to affect how visible it is in-game, the dot can replace it and make it uniformly visible.

Basically, the flaw is that you incorrectly assume that the 3D model will always be the right size, and that it is universally rendered at the same size on different hardware. The reason we have dots is that beyond maybe 3–4nm, it isn't. Since the 3D model cannot be relied upon to show a correct and uniform size, something more controllable needs to be used. The dot is that something.
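A minimal sketch of that selection logic (the thresholds are invented; the only point is that the choice is driven by the size the target is supposed to appear, not by whatever the raw geometry happens to rasterise to on a given client):

```python
# Hypothetical sketch of the "which representation do we draw" decision.
# intended_px is what the simulation says the target *should* subtend;
# rendered_px is what the raw 3D model would rasterise to on this client.

def pick_representation(intended_px: float, rendered_px: float) -> str:
    if intended_px <= 0:
        return "nothing"                          # beyond the visibility cap
    too_big = rendered_px > 1.5 * intended_px     # model larger than it should appear
    too_small = rendered_px < 0.5 * intended_px   # model lost in sub-pixel noise
    if too_big or too_small:
        return "dot sized to intended_px"         # uniform across all clients
    return "3d model"                             # close enough to trust the geometry

print(pick_representation(intended_px=1.0, rendered_px=4.0))  # dot (model too big)
print(pick_representation(intended_px=1.0, rendered_px=0.2))  # dot (model too small)
print(pick_representation(intended_px=6.0, rendered_px=5.5))  # 3d model
```

The low-LoD case works the same way: once out-of-game factors dominate what the model looks like, you stop trusting the model and draw the controllable thing instead.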
-
Thank you. That at least makes it look like the “centre” might be the centre of the bounding box for the model rather than what one might intuitively think of as the centre of the aircraft. So the tail fin pushes the whole thing up a metre or three, and the dot is projected in the middle of empty space. At least that kind of sort-of-correct-but-not-what-you'd-expect maths would explain why it's offset. Hmm… time to experiment.
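A quick illustration of why a bounding-box centre would put the dot above the fuselage (the dimensions are rough, assumed numbers for a fighter-sized aircraft, not taken from any model file):

```python
# Rough illustration with assumed dimensions: if the dot is anchored to the
# centre of the model's bounding box rather than the fuselage centreline,
# the tail fin drags it upward into empty air.

FUSELAGE_BOTTOM = 0.0     # metres, gear-up belly as reference
FUSELAGE_TOP = 1.6        # assumed fuselage depth
FIN_TOP = 4.9             # assumed overall height including the tail fin

fuselage_centre = (FUSELAGE_BOTTOM + FUSELAGE_TOP) / 2   # roughly mid-fuselage
bbox_centre = (FUSELAGE_BOTTOM + FIN_TOP) / 2            # dragged up by the fin

print(f"dot offset above the fuselage centreline: ~{bbox_centre - fuselage_centre:.1f} m")
```

That's comfortably “a metre or three”, which matches the offset seen in the screenshots.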
-
Good news: the spotting dots are intended to get rid of exactly that kind of thing, and going back to the old state where this was standard is not exactly going to solve anything. It was even worse before. Again, there's a reason why some people want to go back to that state: so they can get their nonsensical advantages back.

Nice! Thank you. That's a pretty good illustration even if it doesn't fully match the HMD optical effect. Do I understand it correctly if the darker upper/left dot is the label, and the faded lower/right one is the spotting dot? It looks like they are doing what they should in terms of general appearance — the right colours, very different fading schemes, etc. It's just that the alignment is way off on the label. I'm beginning to wonder if maybe the alignment sits on the wrong plane in the binocular rendering, so it gets confused about what point in space it should be centred on. It's a tricky question, and I understand if it's too fine a detail to really tell, but would you say that the mirror picture is more like the right or the left eye? Just theorising, really, but still… And I suppose we could also get back to that age-old question of what even counts as the centre of the target itself. If it's trying to pick the cockpit as a centre point, that would of course also shift the dot, but it shouldn't be by that much. Could you check what your DCS World\Config\Views\Labels.lua file looks like, and in particular the block defining local function NEUTRAL_DOT? In particular, there should be a line that sets the actual dot that says res[last_x] = {"·","CenterCenter",0,opacity,0,2}. Or does yours say anything else?

Because everyone is posting images from when the target is far off in the distance. You say there's a huge problem when they're close up as well, and it would be nice to see an illustration of that, especially since neither labels nor spotting dots should even show up at that distance. But as YoYo's picture shows, they are a bit aggressive close in, so the question is exactly how close in is that still the case? As such, it's important to figure out if those are out of whack or if you're seeing some other kind of artefacting or rendering error.

Thank you. That's horrible. And very different from what I'm seeing. So yay. Yes, there really doesn't seem to be any need for a dot at all at that range. I'm guessing from the colour and shape of that dot that this is with labels off, so we're not dealing with any interference from that? And no generative rendering like DLSS?