amalahama Posted November 11, 2017 Yeah, but the low resolution of the sensor and the wide FOV make the NAVFLIR useless for small air-to-air contacts. Well, maybe if they are really close... The IRSTs in fighters have powerful optics with high magnification in order to detect IR signals far away. But that's not the case for the NAVFLIR. Regards
Bullitthead Posted November 22, 2017 (edited) The hotspot detector looks for acute temperature differentials between the spot and the background. On cold ground any hot spot will be "seen", hence the false targets. In the air it is a bit harder because air is a good heat sink, so temperature differentials tend to be less acute. That is one of the reasons why the first generation of IR missiles were rear-aspect only. I'll read the IR section of the TAC MAN more carefully so I can determine if air target spotting is feasible. I think I found a nice video that shows air target spotting is definitely feasible. At 36:00 minutes into this video you can see an aircraft on the left side of the HUD being tracked by the IR hotspot tracker. Can't really tell what the effective range is for the air contact, though. The furthest ground target (buildings?) appeared to be picked up about 2 miles out. Edited November 22, 2017 by Bullitthead
Jenrick Posted January 19, 2018 Sorry for the bit of necro-threading, but I had a question regarding the current DCS IR implementation. Currently, when I select the IR mode for a given sensor, the display is basically just a B/W filter/conversion of the normal texture, with no additional modification to the render?

The main reason I'm asking is that it wouldn't seem to be difficult to simply have an IR texture layer/map that is used specifically for the IR sensors. Instead of displaying green grass in a field, it displays as black (assuming WHOT; for BHOT you really would just do a literal color flip), etc. Vehicles could have an IR map that, while static, shows a hot engine bay, etc. Does this lose some of the nuance of real IR? Absolutely, but we're also moving along at 150 knots (at least) over the terrain, so I can handle the grass not having dynamic IR shading. This is the same concept as how the texture for the actual object is selected and rendered: the highest-res texture isn't loaded and rendered until something is within draw distance for the render engine (I'm hoping at least that DCS implements texture switching based on draw distance). There is no need to identify objects or parse a list of vehicles/buildings/etc. in the render queue; just display the IR map on everything being drawn in the sensor window instead of its normal map. Depending on exactly how ED's graphics engine works, this could be very computationally intensive, or it could literally add no overhead. As I don't work for ED, it's beyond my knowledge to say.

Now, this would require ED to basically add an IR map to everything in game, OR to simply have a default "cold" texture used for anything that's not hot. As a quick hack it'd probably look ugly, but it would only require creating IR maps for a couple hundred ground units. The IR maps wouldn't need to be detailed at all, so it'd be a quick job. Not knowing how many assets in game share the same texturing for terrain/buildings/etc., it might even be a quick job for ED, who knows.

Now, is this an accurate IR sensor simulation? Heck no! However, it would work for about 99% of what anyone needs an IR sensor to do in game. It doesn't take into account weather, time of day, crossover effects, sensor range, etc. What it would do is create a zero-processor-overhead (in the sense of not needing to calculate and locate each instance) method of portraying IR energy as it is released from the source. Cold ground and hot engines now have high contrast when viewed from the correct angle; from the wrong angle, the mass of the tank might shield the engine, making it much harder to detect. The sensor's range, contrast limits, etc. can now all be handled separately, acting on this IR layer to tweak sensor performance in game, and it is based on the contrast of the objects against each other and the background, just like how a real IR sensor works (and yes, I'm fully aware there's a lot more that goes into a real IR sensor, but that's close enough for what we need in game).

How to implement this with the NAVFLIR hot spot finder I'll have to ponder, but a lot of the question revolves around how the real NAVFLIR works algorithm-wise. If we generate an IR map for everything, then the issue becomes how to simulate the actual parsing of IR sources that the NAVFLIR does. -Jenrick
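To make the static IR-map idea above concrete, here is a minimal sketch (Python/NumPy, purely illustrative and not how DCS or ED actually render anything) of composing a single-channel WHOT scene from per-object IR textures, with a flat "cold" default for anything that has no dedicated map, and BHOT as a literal polarity flip. All names and values are invented for the example.

```python
import numpy as np

# Hypothetical example: each object's "IR map" is a single-channel 0-255
# grayscale texture.  Anything without a dedicated map falls back to a flat
# "cold" value, as suggested in the post above.
COLD_DEFAULT = 30  # assumed ambient value for unmapped objects

def ir_layer(ir_maps, object_ids, default=COLD_DEFAULT):
    """Compose a WHOT scene from per-object IR maps.

    ir_maps    -- dict: object id -> 2D uint8 array (its static IR texture)
    object_ids -- 2D array telling which object covers each pixel (0 = none)
    """
    scene = np.full(object_ids.shape, default, dtype=np.uint8)
    for oid, tex in ir_maps.items():
        mask = object_ids == oid
        # A real renderer would sample the texture with UVs; here we just
        # broadcast a mean value to keep the sketch short.
        scene[mask] = int(tex.mean())
    return scene

def to_bhot(whot_scene):
    """BHOT really is just a polarity flip of the WHOT image."""
    return 255 - whot_scene

# Toy usage: a 4x4 'world' with one hot vehicle (id 1) on cold ground.
ids = np.zeros((4, 4), dtype=int)
ids[1:3, 1:3] = 1
maps = {1: np.full((8, 8), 220, dtype=np.uint8)}  # hot engine-bay texture
whot = ir_layer(maps, ids)
print(whot)
print(to_bhot(whot))
```

The only point of the sketch is that such an IR layer could live alongside the normal diffuse textures, with polarity and sensor-specific behaviour applied afterwards.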
shagrat Posted January 20, 2018 (quoting Jenrick's post above in full)
Well, that would give us a nicer FLIR view while likely upping RAM requirements (another set of textures for everything in the mission), but this thread is about the hotspot detector feature of the HUD, not about a more realistic-looking IR image. https://forums.eagle.ru/showthread.php?p=3274852 The real challenge is to locate these "hotspots" (vehicles AND terrain features, so we get the false positives, like in real life) on the map and then translate them into a V-marker on the HUD based on the vector from the pilot's head to the hotspot.
Jenrick Posted January 20, 2018 So the simplest option, if everything in game had an IR map, would be to just emulate exactly how the hot spot detector works in real life. I'm sure there's a tech manual out there somewhere ;) Just kidding. Without knowing specifically how the hot spot detector works, let's make a couple of conjectures:

1) It probably uses some kind of filtering to help eliminate false positives. This could be something as simple as the early AIM-9 IR filters, which physically blocked certain IR wavelengths that were only found in flares, or something way more complex. Probably the latter.

2) The system is probably designed to look for something that has contrast against the background IR levels. This is basically a given, I'd say, and is probably one of the settings that can be tweaked.

3) The system may look for certain sizes and shapes of IR returns to indicate potential signatures. A single flare in the middle of a Siberian field in winter is certainly a hot spot, but is it something the pilot will care about? A refinery tower, while stupidly hot, is probably too hot to be something the pilot is looking for (a burning vehicle would be the closest, though it could be a launch signature too, I suppose). This is probably one of the settings that can be tweaked.

4) The system is primarily of use, and set up, to help locate difficult-to-see/hidden targets, rather than indicating that, yes, there is a giant plume of hot exhaust coming from a power plant.

5) Based on the posted videos, it evaluates and discards/adds hot spots continuously (not really a conjecture, unless we're all misinterpreting the videos above).

Now, if we wanted full fidelity we'd simply design the system to take all the above into account and basically reverse-engineer the hot spot detector. Problem solved! However, that's not an efficient use of design time, it's way over-engineered, and the DOD/MOD might get REALLY bothered by someone doing that.

Assuming we have IR textures for all in-game assets, we can start off with simple contrast detection. We can automatically parse the list of possible candidates by simply discarding any polygon/triangle/pixel (depending on exactly how the engine handles this; it could be all three or just one) below a certain cutoff brightness (I'm going WHOT here). This cutoff point could be dynamically generated per refresh of the sensor based on the average IR brightness of the whole scene. Now I can determine if a given polygon/triangle/pixel has a brightness that is higher than what it's adjacent to (i.e. contrast). Anything that doesn't generate contrast is dropped from the list. This could honestly be cheated pretty effectively by saying only candidates that are greater than X% above average are "contrasty". Sure, we could lose some that actually have perfectly fine contrast, but in general it'd work about right and do it quickly.

We now have a list of possible "hot spot" candidates, each of which has been vetted as being above the average IR energy of everything in the FOV (i.e. the "cutoff"). If we have an open field with one tank in it, we probably only have the one hot spot candidate. Someone will complain that they want some additional hot spots. My question is why? In the real world, with the same setup of bare dirt and a running tank, I'd have to really have the sensitivity dialed up to throw up more false positives. That would be very counterproductive to actually finding and killing things.
So is it possible? Sure, but it's highly unlikely (for instance, in the weather selection I can't set it to rain frogs or have plagues of locusts, which, if we want 100% fidelity, should be addressed). Let's take a look at a more common case though: some buildings, a little water, and a few vehicles. Now we may have, let's say, a hundred or so polygons / a thousand or so triangles / 20-30,000 pixels (again, based on how the render engine is able to break things out, and I'm only referring to those being drawn by the engine) that are candidates by virtue of being above the cutoff. How do we sort through all that?

In principle, we're hunting for vehicles, not buildings. We don't need notification on large targets; yes, Virginia, the power plant exhaust is hot, but it's way too big to be a tank at any reasonable distance. Will this remove a tank from being a hot spot candidate when it fills 90% of my HUD FOV? Sure, but you have bigger problems then (I'm pretty sure you're aware there's a tank there right before you slam into it). So we simply say a poly above a certain dimension / so many contiguous triangles / so many contiguous pixels is removed from the list. Pixels would be the most computationally intensive to determine, but still not at any measurable increase in overhead to figure out the bounds of any given group of pixels.

Now we have a list of hot spot candidates that are above the cutoff, within size bounds, and have contrast. What we have left should be a fairly reasonably sized list of, let's say, 25 possible hot spot polys/triangles/pixel groups. In the real world, how this list gets curated is where the engineers make their money and the DOD slaps classification levels on everything. For a "feels about right" method we can easily fudge a few things: 1) Vehicle IR textures are the only ones using a true #FF/255/however the colors are listed (pure white); we build in a way to automatically determine what is a vehicle. 2) Shiny reflective things have the next brightest value (simulating a solar reflection), power plant stacks the next highest, etc. Basically we cheat and bias the system towards showing vehicles.

Now take that list of 25 candidates (so we'll display a bit under half of them as hot spots), take the value of the brightest one (let's say an actual vehicle), and throw anything within 10% of that value (so in this case #E6/230/whatever) into a list. If we have more than 10 (or whatever number of hot spots is selected to be shown), randomly select 10 and put them out there. If we have fewer, display the 10-percent list and just sequentially go down the list putting up markers. You will certainly get false positives, you will certainly miss real tracks at distance, and in general you will get a fairly reasonable hot spot tracker that's not god-like. Running a hot spot update every 1/4-1/2 second seems about right. You may see more flickering and jumping than in the real videos if you have a ton of similar IR values, but overall I think most of the time it'd work pretty close.

TL/DR If we have static IR maps (gradient black to white, #00-FF / 0-255, WHOT) of everything in game:
1) Determine the average brightness of everything in the FOV of the sensor.
2) Anything that's below the average brightness is tossed.
3) Anything that's left must be above a certain limit to generate contrast against the background (user-selectable or fixed, either way same idea); if it's not, it gets tossed.
4) What's left is checked to see if it's too big (user-selectable or fixed, either way same idea). If it is, it gets tossed.
5) What's left is checked to see if it's too small (user-selectable or fixed, either way same idea). If it is, it gets tossed.
6) What's left is rank-ordered by brightness.
7) Take the top ten percent and randomly select ten (or whatever the setting for the number to display is) of them and show them as hot spots. If the top ten percent is fewer than the number of hot spots to display, then just display the top 10 brightest items as hot spots. -Jenrick
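As a rough illustration of the seven-step TL/DR filter above, here is a hypothetical sketch operating on a single grayscale WHOT frame. The thresholds, blob sizes and function names are all invented for the example (they are not from any real NAVFLIR documentation), and scipy.ndimage is used only to group the surviving pixels into blobs.

```python
import numpy as np
from scipy import ndimage  # used only for simple blob labelling

def find_hotspots(frame, contrast=0.25, min_px=2, max_px=400,
                  max_markers=10, rng=np.random.default_rng()):
    """Sketch of the 7-step filter above on one WHOT frame (values 0-255)."""
    frame = frame.astype(float)

    # 1) average brightness of everything in the sensor FOV
    avg = frame.mean()

    # 2) + 3) toss anything below average, then anything below the contrast cutoff
    cutoff = avg * (1.0 + contrast)
    candidates = frame >= cutoff

    # group the surviving pixels into blobs
    labels, n = ndimage.label(candidates)
    spots = []
    for i in range(1, n + 1):
        mask = labels == i
        size = int(mask.sum())
        # 4) too big gets tossed, 5) too small gets tossed
        if size > max_px or size < min_px:
            continue
        ys, xs = np.nonzero(mask)
        spots.append((frame[mask].max(), int(xs.mean()), int(ys.mean())))

    # 6) rank by brightness
    spots.sort(key=lambda s: s[0], reverse=True)

    # 7) from the top ten percent pick up to max_markers at random,
    #    otherwise just take the brightest ones
    top = spots[:max(1, len(spots) // 10)] if spots else []
    if len(top) > max_markers:
        idx = rng.choice(len(top), size=max_markers, replace=False)
        top = [top[i] for i in idx]
    else:
        top = spots[:max_markers]
    return [(x, y) for _, x, y in top]   # screen positions for the V markers
```

Calling find_hotspots(frame) every few rendered frames and drawing a V at each returned screen position would give roughly the behaviour described, including the occasional false positive and the flicker from re-selection.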
Jenrick Posted January 20, 2018

Ideally, when ED upgrades the IR simulation they will include a mechanism that applies realistic/believable temperature values to every surface in the game. From there, it should be up to the individual sensor model to determine how it interprets and displays that environment information. It would be unfortunate if we got one set of IR textures, because that would only accurately reflect the capabilities of the one sensor used as the reference. Like radar cross section, IR properties depend on what wavelength is being sampled. An object looks very different in NIR than it does in LWIR.

Excellent point. However, temperature then requires some kind of IR model that uses that number to generate a value for the given type of IR detection being used. This could be very computationally intense. As video cards get better and gain more memory, various IR textures could accomplish the same thing without the processing overhead. To skip ahead to your last point, though, having one of the cores simply do IR processing could solve this problem!

Moreover, for all the discussion about polling object lists and populating matrices with 3D position information, I think that with an environmental simulation as I described above, none of that is necessary at all. Real EO/IR/CCD systems know nothing of the 3D position of the objects "seen". They simply see contrast, and image processing techniques are used to identify and track targets. For instance, the Maverick missile has a sensor which provides video output with x rows and y columns. The tracker algorithm is implemented in software, and it looks for differences in contrast of adjacent pixels under the crosshair. When commanded to track, it gradually expands its focus until it finds the lateral and vertical contrast boundary, just like that video posted a few pages back showing the white object tracker. If the detected object meets the minimum size (must be at least 'n' rows) and contrast requirements, then the object is tracked. If not, the pointing cross flashes and the crosshair disappears. As far as HUD symbology goes, again, 3D position is not needed for that. Everything from CCIP reticles, Sidewinder reticles, Maverick TD boxes, and BATA/BATR/FEDS symbology is drawn on the HUD without knowledge of 3D target position. All it needs is azimuth and elevation.

Absolutely agree. The only thing that matters is the literal two-dimensional display of what the sensor is seeing. This simplifies the problem considerably.

I'd like to see real image processing running on a separate core for these tasks. Once that happens, we can finally have realistic simulations of EO and IR target detection and tracking. Even flares could finally work correctly. I'm not a computer scientist, and I've always wondered why we've continued to buy 4- and 6-core computers for the past decade when it provides almost no benefit to gamers. Anyway, it would be nice if we ran radar, ECM, and EO/IR stuff using all that processing power we have idling in the background while we melt Core 0.

Amen! -Jenrick
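For readers unfamiliar with that style of tracker, here is a toy contrast-gate tracker in the spirit of the description above. It is emphatically not the real Maverick algorithm: the gate-growing rule, thresholds and names are all invented, and it only illustrates that tracking can happen entirely in sensor image space with no 3D knowledge.

```python
import numpy as np

def centroid_track(video_frame, cross_xy, gate=3, min_rows=2, min_contrast=30):
    """Grow a gate around the crosshair until the contrast boundary is found,
    then accept or reject the track.  All thresholds are made-up values.

    video_frame -- 2D grayscale array (rows x cols), the seeker video
    cross_xy    -- (col, row) position of the crosshair
    Returns the tracked centroid (col, row) or None if the track is rejected.
    """
    h, w = video_frame.shape
    cx, cy = cross_xy
    seed = float(video_frame[cy, cx])

    # expand the gate until the object under the crosshair no longer touches
    # the gate edge (a crude "lateral and vertical boundary" test)
    for g in range(gate, min(h, w) // 2):
        y0, y1 = max(0, cy - g), min(h, cy + g + 1)
        x0, x1 = max(0, cx - g), min(w, cx + g + 1)
        inner = video_frame[y0:y1, x0:x1].astype(float)
        object_mask = np.abs(inner - seed) < min_contrast
        rows_hit = np.count_nonzero(object_mask.any(axis=1))
        edge = np.concatenate([object_mask[0], object_mask[-1],
                               object_mask[:, 0], object_mask[:, -1]])
        if not edge.any():
            if rows_hit >= min_rows:
                ys, xs = np.nonzero(object_mask)
                return (x0 + int(xs.mean()), y0 + int(ys.mean()))
            return None  # too small: flash the pointing cross, drop the box
    return None

# Toy usage: a bright 3x3 blob on a dark background, crosshair on the blob.
img = np.zeros((32, 32), dtype=np.uint8)
img[14:17, 14:17] = 200
print(centroid_track(img, (15, 15)))
```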
shagrat Posted January 21, 2018 (quoting the post about a temperature-based IR simulation above, in full) There are multiple discussions about why it is not simply "use my idling CPU cores"; I will not start this again. As discussed in detail in this thread, the challenge is to get a positional 3D vector from each "hotspot" in relation to the player's airplane, as we render a 3D environment before it is flattened into the 2D picture that is shown by a monitor, or two monitors in VR. For example, the Maverick seeker does not look at the picture from the monitor and does not need to care about things like "where does the HUD frame start/end" or "which parts of the screen are actually outside and inside the cockpit". It simply looks for contrast on the whole screen. So in theory we could calculate another layer of the world, with IR values for textures. Actually, pixels in an additional texture for everything from ground to object textures, say a grey value that represents temperature. I'll leave the additional texture passes needed to do this out of the equation for the time being. We just assume it is easy to do and doesn't impact performance.
Now, if you identify the temperature value at a specific bunch of pixels in a texture, how do you calculate the 3D position of the spot? You need to reference the texture to the underlying reference grid. That requires actually checking all pixels in the visible area around the plane's coordinates, getting the IR value and coordinates into a list, then filtering for the IR values that are above the threshold, then calculating whether the identified hotspots are in the FoV, and putting a "V" marker onto your HUD on the correct vector between your virtual head and the hotspot, as the marker is independent from the textures. Now you need to do this at least every couple of frames... As Zeus67 already mentioned, even going through just the few (in comparison to the pixels) map objects hits the framerate tremendously, so currently it definitely isn't simple. On the contrary, just the map objects stall the whole simulation, so it is a no-go.

No offense, but don't you think ED would have "changed" the way DCS simulates IR already, if it were "so easy"? It is very unlikely they lack the "knowledge" of how IR works in real life, or don't know how this can be represented. It is more like a scientist who knows that you can easily get close to light speed if you simply accelerate the crate to 299,700 km/sec by burning a rocket with the adequate amount of fuel for the appropriate time. In theory that is simple, unless you actually try to build a rocket that manages to do it, that is. ;)
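For what it's worth, the purely geometric part of what shagrat describes (turning an already-known hotspot world position into a V-marker position on the HUD) is straightforward; the expensive part is finding the hotspot positions in the first place. A minimal sketch, assuming you already have the hotspot's world coordinates, the pilot's eye position and boresight frame, and a simplified flat HUD; the axis conventions and FOV numbers are illustrative only:

```python
import numpy as np

def hud_marker(hotspot_world, eye_world, forward, up,
               hud_fov_deg=20.0, hud_half_size=1.0):
    """Project a hotspot's world position onto the HUD glass.

    hotspot_world, eye_world -- 3D points (numpy arrays)
    forward, up              -- unit vectors of the aircraft boresight frame
    Returns (x, y) in normalised HUD coordinates, or None if outside the FOV.
    """
    forward = forward / np.linalg.norm(forward)
    up = up / np.linalg.norm(up)
    right = np.cross(up, forward)             # x right, y up, z forward

    los = hotspot_world - eye_world           # line of sight to the hotspot
    dist = np.linalg.norm(los)
    if dist == 0:
        return None
    los = los / dist

    # azimuth/elevation of the hotspot relative to the boresight
    az = np.degrees(np.arctan2(np.dot(los, right), np.dot(los, forward)))
    el = np.degrees(np.arctan2(np.dot(los, up),    np.dot(los, forward)))

    half_fov = hud_fov_deg / 2.0
    if abs(az) > half_fov or abs(el) > half_fov or np.dot(los, forward) <= 0:
        return None                            # hotspot not in the HUD FOV

    # linear mapping of angles onto the HUD glass (good enough for a sketch)
    return ((az / half_fov) * hud_half_size,
            (el / half_fov) * hud_half_size)

# Toy usage: hotspot about 2 km ahead, slightly right of and below boresight.
eye = np.array([0.0, 1000.0, 0.0])
spot = np.array([150.0, 950.0, 2000.0])
print(hud_marker(spot, eye, forward=np.array([0.0, 0.0, 1.0]),
                 up=np.array([0.0, 1.0, 0.0])))
```

This does nothing to answer the performance question shagrat raises about scanning the scene for hotspots every few frames; it only shows that the final HUD placement needs just an azimuth/elevation pair.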
shagrat Posted January 21, 2018

Assuming we have IR textures for all in-game assets, we can start off with simple contrast detection. We can automatically parse the list of possible candidates by simply discarding any polygon/triangle/pixel (...)

What list of polygons would you parse? There is no list in memory of which pixel is on which polygon that can give you a coordinate. Maybe you can put the three points of each polygon into a list, which, by the way, you need to store in memory before you can parse it. As discussed, the challenge is to get a 3D vector from a pixel without a huge performance hit.

(...) Let's take a look at a more common case though, which would be some buildings, a little water, and a few vehicles. Now we may have, let's say, a hundred or so polygons / a thousand or so triangles / 20-30,000 pixels (...)

Ehm, do you have any idea how many polygons a normal scene contains? ...and pixels are simply your screen height multiplied by width, so 1920*1080 = 2,073,600 pixels for each frame.

(quoting the TL/DR list from Jenrick's post above)

And we still need the coordinates of these 2D hotspots to put the markers onto the HUD, as the HUD is an object placed between the camera position (your virtual eyes) and the ground in a 3D render scene, before we can render the screen output...
Jenrick Posted January 21, 2018 (quoting shagrat's reply above) I think that's where we're kind of talking across each other. Yes, I understand that the graphics engine/renderer does this all in 3D. However, what I'm discussing is extending the engine's functionality, or simply doing this as a post-rendering process. Now, I'm not sure specifically how the DCS engine is written, what areas are easy to dig down into, etc.

For example, I could very easily write a simplistic program that ran over the top of DCS, and in a particular part of the display it would convert everything to B/W and go through and sort pixels as I discussed. I could do this at a very high level using plug-and-play components to do it for me. I could go WAY down and literally look at individual pixels as they are rastered onto the display. I could decide I'm looking for individual pixels, or I could look for 1" blobs. Overall it's a very simple thing to do. I could make whatever I decide I'm looking for (let's say 1/4" blobs, or some reasonably close approximation for your monitor resolution) turn bright red. That's all we need with the hot spot detector. Hell, Photoshop could do it for me frame by frame: take a screenshot, send it to PS to auto-process, send it back to DCS, and have it display. Computationally unfeasible? Absolutely, but the principle is identical.

I get that the engine probably doesn't support this type of thing as native functionality. Would it be a HUGE kludge to have an overlay just pinned to the four corners of the FLIR display (both HUD and MFD), scaling and squeezing as the angle of view changes? Oh heck yes. However, it would work. Would it be an elegant solution? No. Would it be a good solution? No. But it would be a solution. In this case you're asking me to get my Kerbals to the Mun using only basic parts, not get to light speed. Sure it can be done, but it just ain't gonna be pretty. I do understand that ED probably doesn't want/need that sort of kludge stacked on top of their rendered scene.
I get that it'd be a decent bit of hassle for a specific sensor in a specific aircraft. However, the question was asked HOW, not whether it's a good idea. -Jenrick
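As a sketch of that post-process overlay idea (hypothetical, and glossing over how you would actually read the framebuffer back or hook into the render pipeline), detection is run only on the FLIR display rectangle of the finished frame and small carets are stamped directly into the 2D image. The find_hotspots argument stands in for any screen-space detector, for example the one sketched earlier in the thread.

```python
import numpy as np

def stamp_carets(framebuffer, display_rect, find_hotspots):
    """Run detection on the FLIR display area of a finished frame and
    draw a small 'V' of bright pixels over each hit.

    framebuffer   -- 2D grayscale frame as a numpy array (a real one is RGBA)
    display_rect  -- (x, y, w, h) of the HUD/MFD FLIR area on screen
    find_hotspots -- any detector taking a 2D array and returning (x, y) hits
    """
    x, y, w, h = display_rect
    sub = framebuffer[y:y + h, x:x + w]

    for hx, hy in find_hotspots(sub):
        # translate back to full-screen coordinates and draw a tiny V
        sx, sy = x + hx, y + hy
        for i in range(1, 4):
            for px, py in ((sx - i, sy - i), (sx + i, sy - i)):
                if 0 <= py < framebuffer.shape[0] and 0 <= px < framebuffer.shape[1]:
                    framebuffer[py, px] = 255
    return framebuffer

# Toy usage with a dummy detector that always reports one hit at (5, 5).
frame = np.zeros((64, 64), dtype=np.uint8)
stamp_carets(frame, (10, 10, 40, 40), lambda sub: [(5, 5)])
print(frame[10:20, 10:20])
```

As shagrat points out below, this says nothing about whether such a screen-space kludge is desirable in DCS; it only shows what a 2D post effect would look like.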
shagrat Posted January 21, 2018 (quoting Jenrick's reply above in full) The engine can do this already; it is basically how FLIR images and camera views on MFDs are rendered. Each additional pass hits frame rates, though. And the problem we discuss is how to identify a spot on the ground and get its position to put a marker on the HUD, all without spending a couple of CPU cycles before the final render passes can generate the frame from the scene.

I found an old video showing how the HUD is simulated in DCS. The key is that it is a texture object in the 3D scene between your eyes/head/the camera and the 3D world outside the cockpit. You need to render the markers before you have the 2D image. You sometimes noticed this during early access with some modules, when the HUD borders were not adjusted and parts of the HUD showed outside the "glass".
Jenrick Posted January 21, 2018 I'll start out with this: I get what you're saying about having the carets appear on the HUD in the normal manner. I have to have them linked to a point in space so the engine can put them on the HUD glass, so they show up in the correct spot regardless of the eye point the scene is being rendered from, so that the engine can render the white V. Fair enough. However, I'm saying this COULD be done as a post effect.

What list of polygons would you parse? There is no list in memory of which pixel is on which polygon that can give you a coordinate. (...) As discussed, the challenge is to get a 3D vector from a pixel without a huge performance hit.

The engine at some point in its workings finalizes the list of polys/triangles that are going to be drawn in the given scene versus clipped out, correct? So we now have a list of all the polys/triangles to be drawn. Hell, we can even bound the area that is the HUD or FLIR MFD, and if those aren't included, then we get to skip all this rigmarole as we'll never need to check it. Also, if you really want to get in the weeds, we actually have the full listing of pixels that are going to be displayed, in XY format with a color/brightness value, in VRAM too.

Ehm, do you have any idea how many polygons a normal scene contains? ...and pixels are simply your screen height multiplied by width, so 1920*1080 = 2,073,600 pixels for each frame.

I will clarify: that was in reference to what is within the FOV of the sensor display; again, this is a post effect. Yes, and my numbers are based on the primitives used at longer LOD distances (I'm assuming a rectangular building in DCS at the edge of visibility isn't using 25K vertices, for instance), not an HD close-range model. Also, yes, I know pixel counts are based on screen size; Dell's newest 5K display tops out at about 14.7 million pixels. I simply chose to list polys/triangles/vertices/pixels to cover all the bases on however the engine chooses to look at what it's rendering. Also remember we've already dropped anything under the scene average brightness, and whatever percentage is below the "contrast cutoff". On a true random distribution of values from 0-255 (black to white), yes, we'd have about 37.5% of the polys/triangles/pixels left to evaluate, assuming a 25% contrast ratio, which is WAY low. In general the majority of the rendered scenery isn't going to be anything close to a random distribution, and you're probably going to want 70%-80% contrast at a minimum.

And we still need the coordinates of these 2D hotspots to put the markers onto the HUD, as the HUD is an object placed between the camera position (your virtual eyes) and the ground in a 3D render scene, before we can render the screen output...

Partially correct, partially incorrect. Yes, to render the scene in beautiful fluid 3D, all the above has to occur. To then go in after all that is buffered and waiting to be pushed down the pipeline, check a portion of the given frame in the manner I've described, and stick a couple of white pixels forming a "V" on top of certain pixels, none of that is required. -Jenrick
Jenrick Posted January 21, 2018 (edited)

The engine can do this already; it is basically how FLIR images and camera views on MFDs are rendered. Each additional pass hits frame rates, though. And the problem we discuss is how to identify a spot on the ground and get its position to put a marker on the HUD, all without spending a couple of CPU cycles before the final render passes can generate the frame from the scene.

If that's how the FLIR MFD image is generated... add your white marker carets there, in a nice simple 2D display, and then that's copied to the FLIR HUD display. For daytime use, simply have the FLIR overlay only bring over the carets. Am I missing something? The FLIR image in the MFD is identical to the one projected on the HUD, correct? I'm also not trying to be argumentative here; it's a lot harder solving a problem when you don't have the tech specs in front of you for what you're trying to work with. -Jenrick Edited January 21, 2018 by Jenrick
shagrat Posted January 21, 2018 (quoting Jenrick's post above in full) Well, if it's that easy, ED will sure figure it out and implement it. :dunno:
SUNTSAG Posted January 27, 2018 Please let me start by saying I am not an ULTRA realist when it comes to DCS. Where the Harrier is concerned (an airframe I have a history and affinity with), I would prefer to see the TCIs implemented at some point, regardless of whether this means finding a way to add heat signatures to elements or whether the use of Vec3 points is feasible. It does not matter that it would provide an advantage and, as some have said, "make things too easy". It's simply a function of the airframe and its avionics suite. For me it does not matter if it replicates the real thing 100%; even reduced functionality is better than none at all. So I hope that between both Razbam and ED a working solution can be found... I have my fingers crossed, thanks.
Shabi Posted January 29, 2018

Well, if it's that easy, ED will sure figure it out and implement it. :dunno:

Just add an IR component to all shaders, then take a rendered framebuffer and use TensorFlow on it to do some sort of convolution filtering. EASY! hah. maybe.
Shrike88 Posted January 29, 2018 I would be happily content, for the meantime, with it tagging active vehicles, since this is already modeled in the current engine, will carry over to 2.5, and is a function of the active code. The aircraft is WIP as it is, so if it's not too extremely difficult, paint the active vehicles and then, based on how it performs and changes, it can be augmented in the future. Also, this feature in the aircraft can be turned on and off anyway.
Fri13 Posted January 29, 2018 Amazing things: 1) The system is simulated by implementing it. 2) The system works by detecting 100% of the active ground units, and only those. Now, yes, it is unrealistic that there aren't false readouts, but we just need to accept that, just like we have accepted unrealistic missile performance, radar performance, TGP performance, etc. to this date. The good news is that in the future things can change when new features are added to DCS, so things can be made more realistic. I could accept some random false V tags being placed around, but that is also more difficult to implement than not having them, so I would take the "über FLIR" version over one that you eventually spot to be fake.
DragonFlySlayer Posted January 30, 2018 I seem to have a problem scrolling with the mouse wheel inside the Harrier cockpit to zoom the view in and out. Outside the aircraft I can zoom in close or further back, but inside the cockpit with my head tracker I can look all around fine, yet cannot zoom forward or back using the mouse wheel. Upon opening TrackIR v5 I can scroll forward or back. Is anyone familiar with this? Is it normal with the Harrier? Thanks Marc
Ratfink Posted November 19, 2019 Any news on when the NAVFLIR hotspot detector will be implemented at all?
Harlikwin Posted November 20, 2019

Any news on when the NAVFLIR hotspot detector will be implemented at all?

Wow, way to necro a thread. At a guess, and mind you it's a guess: not in the near future, but maybe once ED releases the next engine version, which I think will have the improved IR rendering stuff. As we have seen, they are willing to "update" or redo modules (i.e. the recent MLU for the M2k). So I wouldn't expect it, but maybe someday.
moespeeds Posted December 16, 2019 2+ years later and we still don't have this very key feature implemented....
Harlikwin Posted December 16, 2019

2+ years later and we still don't have this very key feature implemented....

Hush, the white knights will tell you there is nothing wrong with the Harrier and you need to adapt and overcome. And then the Brits will show up and call you a moaner...
Cunctator Posted December 17, 2019 A realistic implementation of the hotspot detector might actually depend on the new FLIR model ED is developing together with the ATFLIR pod for the Hornet.
Harlikwin Posted December 17, 2019

A realistic implementation of the hotspot detector might actually depend on the new FLIR model ED is developing together with the ATFLIR pod for the Hornet.

Yeah, I agree. I remember reading Zeus's post as to why it is hard with the current "model", if you want to call it that. Hopefully it could be something good, if the new model doesn't suck.
Sarge55 Posted December 17, 2019 I hope it is something you can turn off or declutter. The HUD gets very busy as it is with the RWR lollipops, and adding hot spots will just make it worse. Unless someone can point me to a way to turn off the lollipops without turning off the RWR.