Everything posted by Jenrick

  1. Boom and zoom is what you've got. If you are initiating a head-on pass, real world, you screwed up in letting the bandit get co-altitude and his nose around to face you. For DACM it's good to do this occasionally to see how to get out of it. I'd start out practicing from a classic 6 o'clock ambush, either high or low. Come screaming in at the speed of heat, one pass, and hopefully their smoking wreckage is the end result. Once you get that down (which, due to a lower closure rate, isn't as tricky as other options), switch to a 3/4 attack (7-8 o'clock and 4-5 o'clock), then directly abeam, moving to quartering from the front and finally head on. Practice high and low; co-altitude teaches you nothing but bad habits and is exceedingly rare in practice. Head on is the hardest due to the massive closure rate: you have little time to set up, stabilize, and engage. If your initial engagement didn't work, extend, extend, and extend some more. You want to create enough space via your speed to be able to set up in your preferred engagement geometry again. You'll probably end up with a head-on pass (particularly with the AI), but you should be able to dictate an altitude advantage at the least. In a furball, pick a fresh victim and go for it. -Jenrick
  2. Make sure Force Feedback is unchecked in your settings. -Jenrick
  3. You might try skipping hitting CLR; no clue why it would change anything, but I never do that and it works fine for me otherwise. -Jenrick
  4. Jenrick

    MiG-19 hype?

    I'd toss out that far more people would be happy to have a dogfight between some variant of the F-4 and some variant of the MiG-21 than there are people who would be bothered by the fact that they aren't variants that faced each other in reality. -Jenrick
  5. I have no clue where people are getting the idea the FM isn't affected by wind. It takes 2 minutes to set up a scenario to test it and see that that's completely incorrect. Moderate winds drive me crazy trying to get rocket attack runs in with the L model. -Jenrick
  6. Also flares: I haven't tested them lately to see if they're working in 2.5, but they would cause a CTD in 1.5.8. -Jenrick
  7. Agreed, interesting read. There is definitely a different feel to the 2.5 FM over the 1.5.8 FM. I feel as confident in 1.5.8 as someone using the keyboard for the rudder and collective can. In 2.5 it's a LOT different, and in general "twitchier". To the OP: are you in 2.5 or 1.5.8? -Jenrick
  8. Just to provide a little more info on the Zuni: as noted, they are a 5" rocket. The name Zuni has only ever referred to this system; the LAU-10 is the designation for the 4-round launcher. The F-8 Crusader actually just stuck them on the Sidewinder rails in two-packs (2 rockets per rail); I don't recall the designation for that off the top of my head. The Zuni has a MUCH larger warhead than the Hydra series of rockets. The M151 2.75" HE-Frag warhead contains 2.3 lbs of Comp B explosive; the Mk 63 5" rocket HE-Frag warhead contains 15 lbs of Comp B. The general description of a Zuni's effect is an air-launched 155mm artillery shell, where the 2.75" is about an 81mm shell or so.

Iron bombs carry far larger amounts of explosive, though. Even the lowly Mk 81 carries 96 lbs of Tritonal (which is functionally the same as Comp B), which is almost the full weight of a Zuni rocket (motor and warhead). The Mk 82 carries 192 lbs of Tritonal. Blast effect, hands down, goes to the bombs. The actual effect on target is going to depend on fuzing, burst height, etc., but in general the larger and heavier cast bomb body is going to create larger and heavier fragments that travel farther. An HE-Frag rocket body will create more numerous but smaller and lighter fragments that travel a lesser distance: pretty much the same difference you see between an HE artillery shell and a hand grenade.

The Zuni is frequently cited as being far more accurate than the Hydra (larger rocket, larger motor, better aero?). In Vietnam, for the SEAD mission, Crusaders would put a pair of Zunis into a flak pit or an SA-2 site (individual launcher pit) and consider it a mission kill. Hydras are fired in at least half-pod salvos (quite often a whole pod), just to ensure enough rockets land in the general area of the target. Even with all the whizbang computers in the AH-64 Apache, Hydras are considered area-effect weapons, and in general at least 6 pairs of rockets should be fired at a given area target.

Also, DCS's damage model nerfs Hydras even more. An HE-Frag Hydra has a 5m-25m casualty radius (troops in cover vs. troops in the open), which is largely the result of fragmentation, which DCS doesn't model. Land a Hydra a few meters off a juicy soft target like a transport truck: no joy, it's probably not damaged. The end result is you need to use far more rockets than you should to destroy soft targets. They really don't have an advantage against soft targets compared to any of the AC cannons currently in DCS. Against hard targets that a 20mm might not damage, a Hydra can potentially cause a kill with a direct hit. Zunis are accurate enough to fire single rockets and get direct hits, negating much of the issue with the lack of fragmentation.

TL/DR: Zuni vs Hydra: WAY bigger rocket, WAY bigger warhead, WAY more accurate. Hydra vs Zuni: you carry WAY more of them (but you'll need them, as they lack all the above). -Jenrick
  9. One thing I found helped with that is to change where you're creating your missions. Give yourself some good terrain to work with: buildings and valleys to hide in. I was setting my stuff up in open country where I really didn't have a lot of places to hide. That changed things dramatically for me. -Jenrick
  10. I know that awhile back the sight was changed to have more similarity to the Thales T100. Currently the pipper is at the identical elevation for both weapon systems, giving the rockets an approximate aim point of 0.6 NM, or about 1100 meters, when fired in level flight from about 450 m, or in a shallow dive from that altitude. The US FM 3-04.140 (FM 1-140), Helicopter Gunnery, details the 70mm Hydra rocket as being most effective between 3000m-5000m. The SNEB is slower than the Hydra at burnout, but I'd guess the optimum engagement distance is still probably longer than 1000 m. Ideally the sight would be adjustable, just as the actual T100 is, allowing for adjustments based on the observed impact of the rounds. I get that this is probably more functionality than you'd like to spend time putting into the sight itself. At a minimum, would it be possible to have the rocket reticle shifted so that its center point is the 3000m impact point? Thanks!
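The sight geometry in that post can be checked with a flat-fire sketch: a straight-line trajectory from the stated altitude, ignoring gravity drop and motor burn entirely. The function name and all figures are illustrative, not taken from the sim.

```python
import math

def depression_deg(altitude_m: float, ground_range_m: float) -> float:
    """Sight depression below the horizon that points at a ground impact
    at the given range, assuming a straight-line (no-drop) trajectory
    from level flight at the given altitude."""
    return math.degrees(math.atan2(altitude_m, ground_range_m))

# Figures from the post: ~1100 m aim point from ~450 m altitude.
current = depression_deg(450, 1100)   # roughly 22 degrees below horizon
desired = depression_deg(450, 3000)   # roughly 8.5 degrees for a 3000 m point
shift_up = current - desired          # how far the reticle would move up
```

This ignores the rocket's arcing trajectory, so the real reticle shift would differ; it only illustrates why a fixed pipper set for ~1100 m sits far too low for a 3000 m engagement.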
  11. I'd imagine the countries used an ATC model similar to the US, meaning QNH, and the pilots had charts with field elevations (or just asked the controller). -Jenrick
  12. I'm not using 2.5, but normally that error shows up when you have more than 2 types of weapons loaded. No clue if that's a real-world limitation of the stores system (I haven't bothered to research it). Try it with just rockets and see if you get the same issue. -Jenrick
  13. IIRC the default control is LALT+L; move your view point around and the beam comes on after that. -Jenrick
  14. IIRC that's on hold until ED/DCS gets incendiaries in general. It's on the list, but not until ED adds it to the actual engine. -Jenrick
  15. IIRC currently COM1 and COM2 are swapped. Try dialing in on COM2.
  16. Jenrick

    Fuzes

    If you read the manuals, the Harrier doesn't actually allow for a ton of in-flight changes to the weapons. Most are ground-set, and most of the weapons the Harrier carries don't have a lot of fuzing options to begin with. Regarding the Mk 20: PR is 1.2 seconds from release to burst, OP is 4 seconds. Yes, dispersion is modeled for the Rockeye based on burst height, altitude, etc. -Jenrick
  17. PR gives you a 1.2-second delay from release to burst. OP gives you a 4-second delay. DCS does model the field size based on when the Rockeye bursts, ranging from a lawn dart with no burst to a giant, very loose field. -Jenrick
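A rough way to see why the two fuze delays demand different release altitudes: ignoring drag, a store released in level flight falls one half g t squared during the delay, so that much altitude has to remain below the jet when the canister opens. A sketch under those simplifying assumptions, not the sim's actual ballistics:

```python
G = 9.81  # m/s^2, gravitational acceleration

def drop_during_delay(delay_s: float, sink_rate_mps: float = 0.0) -> float:
    """Vertical drop (m) of a free-falling store during the fuze delay,
    drag ignored; sink_rate_mps is the downward speed at release
    (0 for a level release, positive in a dive)."""
    return sink_rate_mps * delay_s + 0.5 * G * delay_s ** 2

pr_drop = drop_during_delay(1.2)  # ~7 m lost during the 1.2 s PR delay
op_drop = drop_during_delay(4.0)  # ~78 m lost during the 4 s OP delay
```

So from a level release the OP setting opens the canister roughly ten times lower relative to the release point, which is why it produces the much tighter, later burst.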
  18. The ARBS, however, is, and IRL it can generate an aimpoint from a waypoint or other INS location. I have no clue if it can hand this data off to the TGP, though. -Jenrick
  19. Use DMT-down (not aft) to speed up the slew rate. -Jenrick
  20. Hmm I'll check that out. -Jenrick
  21. If that's how the FLIR MFD image is generated... add your white marker carets there in a nice simple 2D display, and then that's copied to the FLIR HUD display. For daytime use, simply have the FLIR overlay only bring over the carets. Am I missing something? The FLIR image in the MFD is identical to the one projected on the HUD, correct? I'm also not trying to be argumentative here; it's a lot different solving a problem when you don't have the tech specs in front of you for what you're trying to work with. -Jenrick
  22. I'll start out with this: I get what you're saying about having the carets appear on the HUD in the normal manner. I have to have them linked to a point in space so the engine can put them on the HUD glass, so they show up in the correct spot regardless of the eye point the scene is being rendered from, so that the engine can render the white V. Fair enough. However, I'm saying this COULD be done as a post effect. The engine at some point in its workings finalizes the list of polys/triangles that are going to be drawn in the given scene versus clipped out, correct? So we now have a list of all the polys/triangles to be drawn. Hell, we can even bound the area that is the HUD or FLIR MFD, and if those aren't included, then we get to skip all this rigmarole, as we'll never need to check it. Also, if you really want to get in the weeds, we actually have the full listing of pixels that are going to be displayed, in XY format with a color/brightness value, sitting in VRAM too.

I will clarify: that was in reference to what is within the FOV of the sensor display; again, this is a post effect. Yes, and my numbers are based on the primitives used at longer LOD distances (I'm assuming a rectangular building in DCS at the edge of visibility isn't using 25K vertices, for instance), not an HD close-range model. Also, yes, I know pixel counts are based on screen size; Dell's newest 5K display tops out at about 14.7 million pixels. I simply chose to list polys/triangles/vertices/pixels to cover all the bases on however the engine chooses to look at what it's rendering. Also remember we've already dropped anything under the scene average brightness, and whatever percentage is below the "contrast cutoff". On a true random distribution of values from 0-255 (black to white), yes, we'd have about 37.5% of the polys/triangles/pixels left to evaluate assuming a 25% contrast ratio, which is WAY low. In general the majority of the rendered scenery isn't going to be anything close to a random distribution, and you're probably going to want 70%-80% contrast at a minimum.

Partially correct, partially incorrect. Yes, to render the scene in beautiful fluid 3D, all the above has to occur. But to then go in after all that is buffered and waiting to be pushed down the pipeline, check a portion of the given frame in the manner I've described, and stick a couple of white pixels forming a "V" on top of certain pixels, none of that is required. -Jenrick
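The post-effect pass described above (drop everything under the scene average, then everything under the contrast cutoff) can be sketched over a plain grayscale buffer. The function name and the frame layout are made up for illustration; a real implementation would read the sensor region out of the framebuffer.

```python
def contrast_candidates(frame, cutoff_pct=25):
    """Post-effect filter over a grayscale frame (list of rows of 0-255
    values): drop everything at or below the scene average brightness,
    then keep only pixels exceeding the average by cutoff_pct percent.
    Returns (x, y) coordinates of the surviving 'hot' pixels."""
    pixels = [v for row in frame for v in row]
    avg = sum(pixels) / len(pixels)
    threshold = avg * (1 + cutoff_pct / 100)
    return [(x, y)
            for y, row in enumerate(frame)
            for x, v in enumerate(row)
            if v > threshold]

# A mostly cool scene with one hot pixel: average ~31, threshold ~39,
# so only the center pixel survives the cutoff.
frame = [[10, 10, 10],
         [10, 200, 10],
         [10, 10, 10]]
```

As the post notes, real scenes are far from uniformly distributed, so in practice a much higher cutoff percentage would be needed to thin the candidate list.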
  23. I think that's where we're kind of talking across each other. Yes, I understand that the graphics engine/renderer does all this in 3D. However, what I'm discussing is extending the engine's functionality, or simply doing this as a post-rendering process. Now, I'm not sure specifically how the DCS engine is written, what areas are easy to dig down into, etc. For example, I could very easily write a simplistic program that ran over the top of DCS and, in a particular part of the display, converted everything to B/W and sorted pixels as I discussed. I could do this at a very high level using plug-and-play components, or I could go WAY down and literally look at individual pixels as they are rastered onto the display. I could decide I'm looking for individual pixels, or I could look for 1" blobs. Overall it's a very simple thing to do. I could make whatever I decide I'm looking for (let's say 1/4" blobs, or some reasonably close approximation for your monitor resolution) turn bright red. That's all we need for the hot spot detector. Hell, Photoshop could do it for me frame by frame: take a screen shot, send it to PS to auto-process, send it back to DCS, and have it display. Computationally unfeasible? Absolutely. However, the principle is identical.

I get that the engine probably doesn't support this type of thing as native functionality. Would it be a HUGE kludge to have an overlay just pinned to the 4 corners of the FLIR display (both HUD and MFD), scaling and squeezing as the angle of view changes? Oh heck yes. However, it would work. Would it be an elegant solution? No. Would it be a good solution? No. But it would be a solution. In this case you're asking me to get my Kerbals to the Mun using only basic parts, not to get to light speed. Sure it can be done, but it just ain't gonna be pretty. I do understand that ED probably doesn't want/need that sort of kludge stacked on top of their rendered scene, and I get that it'd be a decent bit of hassle for a specific sensor in a specific aircraft. However, the question asked was HOW, not whether it's a good idea. -Jenrick
  24. Excellent point. However, temperature then requires some kind of IR modeling that uses that number to generate a value for the given type of IR detection being used. This could be very computationally intense. As video cards get better and gain more memory, various IR textures could accomplish the same thing without the processing overhead. To skip ahead to your last point, though: having one of the cores simply do IR processing could solve this problem! Absolutely agree. The only thing that matters is the literal 2-dimensional display of what the sensor is seeing. This simplifies the problem considerably. Amen! -Jenrick
  25. So the simplest option, if everything in game had an IR map, would be to just emulate exactly how the hot spot detector works in real life. I'm sure there's a tech manual out there somewhere ;) Just kidding. Without knowing specifically how the hot spot detector works, let's make a couple of conjectures:

1) It probably uses some kind of filtering to help eliminate false positives. This could range from something as simple as early AIM-9 IR filters, which physically blocked certain IR wavelengths that were only found in flares, to something way more complex. Probably the latter.

2) The system is probably designed to look for something that has contrast against the background IR levels. This is basically a given, I'd say, and is probably one of the settings that can be tweaked.

3) The system may look for certain sizes and shapes of IR signatures to indicate potential targets. A single flare in the middle of a Siberian field in winter is certainly a hot spot, but is it something the pilot will care about? A refractory tower, while stupid hot, is probably too hot to be something the pilot is looking for (a burning vehicle would be the closest, though it could be a launch signature too, I suppose). This is probably one of the settings that can be tweaked.

4) The system is primarily set up to help locate difficult-to-see/hidden targets, rather than indicating that, yes, there is a giant plume of hot exhaust coming from a power plant.

5) Based on the posted videos, it evaluates and discards/adds hot spots continuously (not really a conjecture, unless we're all misinterpreting the videos above).

Now if we wanted full fidelity we'd simply design the system to take all the above into account and basically reverse engineer the hot spot detector. Problem solved! However, that's not an efficient use of design time, it's way over-engineered, and the DOD/MOD might get REALLY bothered by someone doing that.

Assuming we have IR textures for all in-game assets, we can start off with simple contrast detection. We can automatically parse the list of possible candidates by discarding any polygon/triangle/pixel (depending on exactly how the engine handles this; it could be all three or just one) below a certain cutoff brightness (I'm going WHOT here). This cutoff point could be dynamically generated per refresh of the sensor, based on the average IR brightness of the whole scene. Now I can determine if a given polygon/triangle/pixel has a brightness higher than what it's adjacent to (i.e. contrast). Anything that doesn't generate contrast is removed from the list. This could honestly be cheated pretty effectively by saying only candidates that are greater than X% above average are "contrasty". Sure, we could lose some that actually have perfectly fine contrast, but in general it'd work about right and do it quickly.

We now have a list of possible "hot spot" candidates, each of which has been vetted as being above the average IR energy of everything in the FOV (i.e. the "cutoff"). If we have an open field with 1 tank in it, we probably only have the 1 hot spot candidate. Someone will complain that they want some additional hot spots. My question is: why? In the real world, with the same setup of bare dirt and a running tank, I'd have to really have the sensitivity dialed up to throw up more false positives, which would be very counterproductive to actually finding and killing things. So is it possible? Sure, but it's highly unlikely (for instance, in the weather selection I can't set it to rain frogs or have plagues of locusts, which, if we want 100% fidelity, should be addressed). Let's take a look at a more common case though: some buildings, a little water, and a few vehicles.

Now we may have, let's say, a hundred or so polygons/a thousand or so triangles/20-30,000 pixels (again, based on how the render engine is able to break things out, and I'm only referring to those being drawn by the engine) that are candidates by virtue of being above the cutoff. How do we sort through all that? In principle, we're hunting for vehicles, not buildings. We don't need notification on large targets; yes, Virginia, the power plant exhaust is hot, but it's way too big to be a tank at any reasonable distance. Will this remove a tank from being a hot spot candidate when it fills 90% of my HUD FOV? Sure, but you have bigger problems then (I'm pretty sure you're aware there's a tank there right before you slam into it). So we simply say a poly above a certain dimension/so many contiguous triangles/so many contiguous pixels gets removed from the list. Pixels would be the most computationally intensive to determine, but still not at any measurable increase in overhead to figure out the bounds of a given group of pixels.

Now we have a list of hot spot candidates that are above the cutoff, within size bounds, and have contrast. What we have left should be a fairly reasonably sized list of, let's say, 25 possible hot spot polys/triangles/pixel groups. In the real world, how this list gets curated is where the engineers make their money and the DOD slaps classification levels on everything. For a "feels about right" method we can easily fudge a few things: 1) Vehicle IR textures are the only ones using a true #FF/255/however the colors are listed (pure white); we build in a way to automatically determine what is a vehicle. 2) Shiny reflective things have the next brightest value (simulating a solar reflection), power plant stacks the next highest, etc. Basically we cheat and bias the system towards showing vehicles.

Now take that list of 25 candidates (so we'll display a bit under half of them as hot spots), take the value of the brightest one (let's say an actual vehicle), and throw anything within 10% of that value (so in this case E6/230/whatever) into a list. If we have more than 10 (or whatever number of hot spots is selected to be shown), randomly select 10 and put them out there. If we have fewer, then display the 10-percent list and sequentially go down the remaining list putting up markers. You will certainly get false positives, you will certainly miss real targets at distance, and in general you will get a fairly reasonable hot spot tracker that's not god-like. Running a hot spot update every 1/4-1/2 second seems about right. You may see more flickering and jumping than in the real videos if you have a ton of similar IR values, but overall I think most of the time it'd work pretty close.

TL/DR: If we have static IR maps (gradient black to white, #00-FF/0-255, WHOT) of everything in game:

1) Determine the average brightness of everything in the FOV of the sensor.
2) Anything below the average brightness is tossed.
3) Anything left must be above a certain limit to generate contrast against the background (user selectable or fixed, either way same idea); if it's not, it gets tossed.
4) What's left is checked to see if it's too big (user selectable or fixed, either way same idea). If it is, it gets tossed.
5) What's left is checked to see if it's too small (user selectable or fixed, either way same idea). If it is, it gets tossed.
6) What's left is rank ordered by brightness.
7) Take the top ten percent and randomly select ten (or whatever the setting for the number to display is) of them, and show them as hot spots. If the top ten percent holds fewer than the number of hot spots to display, just display the 10 brightest items as hot spots.

-Jenrick
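The TL/DR steps above can be sketched in a few lines of Python, working on pre-segmented bright regions rather than raw pixels. All names and threshold values here are invented for illustration, and the scene average is approximated by the average over the detected regions.

```python
import random

def hot_spots(blobs, contrast_pct=25, min_size=1, max_size=400,
              max_shown=10, rng=random):
    """Sketch of the 7-step filter. `blobs` is a list of
    (brightness, size_px) tuples for contiguous bright regions already
    segmented out of the sensor FOV. Returns the blobs to mark."""
    if not blobs:
        return []
    # Step 1: average brightness (approximated over the blobs here).
    avg = sum(b for b, _ in blobs) / len(blobs)
    # Steps 2-3: must beat the average by the contrast margin.
    survivors = [(b, s) for b, s in blobs
                 if b > avg * (1 + contrast_pct / 100)]
    # Steps 4-5: size bounds, tossing buildings (too big) and noise (too small).
    survivors = [(b, s) for b, s in survivors
                 if min_size <= s <= max_size]
    # Step 6: rank by brightness, hottest first.
    survivors.sort(key=lambda bs: bs[0], reverse=True)
    # Step 7: random pick from the top 10% if it's deep enough,
    # otherwise just the N brightest.
    top = survivors[:max(1, len(survivors) // 10)]
    if len(top) > max_shown:
        return rng.sample(top, max_shown)
    return survivors[:max_shown]

# A dim field, a huge hot building (size 5000), and two warm vehicles:
blobs = [(250, 10), (240, 8), (200, 5000), (30, 4), (20, 4), (20, 4)]
```

Re-running this every 1/4-1/2 second, as the post suggests, would reproduce the continuous add/drop behavior seen in the videos, including some flicker when many blobs share similar brightness.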