
So, why can't the WMD-7 track beyond 20nm?


J20Stronk

Recommended Posts

I've looked everywhere - I even sifted through a good bit of the included CN manual for further information and specs - but there doesn't seem to be an explanation for it. This is the only TGP with this limitation: the pod simply refuses to enter a tracking mode beyond 20 nm and remains aircraft-stabilized. I assume it uses image-based stabilization in Area Track and Point Track modes, and inertial correlation in INR or SPI-slaved modes, like other TGPs. Some have theorized previously that it requires laser ranging to get an accurate designation, but that still doesn't explain why it can't self-stabilize onto a point or area without creating a SPI; all track modes would use either image or inertial processing to keep the pod stabilized, so ranging is irrelevant.

 

Some input from the devs would be appreciated, thanks.

 

Also, a sort of follow-up question about Area and Point Track modes: is it supposed to be this gamey? The pod only ever enters Point Track on a "live", placed vehicle or static object. Trying to point-track a static map object/building, vehicle, or even civilian traffic will snap the pod toward it, but it stays in Area Track. It's as if it knows which targets are "real" player/AI units and only Point Tracks those, ignoring the rest.


Edited by J20Stronk

There are equations you can use to calculate how far a sensor can detect, track, and identify a target - I believe detection needs around 2 pixels on target, tracking more, and identification more still. Based on the FOV and the zoom, you can calculate how many pixels you would get, and assuming the resolution of the camera is as advertised, 20 nm ends up being about the right limitation. Look at the Nyquist-Shannon sampling theorem.
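The pixel math being described can be sketched roughly like this (a toy calculation; the FOV, sensor resolution, and target size below are illustrative assumptions, not the WMD-7's actual specs):

```python
import math

def pixels_on_target(target_size_m, range_m, fov_deg, sensor_px):
    """Approximate pixels subtended by a target: its angular size
    divided by the instantaneous FOV (total FOV / pixel count)."""
    ifov_rad = math.radians(fov_deg) / sensor_px   # radians per pixel
    ang_rad = target_size_m / range_m              # small-angle approximation
    return ang_rad / ifov_rad

# Illustrative numbers: 6 m vehicle, 0.5 deg narrow FOV, 640 px sensor
for nm in (10, 20, 30):
    px = pixels_on_target(6.0, nm * 1852.0, 0.5, 640)
    print(f"{nm:2d} nm -> {px:4.1f} px on target")
```

With these made-up numbers the target spans only around a dozen pixels near 20 nm, which is the flavor of the argument above: past some range the target simply covers too few pixels for an image tracker to hold onto.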


On 3/16/2024 at 5:59 AM, J20Stronk said:

Also, a sort of follow-up question about Area and Point Track modes: is it supposed to be this gamey? The pod only ever enters Point Track on a "live", placed vehicle or static object. Trying to point-track a static map object/building, vehicle, or even civilian traffic will snap the pod toward it, but it stays in Area Track. It's as if it knows which targets are "real" player/AI units and only Point Tracks those, ignoring the rest.

This is a common issue with DCS modules. You can tell which targets are live by trying to lock them up with Mavericks; IIRC the Shkval on the Su-25T (and maybe the Ka-50 as well) does the same thing, only locking onto live targets.


Posted (edited)
On 3/20/2024 at 12:52 PM, Napillo said:

There are equations you can use to calculate how far a sensor can detect, track, and identify a target - I believe detection needs around 2 pixels on target, tracking more, and identification more still. Based on the FOV and the zoom, you can calculate how many pixels you would get, and assuming the resolution of the camera is as advertised, 20 nm ends up being about the right limitation. Look at the Nyquist-Shannon sampling theorem.

So then why don't other targeting pods have this limitation? The LITENING, ATFLIR, and Shkval have similar resolution, and the LANTIRN has significantly worse, yet all of them can self-stabilize beyond 20 nm - whether in Area, Point, or INR track.


Edited by J20Stronk

  • 2 weeks later...

Inertial lock doesn't require imaging. It just computes where it is looking based on the aircraft's inertial reference and the azimuth and elevation recorded at the time it was locked. It can still range using the laser. It will drift, though (if it is implemented properly/realistically).

Area and Point Track require imaging, so those modes are more range-limited.
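That inertial-point logic can be sketched minimally like this, assuming a flat-earth ENU frame in metres and a laser range taken at lock time (the function names and the drift term are my own illustration, not anything from the pod's actual software):

```python
import math

def lock_point(ac_pos, az_rad, el_rad, laser_range_m):
    """At lock time: project the line of sight out by the lased range
    to get a fixed ground point (flat-earth ENU, metres).
    Azimuth is measured from north; elevation is positive looking down."""
    x, y, z = ac_pos
    dx = laser_range_m * math.cos(el_rad) * math.sin(az_rad)
    dy = laser_range_m * math.cos(el_rad) * math.cos(az_rad)
    dz = -laser_range_m * math.sin(el_rad)
    return (x + dx, y + dy, z + dz)

def inr_gimbal_cmd(ac_pos, target, drift_rad=0.0):
    """Each frame: re-point the gimbal at the remembered point from the
    aircraft's current position; 'drift_rad' models accumulating INS error."""
    dx = target[0] - ac_pos[0]
    dy = target[1] - ac_pos[1]
    dz = target[2] - ac_pos[2]
    az = math.atan2(dx, dy) + drift_rad
    el = math.atan2(-dz, math.hypot(dx, dy)) + drift_rad
    return az, el
```

Note that nothing here looks at an image: once the point is stored, the pod can keep pointing at it from any range, just with growing error as the INS drifts. An image tracker, by contrast, has to re-find the target in every frame, which is why it is range-limited while this scheme is not.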

Motorola 68000 | 1 Mb | Debug port

"When performing a forced landing, fly the aircraft as far into the crash as possible." - Bob Hoover.

The JF-17 is not better than the F-16; it's different. It's how you fly that counts.

"An average aircraft with a skilled pilot, will out-perform the superior aircraft with an average pilot."


7 hours ago, Tiger-II said:

Inertial lock doesn't require imaging. It just computes where it is looking based on the aircraft's inertial reference and the azimuth and elevation recorded at the time it was locked. It can still range using the laser. It will drift, though (if it is implemented properly/realistically).

Area and Point Track require imaging, so those modes are more range-limited.

Right, so shouldn't we at least be able to move the pod around while stabilized in INR, with minor drift? That's how it works on the DCS F-15E: without commanding a track using the AUTO Acq. Depress action, you're in CMPT/"INR" track. The pod is stabilized to a point derived by the pod and the aircraft's other sensors, such as the A/G radar and INS, and it stays relatively steady unless you're maneuvering. This is also how it works on other planes like the Hog and Hornet, though the "drift" isn't modeled on them.

 

Once you're in close, you can then enable the more precise AREA and POINT Track modes.


I thought it did? I need to try it again.


