Recommended Posts

Posted (edited)

Hello! 

After I watched this awesome introduction:

https://youtu.be/IvDa0TCpDx0

 

 

I am hyped about the upcoming module as a whole... But I also realized that it needs a very capable second person to either fly the helo or manage the weapons. Not everyone is able to, or wants to, fly with other people online, so the AI will have to take this part... I had to think about Jester and imagined him sitting in the Hind... I just can't imagine that this will work properly. As for Jester, he is still not able to do anything regarding ground attack and has major flaws in air-to-air (although I think it is very well developed, it is just a very big task for an AI).

 

I just fear it is not doable in DCS at this time... Maybe I am too pessimistic, I don't know. What do you think?

 

 

Edited by Rhinozherous

i7-14700KF 5.6GHz Water Cooled /// ZOTAC RTX 4070 TI Super 16GB /// 32GB RAM DDR5 /// Win11 /// SSDs only

DCS - XP12 - MSFS2020

Posted

As much as I love Heatblur and am impressed with their work, Jester will never be good or realistic enough, because it's an AI. The RIO should handle the comms (though that's a crew contract and can be changed), he should be proactive in the use of the radar, and he should be in control of the intercept: can you imagine the AI giving orders to the human pilot and yelling at them when they don't follow those orders? Beyond the initial amusement, most players would just disable Jester and use a workaround.

 

The AI for the Hind should be simpler since, besides the ATGMs, the pilot can employ the gun and rockets himself. As gunner the AI does not need to do much, mostly handle the ATGM and spot for targets/threats I suppose; as pilot it would follow a flight plan, perhaps with some changes or unplanned commands (such as moving to a pre-defined BP or IP). That does not mean implementing it is an easy task.

Imo the hardest part to get right is the GUI: imagine there are a dozen targets in front of you. How do you tell the gunner to launch an ATGM at a specific target? If the gunner is human it's easy, a basic talk-on would do, but with the AI? I'm looking forward to seeing what ED has come up with to solve this kind of issue.

  • Like 2
  • Thanks 1

"Cogito, ergo RIO"
Virtual Backseaters Volume I: F-14 Radar Intercept Officer - Fifth Public Draft
Virtual Backseaters Volume II: F-4E Weapon Systems Officer - Scrapped

Phantom Articles: Air-to-Air and APQ-120 | F-4E Must-know manoeuvre: SYNC-Z-TURN

Posted

New modules are what push new tech in DCS. It doesn't look possible because ED usually doesn't do it until they absolutely have to.

 

Hoping for good results, as we REALLY need an increase in AI capability in this sim.

  • Like 2
Posted
11 hours ago, Karon said:

As much as I love Heatblur and am impressed with their work, Jester will never be good or realistic enough, because it's an AI. The RIO should handle the comms (though that's a crew contract and can be changed), he should be proactive in the use of the radar, and he should be in control of the intercept: can you imagine the AI giving orders to the human pilot and yelling at them when they don't follow those orders? Beyond the initial amusement, most players would just disable Jester and use a workaround.

 

The AI for the Hind should be simpler since, besides the ATGMs, the pilot can employ the gun and rockets himself. As gunner the AI does not need to do much, mostly handle the ATGM and spot for targets/threats I suppose; as pilot it would follow a flight plan, perhaps with some changes or unplanned commands (such as moving to a pre-defined BP or IP). That does not mean implementing it is an easy task.

Imo the hardest part to get right is the GUI: imagine there are a dozen targets in front of you. How do you tell the gunner to launch an ATGM at a specific target? If the gunner is human it's easy, a basic talk-on would do, but with the AI? I'm looking forward to seeing what ED has come up with to solve this kind of issue.

 

I suppose most games do this by visually labelling stuff: displaying a big green box around Petrovich's selected target, and a button or two to cycle Petrovich's targets.

But I also suppose that is not what simmers want, as it puts a graphic element in the world that a real pilot would not see. 

 

  • Like 1
Posted (edited)
1 hour ago, malcheus said:

 

I suppose most games do this by visually labelling stuff: displaying a big green box around Petrovich's selected target, and a button or two to cycle Petrovich's targets.

But I also suppose that is not what simmers want, as it puts a graphic element in the world that a real pilot would not see. 

 

I don't think most simmers will have a problem with graphical overlays as long as they are not mixed into the real systems. I can imagine something like this being acceptable for most players:

 

- Open the Petrovich menu.
- Select something like "attack targets at".
- A visual overlay with a cursor (or HMCS-like pointing) pops up and you can select a target region, which gets visually marked.
- A menu lets you set some kind of target classification the gunner should prioritize, and the size of the search area.
- After confirmation, the pilot gives a short order to the gunner that roughly describes the position of the selected area and the target type.
- The gunner confirms and engages the selected targets if he can see them (a rough sketch of this flow follows below).
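Purely as an illustration of the idea, something like this (every name here - TargetArea, Contact, order_gunner - is made up for the sketch, and has nothing to do with how ED would actually implement it):

```python
# Rough sketch of the proposed flow; all names are invented for illustration,
# not an actual DCS/ED API.
import math
from dataclasses import dataclass

@dataclass
class TargetArea:
    x_m: float            # centre of the player-selected search area (map metres)
    y_m: float
    radius_m: float       # search-area size chosen in the menu
    target_class: str     # e.g. "armor", "soft", "air_defence"

@dataclass
class Contact:
    x_m: float
    y_m: float
    kind: str
    visible_to_gunner: bool   # gunner only engages what he can actually see

def order_gunner(area: TargetArea, contacts: list[Contact]) -> list[Contact]:
    """Pilot gives a rough area plus a target class; the gunner filters on his own.
    Nothing gets auto-highlighted for the player, so it can't be abused for spotting."""
    def in_area(c: Contact) -> bool:
        return math.hypot(c.x_m - area.x_m, c.y_m - area.y_m) <= area.radius_m
    return [c for c in contacts
            if in_area(c) and c.kind == area.target_class and c.visible_to_gunner]
```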

 

The point here is:
- Due to the area selection, individual targets won't get auto-highlighted, since that could be abused for spotting.
- The visual selection overlay is decoupled from the aircraft's sensors and is not much more of an immersion killer than the Jester pop-up menu.
- The short audio orders from the pilot to the gunner, and the resulting delay in execution, will probably add to the immersion/realism.

Edited by Wychmaster
  • Like 1
Posted
13 hours ago, Rhinozherous said:

But I also realized that it needs a very capable second person to either fly the helo or manage the weapons.

 

The P is far easier to make than the V, where the WSO would also have been required to spot, aim and fire a gun at targets around your frontal hemisphere, and to do that while analyzing the target type in the proper time (searching for specific targets or recognizing a specific type among others) and engaging it.

 

In the P it is easier, as the AI doesn't need to care about the rotating gun. And anyway, that already exists in game for the Mi-24V as an AI unit (as for all units).

As for flying, we need AI that is better at low altitudes. You can't have the AI flying a helicopter at 80 meters like it does now, because it doesn't dare go lower and actually look for spots from which to engage targets. But the Ka-50, SA342 Gazelle, OH-58 Kiowa and AH-64 require this more than a Mi-24 pilot, whose flying is closer to how the current AI flies.

 

For almost two years I have been flying in formation with the AI Mi-24V, and I have gotten used to many of its flaws, but also to some surprisingly good things.

 

 

13 hours ago, Rhinozherous said:

Not everyone is able to, or wants to, fly with other people online. So the AI will take this part...

 

The most annoying part will be the relationship between AI and player in spotting.

That makes a big difference between flying co-op and flying alone.

 

In co-op with another player you talk to each other about what you have spotted; the AI is like "Target 50 km East!" and you are like "??????"

The best thing in co-op is that you can actually talk the other player onto the target, like "1 o'clock, about 1 km, right of the small tree group, see it?"

Whereas with the AI you are like "How can I tell you to look over there!?!?"

 

 

 

13 hours ago, Rhinozherous said:

I had to think about Jester and imagined him sitting in the Hind... I just can't imagine that this will work properly. As for Jester, he is still not able to do anything regarding ground attack and has major flaws in air-to-air (although I think it is very well developed, it is just a very big task for an AI).

 

One key thing I think we should have, kind of a cheat, is something like the green HMS circle in the Su-27S or MiG-29S: press a button to see it, then use it to point out to the AI where to fly and where to look. A generic 5-10 degree cone would be enough for that. You could easily aim visually by giving a heading like "20 degrees right", and the AI would concentrate its search in that direction, or fly that way.

 

The same way you would talk with a player: "Fly around that field" or "The target was somewhere there".

Then just have a radio-call-list kind of interface (in a pie shape like Jester has) to set the target types to engage, the altitude to fly, or the maneuver to perform.
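Just to show how coarse that pointing could stay, a toy sketch (the function names and the ai.search_sector call are my own invention, a stand-in for whatever command the game would actually expose):

```python
# Toy sketch of the "green HMS circle" pointing idea: snap the pilot's view
# direction to a coarse sector and hand only that to the AI.
def view_to_sector_deg(view_azimuth_deg: float, sector_width_deg: float = 10.0) -> float:
    """Round the head-tracking azimuth to the centre of a coarse sector."""
    return (round(view_azimuth_deg / sector_width_deg) * sector_width_deg) % 360.0

def point_ai(ai, view_azimuth_deg: float) -> None:
    # The AI gets "look/fly roughly that way", not an exact target point,
    # so it still has to do its own searching in that direction.
    ai.search_sector(azimuth_deg=view_to_sector_deg(view_azimuth_deg),
                     half_width_deg=5.0)   # hypothetical command
```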

 

13 hours ago, Rhinozherous said:

I just fear it is not doable in DCS at this time... Maybe I am too pessimistic, I don't know. What do you think?

 

I would never expect the AI to be so good that you could fly the mission without guiding it. That is why I think we seriously need assisting features to command the AI easily.

I am not fond of the old-games style of keyboard commands like "Go Left", "Go Down" and "Target SAM", as you end up micromanaging the AI pilot a lot just to get the basics done.

 

Only so much can be done for the AI, as close air support is an especially dynamic element where you need to be able to adapt to new threats and find new ways to fly to avoid getting shot down, or, even worse, end up somewhere else instead of delivering support in time.

 

It would be nice to hear whether ED will program the AI to fly the real patterns, maneuvers and tactics, and whether we will get a basic introduction to those, so you have an idea of what the AI is going to do, or what it can do, and it then actually does it.

 

Even more challenging will be the coordination with ground units, as you would need to be constantly talking with the officer responsible for commanding the units, in order to attack and deliver fire support the moment it is required. That is the demanding part: how to get an AI on the ground to talk to the player in the aircraft without scribbling out GPS coordinates every single time. Nothing is as clumsy as "Enemy in GRID 3423 2231". The AI ground units could instead use smoke grenades (the small hand-thrown ones, not those 300-meter-tall infinite-fuel pillars) to mark their own position, and then give the enemy position relative to that as a bullseye-style call: "Enemy armor, 120 degrees from our location, about 500 meters". Once you see the smoke, you know where to attack from and where to attack to.
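The math behind such a call is trivial, which is part of why I like it; a quick sketch (coordinate convention assumed, function names mine):

```python
# Resolving "enemy armor, 120 degrees from our location, about 500 meters" into a
# map position measured from the smoke mark. Plain polar-to-grid math.
import math

def offset_from_mark(mark_x_m: float, mark_y_m: float,
                     bearing_deg: float, range_m: float) -> tuple[float, float]:
    """x = east, y = north, bearing clockwise from north."""
    b = math.radians(bearing_deg)
    return (mark_x_m + range_m * math.sin(b),
            mark_y_m + range_m * math.cos(b))

# The call above, measured from the smoke at the map origin:
print(offset_from_mark(0.0, 0.0, 120.0, 500.0))   # roughly (433 m east, 250 m south)
```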

 

  • Like 2

i7-8700k, 32GB 2666Mhz DDR4, 2x 2080S SLI 8GB, Oculus Rift S.

i7-8700k, 16GB 2666Mhz DDR4, 1080Ti 11GB, 27" 4K, 65" HDR 4K.

Posted
On 4/12/2021 at 11:32 AM, Fri13 said:

 

The P is far easier to make than the V, where the WSO would also have been required to spot, aim and fire a gun at targets around your frontal hemisphere, and to do that while analyzing the target type in the proper time (searching for specific targets or recognizing a specific type among others) and engaging it.

 

 

 

That's not true; the AI in the P still needs to aim and control the sight for spotting and the ATGMs, so there's no real difference in what you shoot with. It's still mostly the same system, and the AI has to know how to use it.

Posted
Just now, Badger1-1 said:

That's not true; the AI in the P still needs to aim and control the sight for spotting and the ATGMs, so there's no real difference in what you shoot with. It's still mostly the same system, and the AI has to know how to use it.

 

The AI in the P only needs to use the ATGM. Sure, it does spotting without the scope, but it is not required to use the 3x/10x sight to engage targets at the same time as it would the YakB sight at close range, when you perform overflights and passes in tight, small areas.

 

The AI needs to perform different roles depending on how and when it does the spotting:
- With the naked eye it has the widest field of view and the ability to scan close-range areas in different ways and focus on different target types.
- With the YakB sight it aims at targets across a 120-degree FOV, pointing quickly from target to target and possibly correcting its aim once firing starts.
- With the same 3x/10x optical sight as in the V it scans areas at long range and marks target positions after first spotting them with the naked eye, so the pilot can find them easily.

 

Now remove the YakB element completely in the P variant. Only the naked-eye spotting and the optical sight remain.
A third of the required work was eliminated with that simple decision.

 

Right now, when searching and spotting targets with the naked eye, the WSO has two choices: either do nothing and just report it to the pilot, or report it and start using the optical sight if the range is suitable. The third option, deciding whether to use an ATGM or the YakB, is gone. There is no need to think about what to shoot when only the pilot can actually shoot the cannon or rockets, so it is just a matter of pointing out where the target is or marking it with the sight.
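Put as a crude decision sketch (the range brackets are just my illustrative guesses, not doctrine or anything ED has described), the whole YakB branch simply disappears in the P:

```python
# Crude sketch of the WSO decision tree described above. The range brackets are
# only illustrative guesses, not actual doctrine or ED logic.
def wso_decision(range_m: float, variant: str) -> str:
    if variant == "V" and range_m < 1500:
        # this whole branch - aiming and firing the movable gun - is gone in the P
        return "engage with the YakB through the movable-gun sight"
    if 1500 <= range_m <= 5000:
        return "track in the 3x/10x optical sight and employ the ATGM if worthwhile"
    return "naked-eye spotting only; report the target to the pilot"
```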

 

Example 1: you fly and pop up over a hill to an enemy position 1-2 km ahead; there are a few trucks and maybe an APC.

In the P the WSO can't do anything about that, as you are not going to waste an ATGM on targets you could have taken out effectively with the YakB, while the pilot can also try rockets. Now it is just the pilot aiming the whole helicopter to take them out with the 30 mm cannon or rockets. In the V the pilot wouldn't have the 30 mm cannon, but the YakB would have been enough, and the WSO would have had an easier time aiming and shooting the targets from the moment of spotting all the way to the overflight point, where the gun gimbal can no longer reach them.

 

Example 2: you pop up on the opposite side of large open crop fields with a similar setup at 4-5 km. This is more about rockets and the 30 mm cannon; with the YakB you would have had to close to that 1-2 km range to engage effectively, whereas now you can do it from 2-3 km with the 30 mm cannon or rockets.

 

Example 3: the same scenario, but the target area is a platoon with APCs and an ATGM or AAA vehicle. Now you start considering the ATGM and rockets as primary, with the 30 mm cannon as secondary. You don't want to get into a shooting competition with a ZSU-23-4 using your 30 mm cannon, as it is superior to you even with its 23 mm cannons.

 

Example 4: an enemy position at the edge of a tight open field between forests, your own troops 150-200 meters away in the opposite treeline, and you need to support them. Nearby there are MANPADS and AAA providing air cover, so you need to stay very low and approach from the only safe direction. Your opportunities to engage are very limited since you can't gain altitude; you need to come in fast and low and have only a couple of seconds to shoot. Flying straight at them is risky as you will take fire, but a turning approach or near pass makes it safer while still allowing the YakB (rockets are still the preferred option to saturate large areas). That again is co-op: pilot using rockets, WSO using the YakB, while the 30 mm cannon and ATGM are of less use.

 

When the AI doesn't need to handle one more weapon, it is easier to program. It is as simple as that: when the WSO can't use the ATGM, he just does naked-eye spotting and nothing else.

i7-8700k, 32GB 2666Mhz DDR4, 2x 2080S SLI 8GB, Oculus Rift S.

i7-8700k, 16GB 2666Mhz DDR4, 1080Ti 11GB, 27" 4K, 65" HDR 4K.

Posted
4 minutes ago, Badger1-1 said:

That's not true; the AI in the P still needs to aim and control the sight for spotting and the ATGMs, so there's no real difference in what you shoot with. It's still mostly the same system, and the AI has to know how to use it.

I tend to disagree. The difference between guiding an ATGM and aiming a 12.7 mm MG is huge imo. First of all, it's not a single device, it's two of them. In fact, the flight gauges and whatnot are placed on the left side to make room for the MG aiming and firing equipment, so there is a spatial offset that should be taken into account. I'm not an expert and I don't know the name, but google the Mi-24P and V front cockpits and you'll see what I mean.

Then, an MG is not a precise weapon, especially fired from several km away, whereas the ATGM is. Its task is suppression, especially for a low-calibre weapon, so the firing pattern and the goal are different. The aiming procedure is also not the same: the ATGM goes for the centre of mass every time, but the MG doesn't. The latter also has to compensate for the movement of the target (lead) and of the helicopter. The ATGM only goes pure.

All of this without mentioning the different sets of valid targets, the capabilities of the optics and the gimbal limits. At the end of the day, the only point in common is that the AI is slewing something around, but the same can be said for the turrets of a Ju-88.
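Just to show how different the maths is, a first-order lead estimate for the MG (deliberately oversimplified: constant bullet speed, no gravity drop, no own-ship motion, and the 850 m/s figure is only a ballpark, not a sourced value):

```python
# First-order lead angle for a crossing target; nothing like the real ballistics.
import math

def lead_angle_deg(target_speed_ms: float, crossing_angle_deg: float,
                   bullet_speed_ms: float = 850.0) -> float:
    crossing = target_speed_ms * math.sin(math.radians(crossing_angle_deg))
    return math.degrees(math.asin(crossing / bullet_speed_ms))

print(round(lead_angle_deg(15.0, 90.0), 2))   # ~1 deg for a vehicle crossing at 15 m/s
# A SACLOS ATGM, by contrast, is simply kept on the target the whole time
# ("goes pure"), so the operator model has no prediction step at all.
```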


"Cogito, ergo RIO"
Virtual Backseaters Volume I: F-14 Radar Intercept Officer - Fifth Public Draft
Virtual Backseaters Volume II: F-4E Weapon Systems Officer - Scrapped

Phantom Articles: Air-to-Air and APQ-120 | F-4E Must-know manoeuvre: SYNC-Z-TURN

Posted

ED appears to be working on voice control for both Petrovich and ATC. They alluded to it in one of the teasers leading up to the 2.7 release. 

 

TBH, I can see Jester improving due to this, too, both with Petrovich's "brain" and the voice interaction system.

Posted (edited)

About the AI... I just wish the chopper AI will at least be able to navigate through mountains without crashing into the mountainside. This is one of the main issues I have with the Huey. I love flying in formation, but the AI wingmen tend to crash a lot...

On 4/11/2021 at 3:57 PM, Morrov said:

Read this interview to get some insight on the AI they're building for the Hind.
It won't be fully ready for EA, but it is very promising.

 

Besides, that interview was gold! I wonder why it did not pop up more. A lot of information about other modules as well. Thanks for sharing!

Edited by Frag
  • Like 1
Posted
On 4/15/2021 at 2:41 PM, Karon said:

Then, an MG is not a precise weapon, especially fired from several km away, whereas the ATGM is. Its task is suppression, especially for a low-calibre weapon, so the firing pattern and the goal are different. The aiming procedure is also not the same: the ATGM goes for the centre of mass every time, but the MG doesn't. The latter also has to compensate for the movement of the target (lead) and of the helicopter. The ATGM only goes pure.

 

The targeting system the WSO uses for the YakB in the V has automatic ballistic calculation, similar to what the pilot has in the Mi-24P for the gun and rockets, so you get the correction calculation for the YakB. I am not 100% sure about target lead calculation, but it should be there as well when firing, because many sources say that in case of a malfunction you need to compute the lead yourself, since the sight's turning (yaw and pitch) alone doesn't generate the required target movement information. The difference is that where the pilot has fixed weapons (rockets or 30 mm cannon) and the reticle has to move in the gunsight to show where the calculated impact point is, the WSO in the Mi-24V has a fixed gunsight and the gun itself takes the proper correction to hit the aimed point. But the problem is, as with any CCIP mode without exact target range information (it is based on elevation), you are required to make some adjustments, unlike with the ATGM that, like you say, only goes "pure".

  • Like 1

i7-8700k, 32GB 2666Mhz DDR4, 2x 2080S SLI 8GB, Oculus Rift S.

i7-8700k, 16GB 2666Mhz DDR4, 1080Ti 11GB, 27" 4K, 65" HDR 4K.

Posted
14 hours ago, Fri13 said:

The targeting system the WSO uses for the YakB in the V has automatic ballistic calculation, similar to what the pilot has in the Mi-24P for the gun and rockets, so you get the correction calculation for the YakB. I am not 100% sure about target lead calculation, but it should be there as well when firing, because many sources say that in case of a malfunction you need to compute the lead yourself, since the sight's turning (yaw and pitch) alone doesn't generate the required target movement information.

 

How does the gunsight (and the ASP-17, which applies to our P) get ranging information? Does it just use barometric altitude (QFE in this case) or radar altitude and do trigonometry, kinda like the Viggen? I didn't think the Mi-24V/P had a laser rangefinder.

 

  • Like 2

Modules I own: F-14A/B, F-4E, Mi-24P, AJS 37, AV-8B N/A, F-5E-3, MiG-21bis, F-16CM, F/A-18C, Supercarrier, Mi-8MTV2, UH-1H, Mirage 2000C, FC3, MiG-15bis, Ka-50, A-10C (+ A-10C II), P-47D, P-51D, C-101, Yak-52, WWII Assets, CA, NS430, Hawk.

Terrains I own: South Atlantic, Syria, The Channel, SoH/PG, Marianas.

System:

GIGABYTE B650 AORUS ELITE AX, AMD Ryzen 5 7600, Corsair Vengeance DDR5-5200 32 GB, NVIDIA GeForce RTX 4070S FE, Western Digital Black SN850X 1 TB (DCS dedicated) & 2 TB NVMe SSDs, Corsair RM850X 850 W, NZXT H7 Flow, MSI G274CV.

Peripherals: VKB Gunfighter Mk.II w. MCG Pro, MFG Crosswind V3 Graphite, Logitech Extreme 3D Pro.

Posted (edited)
10 minutes ago, Northstar98 said:

 

How does the gunsight (and the ASP-17, which applies to our P) get ranging information? Does it just use barometric altitude (QFE in this case) or radar altitude and do trigonometry, kinda like the Viggen? I didn't think the Mi-24V/P had a laser rangefinder.

 

They do that trigonometry. There were a few units with laser rangefinders in Afghanistan, but that was not the generic fit like the Mi-24P we are getting (as the Mi8Pilot said in the interview, they model the most common configuration instead of everything that some units had).

 

So it will be very interesting to engage targets whose altitude is different from yours: simply "shoot", then correct your aim based on the error and try to hit the target "from the hip". Or think about cases where the targets are in a canyon and you come in from the top of the ridge or halfway up the hill... Or you engage someone at the top of a hill while you are flying over terrain that is 100 meters below their position.
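If it really is plain trigonometry, the sensitivity to a wrong target elevation is easy to show (assumed height-over-sine formula, not taken from any manual):

```python
# Assumed ranging geometry: slant range from height above the target and the
# sight depression angle. Just the basic triangle, not a documented system.
import math

def slant_range_m(height_above_target_m: float, depression_deg: float) -> float:
    return height_above_target_m / math.sin(math.radians(depression_deg))

print(round(slant_range_m(100.0, 3.0)))   # ~1911 m if the target sits at the assumed level
print(round(slant_range_m(150.0, 3.0)))   # ~2866 m if it actually sits 50 m lower
# At shallow sight angles a 50 m elevation error shifts the solution by nearly a kilometre.
```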

 

Edit: I forgot. The lead developer said something about "placing the crosshair on the target", then "waiting a moment, flying to put the crosshair on the cross and shooting when the difference is minimal", or something like that. I didn't understand what it meant, but it was about firing. Maybe there is some kind of range calculation between the WSO's sight and the pilot, so that if the WSO maintains the angle to the target while the pilot maneuvers, the system could get some kind of estimate of the slant range?

 

Edit 2: Here is the part I didn't understand, as it is said there is no ranging method other than visual estimation:

 

Q: How does the gunsight work?
A: The cross on the gunsight shows where the operator is looking on the URS setting. If you switch to the fixed cannon, the cross shows the computations from the air data computer and tells you where the rounds will land if you fire right now. The pilot initially places the fixed crosshairs on the target and waits for the mobile one to move. He then gradually straightens out the helicopter. When the angular offset between the mobile and fixed pippers is at its smallest, the pilot carefully moves to align both and only then opens fire. This applies to both cannons and rockets. There's a manual mode where you add corrections with the dials, but pilots don't use it often. Traditionally you just fire, watch where it lands and correct accordingly, as that's simpler. The pilot chooses the weapon on the dial for the appropriate sight calculations.
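My reading of that procedure, sketched out (the 0.3-degree release threshold is an arbitrary number of mine, purely for illustration):

```python
# Hold the fixed cross on the target, let the computed (mobile) pipper settle,
# and fire only when the two nearly overlap.
import math

def cleared_to_fire(fixed_az_el_deg: tuple[float, float],
                    mobile_az_el_deg: tuple[float, float],
                    max_offset_deg: float = 0.3) -> bool:
    d_az = fixed_az_el_deg[0] - mobile_az_el_deg[0]
    d_el = fixed_az_el_deg[1] - mobile_az_el_deg[1]
    return math.hypot(d_az, d_el) <= max_offset_deg
```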

 

Edited by Fri13
  • Like 1
  • Thanks 1

i7-8700k, 32GB 2666Mhz DDR4, 2x 2080S SLI 8GB, Oculus Rift S.

i7-8700k, 16GB 2666Mhz DDR4, 1080Ti 11GB, 27" 4K, 65" HDR 4K.

Posted

I think it referred to how the WSO can guide the pilot onto the target using the indicator in the gunsight. IRL, for attacks in rough terrain you would need to find the QFE at the target and use barometric altitude.

  • Like 1
Posted

Almost sounds like an optical ranging system.  I guess we'll find out on release.


i7 10700K OC 5.1GHZ / 500GB SSD & 1TB M:2 & 4TB HDD / MSI Gaming MB / GTX 1080 / 32GB RAM / Win 10 / TrackIR 4 Pro / CH Pedals / TM Warthog
