
Interesting interview with Mike "Pako" Benitez about the USAF and DARPA AI program, especially about how hopeless it is for a human to try to defeat an AI in a dogfight.



AI in a simulated environment is not the same as AI in the wild. It's entirely possible to make an AI that is, for practical purposes, unbeatable in what is still effectively a video game. Thus far, the "AI fighter pilots" I've seen, like the one a year or so ago with the nonsense "hitscan" mechanics, have been very simplistic.


Yup, the hard part was always the environment. One of the hardest tasks for robots is seeing and interpreting what they see. They can manage in a lab or on a clearly marked factory floor, but put them in an environment where the enemy actively tries to make their job harder, and they're not so great. If the AI can't properly recognize and classify the target it's engaging, you can forget about it winning anything.

Plus, there's the small problem of an AI being inherently a black box, meaning its reactions can't be fully predicted. IFF is a hard enough problem for human pilots; if the AI somehow misidentifies an airliner as a MiG, you not only have a war crime or a blue-on-blue on your hands, there's also no real way of making sure it doesn't happen again (other than immediately junking all combat AIs), no matter how many people in charge get thrown to the wolves. Dilution of responsibility is the real risk of AI, and we already see it happening with how generative AIs scrape content off the internet. When it comes to weapons, there needs to be a clearly defined person who answers for whatever damage the weapon does. Avoiding war crimes doesn't seem to be on anyone's list these days, but there's also the matter of accidentally hitting friendly or neutral troops, and that's something to consider, too.


What I found most interesting from the interview is essentially an admission that "we need to throw out all the rules, because our enemies are not going to play by them."

Gunpowder took centuries before it fully replaced the sword, but it was nevertheless inevitable. I would be cautious about being dismissive of these new AI-powered capabilities.



I don't know if you realize that "we need to throw out all the rules, because our enemies are not going to play by them" is a really friggin' dangerous sentiment. Rules are what kept nukes from obliterating the world's major countries, exactly because both "us" and "our enemies" played by them, and there is a modicum of trust that they will continue to do so, mostly because anyone who doesn't is more or less assured to go down with their target. Autonomous AI is closer to nukes than to gunpowder in terms of destructive potential, mostly because it would remove direct responsibility from the person authorizing the use of weapons. You can't court-martial a bot, even if it outright violates orders, but at the same time it's not really possible to program an AI that you can be sure will follow orders 100% of the time. What do you do when an AI slaughters a bunch of friendly troops on its own? Also, I guarantee that as AI develops, so will anti-AI countermeasures, including ones that attempt to subvert it, maybe even to cause this exact scenario. Who is responsible then?

The problem with AI is that it's inherently unpredictable, and therefore should be considered indiscriminate. Realistically, an officer who orders an AI deployed to an area with civilians should, at that point, be guilty of a war crime, just like one who mines a civilian village. It shouldn't matter whether the bot actually kills a civilian, just like it doesn't matter if the village is later demined without killing anyone. Ideally, this would be clarified early enough that military AI development would go in the direction of augmenting humans rather than replacing them. That is the only responsible way of going about it, and it could be enforced just like the agreement not to nuke each other. Luckily, there seems to be a panic around AI similar to the one around nukes, so this sort of treaty might yet happen.

If you want to see an example of what "throwing out all the rules" leads to, look no further than the Russians in Ukraine. And even they didn't nuke the place when they got a bloody nose, so evidently, not all rules were abandoned. They probably thought the Ukrainians wouldn't bother with rules either, and that they'd get away with it like the US did in Iraq. Oddly enough, it worked out for them far less well than the MAD-induced nuclear standoff with NATO that's been going on since the Cold War. Instead of throwing rules out, I'd advocate making them, and giving the enemy a good incentive to stick to them as well.


I know it's dangerous; that's why I think it's interesting.

You can harp on about how the world order we grew up with is under threat of being violated left and right, but those rules you're talking about? They are only going to be as good as your ability to enforce them. If this genie can't be kept in the bottle, you damn well better have the biggest and baddest genie in the room.

You mention nukes; well, that's exactly why we need the most nukes, even if we never want to have to use them. It doesn't take much to realize we've already been comfortably living with monsters for some time now; what's one more? Yeah, I'm scared of this development and all that it implies, but I just think the best way to face it is to tackle it before it tackles you.



Here's the thing: you don't have to enforce a ban on combat AI with combat AI. You can use nukes, anti-AI countermeasures, and most of all, controlling the narrative. I know people love their Fallout quotes, but war has, in fact, changed. It's decided less and less on the battlefield, and more and more on the internet. If you control the narrative, you can make money materialize out of thin air, and you can get other countries to shore up your manufacturing. By conventional military wisdom, Ukraine should be in much worse shape than it is now. If combat AI is made out to be "evil" in the minds of ordinary people, then any side using it would harm its own narrative. This could significantly slow down the development of AI-based weapon systems.

Also, we need to start developing countermeasures ASAP, specifically targeting AI algorithms. AI is well known to be prone to hallucinations, and attacks exploiting this have been demonstrated on real systems, including fooling self-driving cars into dangerous maneuvers (such as "disappearing" a stop sign for the AI with just a few bits of tape). We have pretty effective means of fooling humans in that regard, and AI is universally worse at discerning things like that. AI won't be much of a threat if a guy in a ghillie suit (remember, those work even on humans to a degree) can just walk up to an autonomous tank across an open field and hit the off switch, with the tank 100% convinced it's looking at shrubbery. Same for aircraft: if you can get it to load the "bag of tricks" for fighting, say, a Hornet when you're actually flying an F-16, you could probably get on top of the drone, because it'll use the wrong tactics. I suspect we might see a whole range of technologies, such as dazzlers, active camo and radar jammers, aimed specifically at making the AI make wrong decisions.
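To make that stop-sign point concrete, here's a minimal sketch of the kind of gradient-based adversarial perturbation being described, using a generic pretrained image classifier as a stand-in for a recognition model. The model choice, the input file name, and the epsilon budget are my own illustrative assumptions, not anything from the interview or the posts above:

```python
# Minimal FGSM-style adversarial perturbation sketch (illustrative assumptions:
# torchvision's pretrained ResNet-18 as the victim model, a hypothetical
# "stop_sign.jpg" input, and an arbitrary perturbation budget epsilon = 0.03).
import torch
import torch.nn.functional as F
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

img = preprocess(Image.open("stop_sign.jpg")).unsqueeze(0)
img.requires_grad_(True)

# What the model sees before the attack
logits = model(img)
label = logits.argmax(dim=1)

# One gradient step that nudges every pixel in the direction that most
# increases the loss for the current prediction
loss = F.cross_entropy(logits, label)
loss.backward()
epsilon = 0.03
adversarial = img + epsilon * img.grad.sign()

with torch.no_grad():
    new_label = model(adversarial).argmax(dim=1)

print("before:", label.item(), "after:", new_label.item())  # often differs
```

The published stop-sign demonstrations worked along these lines, but with the perturbation constrained to a few printable stickers, which is why a handful of bits of tape was enough to change what the classifier reported.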


On 8/2/2023 at 8:38 AM, Dragon1-1 said:


I'm a bit late, but I like where you're going. I think that's interesting, and I agree. "Countering AI" will be less about somehow "jamming/preventing" it and more about tricking it into doing something stupid. Inflexibility will be a major handicap for any such system. People underestimate the complexity of human decision making; in your example of the ghillie suit, what is "obviously" a man in a suit to us is not "obvious" to a machine at all. It is not capable of intuitive "leaps".


I think this is why we won't actually see fully autonomous AI weapon systems for quite a while, if ever. The usefulness of any military system, AI or not, depends on how vulnerable it is to attack. If it turns out that making it hallucinate a target, or ignore a valid one, is too easy, then it will not be a useful military system. Also consider the context of the narrative war: if your AI is easily baited into committing a war crime, that will lose you the war faster than any amount of guns and bombs (more so because of the inevitable "killer robots" narrative that will be lapped up by everyone). Humans are, unfortunately, far from immune to this, but they do better than AI. As recent events show, a war can be fought (and won!) entirely on narratives.

AI might still have military uses, for instance in radars. Today, we have relatively simple filters for rejecting spurious and unwanted radar returns. A limited-purpose AI trained on a variety of radar pictures could, for instance, determine whether a low-RCS return is a bird or a Su-57. If that worked reliably, it could make stealth much less useful. Still, the decision whether to shoot or not will always have to be made by a human.
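For what it's worth, the limited-purpose classifier being described could be quite small. The sketch below is purely illustrative: the feature set (RCS, radial velocity, Doppler spread, scan-to-scan fluctuation) and the synthetic training data are my own assumptions standing in for real recorded radar pictures:

```python
# Toy sketch of a narrow "bird vs. low-observable aircraft" radar return
# classifier. All features and data here are synthetic assumptions for
# illustration; a real system would train on recorded, labelled returns.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical per-return features: RCS (dBsm), radial velocity (m/s),
# Doppler spread (Hz), scan-to-scan RCS fluctuation (dB).
birds = np.column_stack([
    rng.normal(-25, 5, n),    # very small RCS
    rng.normal(15, 10, n),    # slow
    rng.normal(40, 15, n),    # wing beats -> wide Doppler spread
    rng.normal(8, 3, n),      # strongly fluctuating return
])
aircraft = np.column_stack([
    rng.normal(-20, 5, n),    # low, but not bird-low, RCS
    rng.normal(250, 60, n),   # fast
    rng.normal(5, 3, n),      # narrow Doppler spread
    rng.normal(2, 1, n),      # steadier return
])

X = np.vstack([birds, aircraft])
y = np.array([0] * n + [1] * n)  # 0 = bird, 1 = low-observable aircraft

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

The appeal of keeping it this narrow is that the output is just a classification fed to the operator's display; the shoot/no-shoot decision stays with the human, exactly as argued above.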

