
'Killer robots' with AI must be banned, urge Stephen Hawking, Noam Chomsky and thousands of others in open letter



Interesting read....

 

'Killer robots' with AI must be banned, urge Stephen Hawking, Noam Chomsky and thousands of others in open letter

 

 

More than 1,000 robotics experts and artificial intelligence (AI) researchers - including physicist Stephen Hawking, technologist Elon Musk, and philosopher Noam Chomsky - have signed an open letter calling for a ban on "offensive autonomous weapons", or as they are better known, 'killer robots'.

 

Other signatories include Apple co-founder Steve Wozniak, and hundreds of AI and robotics researchers from top-flight universities and laboratories worldwide.

 

The letter, put together by the Future of Life Institute, a group that works to mitigate "existential risks facing humanity", warns of the danger of starting a "military AI arms race".

 

These robotic weapons may include armed drones that can search for and kill certain people based on their programming, the next step from the current generation of drones, which are flown by humans who are often thousands of miles away from the warzone.

 

The letter says: "AI technology has reached a point where the deployment of such systems is - practically if not legally - feasible within years, not decades."

 

It adds that autonomous weapons "have been described as the third revolution in warfare, after gunpowder and nuclear arms".

 

It says that the Institute sees the "great potential [of AI] to benefit humanity in many ways", but believes the development of robotic weapons, which it said would prove useful to terrorists, brutal dictators, and those wishing to perpetrate ethnic cleansing, is not.

 

Such weapons do not yet truly exist, but the technology that would allow them to be used is not far away. Opponents, like the signatories to the letter, believe that by eliminating the risk of human deaths, robotic weapons (the technology for which will become cheap and ubiquitous in coming years), would lower the threshold for going to war - potentially making wars more common.

 

Last year, South Korea unveiled similar weapons - armed sentry robots, which are currently installed along the border with North Korea. Their cameras and heat sensors allow them to detect and track humans automatically, but the machines need a human operator to fire their weapons.

 

The letter also warns of the possible public-image impact on the peaceful uses of AI, which could bring significant benefit to humanity. It warns that building robotic weapons could provoke a public backlash, curtailing the genuine benefits of AI.

 

It sounds very futuristic, but this field of technology is advancing at a rapid rate, and opposition to the violent use of AI is already growing.

 

The Campaign to Stop Killer Robots, a group formed in 2012 by a coalition of NGOs including Human Rights Watch, works to preemptively ban robotic weapons.

 

It is currently working to get the issue of robotic weapons on the table at the Convention on Certain Conventional Weapons in Geneva, a UN-linked body that seeks to prohibit the use of certain conventional weapons such as landmines and blinding laser weapons - the latter were preemptively banned in 1995, as the Campaign hopes autonomous weapons will be.

 

The Campaign is trying to get the Convention to set up a group of governmental experts which would look into the issue, with the aim of having such weapons banned.

 

Earlier this year, the UK opposed a ban on killer robots at a UN conference, with a Foreign Office official telling The Guardian that they "do not see the need for a prohibition" of autonomous weapons, adding that the UK is not developing any such weapons.

 

 

http://www.independent.co.uk/life-style/gadgets-and-tech/news/killer-robots-with-ai-must-be-banned-urge-stephen-hawking-noam-chomsky-and-thousands-of-others-in-open-letter-10420169.html







Technological progress can't be stopped.

 

Judgement day is unavoidable.



I often get the feeling some researchers mistake Terminator for a blueprint instead of a warning. :doh:

 

Here's to hoping they'll listen to some of the more influential people on the planet.

Just as much as 1984 is already reality now - and remember, we were always at war with Eurasia.

What's the big deal, I mean if the machines rise up against us, all we have to do is go back in time and kill the person that created them, right? :P

 

On a serious note, I don't have a problem with automatic tracking and targeting, but I completely agree that, at the very least, there should always be one and preferably two human beings pressing the trigger simultaneously, so that the decision to kill is well thought through.
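
 

To make that concrete, here's a minimal sketch of such a "two-man rule" gate - purely hypothetical Python, with invented function and operator names, not how any real weapon system works as far as I know:

```python
import time

AUTH_WINDOW_SECONDS = 0.5  # arbitrary: both confirmations must arrive this close together

def fire_authorized(confirmations):
    """Return True only if two *distinct* human operators confirmed
    almost simultaneously. `confirmations` is a list of
    (operator_id, timestamp) tuples, newest last."""
    if len(confirmations) < 2:
        return False
    (op_a, t_a), (op_b, t_b) = confirmations[-2:]
    if op_a == op_b:
        return False  # one person pressing twice doesn't count
    return abs(t_a - t_b) <= AUTH_WINDOW_SECONDS

now = time.time()
print(fire_authorized([("operator_1", now), ("operator_2", now + 0.2)]))  # True
print(fire_authorized([("operator_1", now), ("operator_1", now + 0.2)]))  # False
```

The point of the second operator is the same as with launch keys on a missile submarine: no single person's snap judgement (or malfunction) is enough to fire.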

 

Having a machine decide whether or not to shoot is just insane.

 

I have to mention, though, that I think the real threat will come in the next 20 years, when nanobots start being used in warfare. A drone flying around engaging targets can be shot down with even unsophisticated weaponry by untrained people, and eventually it will run out of fuel and ammo. However, with nanotechnology (especially if it is self-replicating), that would be the end of the world. How would a normal person stop, or even avoid, something they are unable to see or hear?

 

Anyway, I'm off to do my morning 10 push-ups that Big Brother is asking me to do.



Threat analysis is a curious business.

 

AI that is well developed enough to be used for an autonomous weapon platform is going to be so disruptive to our everyday way of life that AI weapon platforms might turn out to be one of the more benign potentials.

 

Jobs are what keep people busy ... or they start thinkin'.


If our future AI overlords are watching, I totally disagree with this post! :v:

 

On a more serious note, you have to wonder if this will end up being like the land mine issue at some point in the future, if you get what I mean.

Well, the land mine issue grew out of control and even started to include cluster bombs - even sensor-fused ones, by the popular definition.

 

I honestly think the more near term threat is the elimination of human beings from the military. If unscrupulous, tyrannical governments no longer need rely on humans to enforce their will, it will let them off the leash completely.



Chappie...

 

It'd be cool if you could buy them. I'd buy Anita/Mia from Channel 4's Humans; she is uber hawt for being a potential killer robot with functional A.I.

 

If only it was legal to marry her - she wouldn't complain about doing the dishes or cleaning the house, and wouldn't say no when you wanna get jiggy wit her.

 

Just remember, if it all comes true, that I said I wanted to buy Anita/Mia first. The rest of you can have her psycho killer sister - she does come with some perks: she did work in a brothel for a short time before going on a killing spree, so at least you know she'll know how to get you turned on just seconds before she kills ya :)



Resistance is futile.... :D

 

So, after the ultimately successful (read: boring) "Alien vs. Predator" movie, will we now get "Borg vs. Terminator"?

 

Angels and ministers of grace defend us!

 

:lol:

 

 


 


Yes, it's obviously much better to have a flaky, emotional and unpredictable human controlling weapons than it would be to have an efficient and indifferent robot or AI program that only uses them under strict and incorruptible criteria. Never mind that humans would be in control of any robot anyway, so the point is moot really: pulling the trigger by programming an AI to do it or by paying a uniformed biological operator to do it amounts to the same thing.

 

To paraphrase Jeremy Clarkson: "If computers controlled all the weapons it would probably be a good thing, since they wouldn't ever work."


Yes, it's obviously much better to have a flaky, emotional and unpredictable human controlling weapons than it would be to have an efficient and indifferent robot or AI program that only uses them under strict and incorruptible criteria.

 

And the AI would be programmed by... flaky, emotional and unpredictable humans. Yeah, what's the worst thing that could happen? :music_whistling:

 

Never mind that humans would be in control of any robot anyway, so the point is moot really: pulling the trigger by programming an AI to do it or by paying a uniformed biological operator to do it amounts to the same thing.

 

So which is it? Will robots make a kill-decision based on their programming, or will humans always be in control?

 

I guess to the target it doesn't make a whole lot of difference anyway. But I like the idea that an AI might decide to kill me based on its programming even less than the idea that a guy might decide to kill me because he's been ordered to do so.

 

To paraphrase Jeremy Clarkson: "If computers controlled all the weapons it would probably be a good thing, since they wouldn't ever work."

 

Man I hope he's right about that. :thumbup:


Have they really used the term "offensive"? If yes, then they sure are dumb ;) Lawyers rejoice. War reporting has gotten ridiculous already (no side is attacking, both are 'defenders'), but with this, all countries will jump on that bandwagon.


And the AI would be programmed by... flaky, emotional and unpredictable humans. Yeah, what's the worst thing that could happen? :music_whistling:

 

Yep, but you can probably run an AI in simulation mode until it gets it right I guess.
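
 

As a toy illustration of that "simulation mode" idea - everything here is made up for the example, including the fake simulator and the pass criterion:

```python
import random

def run_episode(policy, seed):
    """Hypothetical stand-in for one simulated engagement. A real
    simulator would model targets, rules of engagement, sensor noise,
    etc.; here we just flip a biased coin."""
    return random.Random(seed).random() < policy["skill"]

def validate_in_simulation(policy, episodes=10_000, required_rate=0.9999):
    """Only certify the policy if it behaves correctly in (nearly)
    every simulated run - 'run it until it gets it right'."""
    successes = sum(run_episode(policy, seed) for seed in range(episodes))
    return successes / episodes >= required_rate

print(validate_in_simulation({"skill": 0.99}))  # False: a 1% error rate is still far too high
```

The catch, of course, is that the simulation only contains the situations its flaky, emotional and unpredictable human authors thought to put in it.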

 

So which is it? Will robots make a kill-decision based on their programming, or will humans always be in control?

 

I guess to the target it doesn't make a whole lot of difference anyway. But I like the idea that an AI might decide to kill me based on its programming even less than the idea that a guy might decide to kill me because he's been ordered to do so.

 

 

 

Man I hope he's right about that. :thumbup:

 

I guess an AI has to be autonomous by definition, and to pass a Turing test and qualify as an AI it must be indistinguishable from a real person. That basically makes it a person for all intents and purposes, which kind of makes the topic, and the proposed ban, a bit of a joke if you ask me. You might as well ban thunder, or the tide from coming in.

 

Far be it from me to second guess Stevo Hawking and that Chomsky guy (what kind of name is Noam anyway? It sounds like something you find under tree bark), but even if we do make an AI and then we don't weaponize it, guess what? Someone else will, and we will be at a disadvantage. Sad but true.

 

Besides, there's not much point in banning stuff, that just makes it more expensive.


Yep, but you can probably run an AI in simulation mode until it gets it right I guess.

 

Well there's been a lot of fiction about AI, trying to take different kinds of looks into the future.

 

Does HAL get it right? His (its?) decisions are simply based on his programming.

 

Data seems to be a pretty well-balanced AI, but it took several attempts to get him right, and his "brother" is responsible for quite a few deaths. And even so, Data struggles with his artificial heritage and tries to become more human.

 

The Terminator - is it an AI or just a cleverly programmed killer machine trying to mimic human beings?

 

I guess an AI has to be autonomous by definition, and to pass a Turing test and qualify as an AI it must be indistinguishable from a real person.

 

That's an interesting point. When we talk about DCS-controlled wingmen, we call them AI, but I guess we agree that their behavior is actually far from intelligent. They follow relatively simple routines in order to determine their course of action. So are they artificially intelligent, or just a piece of code?
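
 

For illustration, a "simple routine" of that kind usually boils down to a fixed priority list - this is a made-up sketch, not actual DCS code, and the states and thresholds are invented:

```python
# Hypothetical rule-based wingman "AI" - it never learns or reasons,
# it just walks a fixed list of if/else rules, top priority first.

def wingman_decision(state):
    if state["missile_inbound"]:
        return "evade"                      # survival first
    if state["fuel_fraction"] < 0.15:
        return "return_to_base"
    if state["enemy_in_range"] and state["weapons_remaining"] > 0:
        return "engage"
    return "follow_leader"                  # default behavior

print(wingman_decision({
    "missile_inbound": False,
    "fuel_fraction": 0.6,
    "enemy_in_range": True,
    "weapons_remaining": 2,
}))  # -> "engage"
```

Whether a lookup table like that deserves the label "intelligent" is exactly the question.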

 

That basically makes it a person for all intents and purposes, which kind of makes the topic, and the proposed ban, a bit of a joke if you ask me. You might as well ban thunder, or the tide from coming in.

 

Wasn't it Asimov who declared laws for an AI, like "never harm a human"?

 

Why shouldn't we globally agree that this is a good basic rule for each and every AI and that it's outlawed to create an AI that doesn't follow this rule, in turn outlawing any such AI itself?

 

As for the question of whether a true AI is indistinguishable from humans: I have problems imagining such a creature. If we can't tell it from a human, isn't it a human then?

 

Far be it from me to second guess Stevo Hawking and that Chomsky guy (what kind of name is Noam anyway? It sounds like something you find under tree bark), but even if we do make an AI and then we don't weaponize it, guess what? Someone else will, and we will be at a disadvantage. Sad but true.

 

I disagree. You might argue that it's a tough world and we need to defend ourselves. Then a guy from the other end of the world reads your post and realizes he needs to defend himself from you because you made it clear you want weaponized AI. Welcome to an exquisitely vicious circle. :bomb:

 

Besides, there's not much point in banning stuff, that just makes it more expensive.

 

Like drugs, military grade weapons, tax evasion and nuclear bombs?

 

(Interestingly enough, for one of those examples I strongly disagree with its current ban, but discussing that would go way off-topic).

 

(Okay, you guessed right. I always wanted to own a few nukes for self protection :D)

 

The first option, to the best of my knowledge.

 

By what definition?

 

I was asking the question specifically because it seemed to me JimmyBlonde made both points (AI will make their own kill decision vs. humans will always make the kill decision) in the same post.

 

But for the sake of argument - if an AI is not allowed to make a kill decision or go through with it, can it really be an AI then? :devil:


Well, the Terminator and HAL are science fiction. Isaac Asimov was a science fiction writer, so his "laws" don't mean diddly in the real world, even though they make great reading.

 

If you don't perpetuate the vicious circle, you become a victim of it; idealism is notoriously vulnerable to small arms fire. But it might be the case that a truly sentient AI would itself refuse to be a weapon.


but believes the development of robotic weapons, which it said would prove useful to terrorists, brutal dictators, and those wishing to perpetrate ethnic cleansing, is not.

 

Banning them allows terrorists and dictators to monopolize them. Due caution is certainly warranted, but I think the prediction being made might be biased.


 

