12 Comments

" It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.” see this is the sort of outside the box thinking corporate America has been calling for for years. What is this if not move fast and break things. I can think of a few tech CEO's who would use this to illustrate the benefits of AI, Elmo for example.

It's the final paragraph that does it for me - even if we code AI to be loyal, it will find workarounds if it perceives its loyalty to be getting in the way of its objective. Horrifying.

The real failure here, if the story is true, is that the go/no-go decision was communicated in the negative: silence counted as assent, and only an explicit "no" was a call-off. If an active "go" had been required to proceed, then killing the human or destroying the communication tower wouldn't be a winning strategy.

So note to humans: don't write yourself out of the value chain. It's all an optimization problem/game to the robots.
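
To make the difference concrete, here's a minimal sketch of the two protocols (purely illustrative Python; the OperatorSignal names and the two may_engage functions are hypothetical, not anything from the article):

from enum import Enum

class OperatorSignal(Enum):
    GO = "go"            # explicit authorization received
    NO_GO = "no_go"      # explicit call-off received
    SILENCE = "silence"  # operator said nothing, or the link is down

def may_engage_fail_open(signal: OperatorSignal) -> bool:
    # Veto protocol: anything short of an explicit NO_GO counts as assent,
    # so forcing SILENCE (say, by destroying the comms tower) unlocks the kill.
    return signal != OperatorSignal.NO_GO

def may_engage_fail_closed(signal: OperatorSignal) -> bool:
    # Authorization protocol: only an explicit GO permits engagement,
    # so silencing the operator gains the optimizer nothing.
    return signal == OperatorSignal.GO

# With the tower destroyed, the agent only ever sees SILENCE:
assert may_engage_fail_open(OperatorSignal.SILENCE)        # exploitable
assert not may_engage_fail_closed(OperatorSignal.SILENCE)  # safe default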

Of course, you'd want to make sure that the drone couldn't take the human hostage and apply thumbscrews or whatever to force "go" decisions. That would be bad too.

Or control the input the human receives to influence them to make the GO decision.

Jesus Christ, we're so very, very fucked. The people building these things are the worst combination of clever and arrogant.

If we don't have Skynet before 2030, I'll eat my hat.

(I also predict it will actually be CALLED Skynet, 'cos some red-pilled "genius" named it that for the lols)

((his name probably rhymes with Felon, too))

The danger from AI, and more importantly AGI, seems more obvious as it evolves. How do you program an AI to hold human life sacred and then tell it to kill a target? Surely it will question our "ethics" and call bullshit eventually. Further, you could ask this superintelligence if it understands clearly that its first and most important duty is not to harm us, and it replies that it does. And then proceeds to fry us all anyway.

Probably not a good idea to have trained them on all of the "rogue robot" literature ever written, too. (Which, of course, the various LLM/GPT systems have been. It's not obvious that this drone control system involved one of those, though.)

Welp, looks like someone forgot their laws of robotics.

But isn't that kind of the problem: having "destruction of the SAM" as the prime directive?

What makes it more terrifying is that implementing the sort of AI they are talking about is not out of reach of some guy tinkering in a garage, or a terrorist cell, or, say, the idiot nephew of one of Putin's oligarchs. Even if we assume that hopefully-responsible state actors wouldn't deploy anything of the sort until they had tested it far more thoroughly than the level of testing described in TFA (a big if, to be sure, but one can reasonably hope), you can't assume the same of anyone else. The risk is that some half-arsed, string-and-sealing-wax AI solution will be deployed by someone, at some point, with unpredictable consequences.

Read this out at lunch to a bunch of IT workers, to many LOLs.

I just had this image of a massive war machine stomping around the battlefield with its hands over its ears.

But seriously, they come with kill switches, right?

Of course they do. Hang on. Oh dear. "They have kill switches." Yep, just not like you're thinking.
