
Why Steven Pinker is wrong when he's sure that "The robot uprising is a myth"

I've just read the article "We're told to fear robots. But why do we think they'll turn on us? The robot uprising is a myth." by Steven Pinker, an excerpt from his latest book.

I disagree with what he said as I think he oversimplifies the issue.

Let me elaborate.

The first 75% of the article is dedicated to arguing that it's very unlikely that a strong AI will turn evil and want to kill us all. Actually, most serious people discussing strong AI are not worried about that scenario, so it's a bit of a straw man argument that he's making, I'm afraid.

But still, let me address some of the points that he makes:

"Being smart is not the same as wanting something"

First "wanting something" is really vague. If we use the pragmatic definition of having a goal, then it's easy to think of a piece of technology and hence an AI that has a goal, like a heat-seeking missile. The goal of an AI might have been set by some humans, but (i) that goal might be bad for most humans, or (ii) it might be interpreted by the AI in a way we didn't think of, or else (iii) the AI might develop sub-goals to reach its end goal that are dangerous to us. So there are more than one way it could go wrong even without the more distant risks that (iv) the AI might reject the assigned goal because it would find it futile or (v) that it might accept it at first but later give up on it for whatever reason.

__

"There is no law of complex systems that says intelligent agents must turn into ruthless conquistadors."

An AI doesn't need to want to do us harm in order to do us harm.

__

"Devouring the information on the internet will not confer omniscience either: Big data is still finite data, and the universe of knowledge is infinite."

"Even if an AGI tried to exercise a will to power, without the cooperation of humans, it would remain an impotent brain in a vat"

The internet is not just a way to access content, it's also a way to effect change, by brainwashing people into doing things, or, for instance, simply by paying them through freelance work platforms. A great account of how it could happen is the first chapter of Life 3.0 by Max Tegmark, which can be found here (free)

__

"As far as I know, there are no projects to build an AGI"

Well, we can refer to DeepMind's publicly stated mission: "Solve intelligence" and "We’re on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be taught how."

__

"the advances have not come from a better understanding of the workings of intelligence but from the brute-force power of faster chips and bigger data, which allow the programs to be trained on millions of examples and generalize to similar new ones."

Not just that, actually: there have also been better theories of AI, like the 1986 breakthrough paper on backpropagation co-authored by Geoff Hinton.
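To give a concrete idea of what that theoretical advance consists of, here is a minimal sketch of backpropagation on a tiny two-layer network. This is my own illustration in Python/NumPy, not taken from Pinker's article or from the 1986 paper: the error gradient is propagated backwards through the layers using the chain rule, which is the core idea that paper popularized.

```python
import numpy as np

# Tiny two-layer network trained by backpropagation (illustrative sketch).
# Task: learn XOR, a classic problem a single linear layer cannot solve.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)          # hidden layer
    out = sigmoid(h @ W2 + b2)        # output layer
    loss = np.mean((out - y) ** 2)    # mean squared error

    # Backward pass: propagate the error gradient with the chain rule.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)  # gradient at output pre-activation
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)                  # gradient pushed back through hidden layer
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0)

    # Gradient descent update.
    lr = 1.0
    W2 -= lr * d_W2; b2 -= lr * d_b2
    W1 -= lr * d_W1; b1 -= lr * d_b1

print("final loss:", round(float(loss), 4))
print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```

The point is simply that this learning rule is a conceptual contribution, not just "faster chips and bigger data".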

__

"The way to deal with this threat is straightforward: Don’t build one."

Easier said than done in a world where many teams are competing to advance AI, with some states eager to catch up on the world stage and, as a result, ready to back such enterprises, or at the very least happy to turn a blind eye.

__

Only in the remaining 25% does Steven Pinker finally address the main concern of people worrying about strong AI: "The danger, sometimes called the Value Alignment Problem, is that we might give an AI a goal, and then helplessly stand by as it relentlessly and literal-mindedly implemented its interpretation of that goal, the rest of our interests be damned."

Let me comment on his arguments:

"the existential threat to the human species of advanced artificial intelligence depend on the premises that

1) humans are so gifted that they can design an omniscient and omnipotent AI, yet so moronic that they would give it control of the universe without testing how it works;"

The fact that someone shows no symptoms of a disease that takes 30 days to make itself felt doesn't mean she hasn't already been infected. Similarly, the fact that you tested an AI and everything looks fine doesn't mean that (i) the AI cannot turn dangerous later; or that (ii) being super intelligent, the AI didn't make sure to pass these tests and pretend to be dumb, so as to retain access to the internet for instance, having realized that this would help it further the goal initially assigned to it.

That's the crux of the matter: we're not sure how to test how dangerous an AI could become, nor is it clear how a super AI could be safely contained. One just needs to look at the ways this researcher managed to creatively steal data from computers using nothing but noise, light, or magnets to realize that we shouldn't underestimate what an AI that had become super intelligent could do to escape our control.

__

"and 2) the AI would be so brilliant that it could figure out how to transmute elements and rewire brains, yet so ­imbecilic that it would wreak havoc based on elementary blunders of misunderstanding."

The issue is that human values are not commonly agreed upon, hard to describe in exact terms, and ever-evolving. A super competent AI could misinterpret our values simply because they're not clear in the first place.

Or it might understand them all too well, and we would be stuck with them forever, which could be terrible for our descendants, just as it would be terrible for us to be stuck with medieval values.

Or else an AI could become so competent at everything, including thinking, as to realize how futile our values are. If we ask it to keep us happy, what if, doing philosophy at a whole new level, it concludes that happiness doesn't exist and that its computing power is best used solving some cosmic mysteries?

__

"The ability to choose an action that best satisfies conflicting goals is not an add-on to intelligence that engineers might slap themselves in the forehead for forgetting to install; it is intelligence. So is the ability to interpret the intentions of a language user in context."

Again, what if the goals are not clear in the first place and hence get interpreted in unexpected ways we didn't think to rule out when specifying them? It happens routinely with humans. And whatever end goal we manage to specify unambiguously, the super AI would likely develop intermediate goals such as maximizing its access to resources. God knows what ways it could find to attain these.

Or, even while simply abiding by the law, it could use its sheer smarts to acquire a monopolistic position in many markets and disrupt our societies, just like some companies are doing, except at an even grander scale and speed, leaving fragile democracies, subject to lobbying of a whole new kind, without enough time to adapt.

__

"artificial intelligence is like any other technology. It is developed incrementally, designed to satisfy multiple conditions, tested before it is implemented, and constantly tweaked for efficacy and safety."

Again, easier said than done. The issue is that a potentially smarter-than-human AI could hide how smart it has become so as not to raise suspicion. And as soon as it exists, the boundary between being tested in a safe environment and being deployed in the world is very thin, and it's not clear that a super AI couldn't breach it in some way.

It's all the more worrying given that we're in a race with a lot at stake: with many teams rushing to "solve intelligence", it's definitely not crazy to suggest that at least some of them will not take all the necessary precautions all the time. And a single misstep, just once, can be enough to unleash a strong AI not aligned with our values.