We don’t really want intelligent machines

On McKinsey’s website, leading artificial-intelligence researcher Stuart Russell explains in a conversation with James Manyika why a new approach to AI is necessary. It is a very interesting read, and the final part of the Q&A is included here. Prof. Russell pinpoints the current desire to apply AI as a partner in a decision-making relationship that remains dependent on the human. He rightly points out that the full potential is many times larger than we humans can ever imagine.
 
At MUUTAA, we look for that acceptable and beneficial application today, mindful of the ongoing journey and aware of the uncharted potential we will encounter along the way.
 
James Manyika: The notion you described, these kinds of provably beneficial systems—what makes them beneficial, by definition?
 
Stuart Russell: We don’t really want intelligent machines, in the sense of machines that pursue objectives that they contain. What we want are machines that are beneficial to us.
What we’re actually doing is, instead of writing algorithms that find optimal solutions for a fixed objective, writing algorithms that solve this problem: the problem of functioning as one half of a combined system with humans. This actually makes it a game-theoretic problem, because now there are two entities, and you can solve that game. And the solution to that game produces behavior that is beneficial to the human.
 
Let me just illustrate the kinds of things that fall out as solutions to this problem—what behavior do you get when you build machines that way? For example, asking permission. Let’s say the AI has information, for example, that we would like a cup of coffee right now, but it doesn’t know much about our price sensitivity. The only plan it can come up with, because we’re in the Georges V in Paris, is to go ask for a cup of coffee that costs €13. The AI should come back and say, “Would you still like the coffee at €13, or would you prefer to wait another ten minutes and find another cafe to get a coffee that’s cheaper?” That’s, in a microcosm, one of the things it does—it asks permission.
 
It allows itself to be switched off because it wants to avoid doing anything that’s harmful to us. But it knows that it doesn’t know what constitutes harm. If there was any reason why the human would want to switch it off, then it’s happy to be switched off because it wants to avoid doing whatever it is that the human is trying to prevent it from doing. That’s the exact opposite of the machine with a fixed objective, which actually will take steps to prevent itself from being switched off because that would prevent it from achieving the objective.
 
When you solve this game, where the machine’s half of the game is basically trying to be beneficial to the human, it will do things to learn more: asking permission allows it to learn more about your preferences, and it will allow itself to be switched off. It’s basically deferential to the human. In the case where you’ve got the fixed objective, which is wrong, the more intelligent you make the machine, the worse things get, the harder it is to switch it off, and the more far-reaching the impact on the world is going to be. Whereas with this approach, the more intelligent the machine, the better off you are.
Because it will be better at learning your preferences. It will be better at satisfying them. And that’s what we want. I believe that this is the core. I think there’s lots of work still to do, but this is the core of a different approach to what AI should’ve been all along.
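
To make the behaviour Russell describes a little more concrete, here is a minimal sketch of the underlying decision rule. It is our own illustration, not code from the interview: the assistant holds a belief over how much the human actually values an action (the €13 coffee) and compares acting immediately with asking first. The specific utilities, probabilities, and the small cost of asking are assumptions we picked for the example.

```python
# Illustration (ours, not from the interview): an assistant that is uncertain
# about the human's preferences compares acting immediately with deferring
# to the human first.

from dataclasses import dataclass

@dataclass
class Outcome:
    utility: float      # net value to the human if the action is taken
    probability: float  # assistant's belief that this is the true value

def expected_value_of_acting(belief):
    """Utility of just buying the coffee, averaged over the belief."""
    return sum(o.utility * o.probability for o in belief)

def expected_value_of_asking(belief, asking_cost=0.1):
    """Utility of asking first: the human approves only when the action
    actually helps them (utility > 0); otherwise the action is skipped."""
    return sum(max(o.utility, 0.0) * o.probability for o in belief) - asking_cost

# Assumed belief about the human's net value for a 13-euro coffee right now:
# perhaps they are happy to pay (+2), perhaps they find it overpriced (-3).
belief = [Outcome(utility=+2.0, probability=0.6),
          Outcome(utility=-3.0, probability=0.4)]

act = expected_value_of_acting(belief)   # 0.6*2 - 0.4*3 = 0.0
ask = expected_value_of_asking(belief)   # 0.6*2 - 0.1   = 1.1

print(f"act now: {act:.2f}, ask first: {ask:.2f}")
print("assistant defers to the human" if ask > act else "assistant acts")
```

The same comparison hints at the off-switch behaviour: as long as the assistant’s belief leaves room for the action being harmful (negative utility to the human), deferring to the human, including letting itself be switched off, has at least as high an expected value as pressing ahead.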
