To what extent might Stephen Hawking and Elon Musk be right about the dangers of artificial intelligence?


Suzanne Sadedin, an evolutionary biologist, writes: I think they are right that AI is dangerous, and they are dangerously wrong about why. I see two fairly likely futures.

Future 1: AI destroys itself, humanity and most or all life on earth, probably much sooner than 1,000 years from now.

Future 2: Humanity radically restructures its institutions to empower individuals, probably via transhumanist modification that effectively merges us with AI. We go to the stars.

Right now, we are headed for Future 1, but we could change this. Much as I admire Elon Musk, his plan to democratise AI actually makes Future 1 more, not less, likely.

Here’s why:

There’s a sense in which humans are already building a specific kind of AI; indeed, we’ve been gradually building it for centuries. This kind of AI consists of systems that we construct and endow with legal, real-world power. These systems create their own internal structures of rules and traditions, while humans perform fuzzy brain-based tasks specified by the system. The system as a whole can act with an appearance of purpose, intelligence and values entirely distinct from anything exhibited by its human components.

All nations, corporations and organisations can be considered this kind of AI. I realise at this point it may seem like I’m bending the definition of AI. To be clear, I’m not suggesting organisations are sentient, self-aware or conscious, but simply that they show emergent, purpose-driven behaviour equivalent to that of autonomous intelligent agents. For example, we talk very naturally about how “the US did X”, and that means something entirely different from “the people of the US did X” or “the president of the US did X”, or even “the US government did X”.

These systems can be entirely ruthless toward individuals (just check the answers to “What are some horrifying examples of corporate evil/greed?” and “What are the best examples of actions that are moral, even uplifting, but illegal?” if you don’t believe me). Such ruthlessness is often advantageous, even necessary, because these systems exist in a competitive environment. They compete for human effort, involvement and commitment. Money and power. That’s how they survive and grow. New organisations, and less successful ones, copy the features of dominant organisations in order to compete. This places them under Darwinian selection, as Milton Friedman noted long ago.
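A minimal toy simulation makes that selection dynamic concrete. To be clear, this sketch is not from Sadedin’s answer: the “ruthlessness” trait, the growth rule and all the numbers are illustrative assumptions, chosen only to show how copying plus differential survival behaves.

```python
import random

# Toy model: organisations carry a "ruthlessness" trait in [0, 1].
# More ruthless organisations grow faster; each generation the least
# successful one fails and is replaced by a noisy copy of the leader.
random.seed(0)

orgs = [{"ruthlessness": random.random(), "size": 1.0} for _ in range(20)]

for generation in range(50):
    # Growth proportional to ruthlessness (the assumed competitive
    # advantage), plus a little noise.
    for org in orgs:
        org["size"] *= 1.0 + 0.1 * org["ruthlessness"] + random.gauss(0, 0.01)

    # Selection step: the smallest organisation disappears and a new
    # entrant copies the largest one's trait, with slight mutation.
    orgs.sort(key=lambda o: o["size"])
    leader = orgs[-1]
    orgs[0] = {
        "ruthlessness": min(1.0, max(0.0, leader["ruthlessness"] + random.gauss(0, 0.05))),
        "size": 1.0,
    }

mean_ruthlessness = sum(o["ruthlessness"] for o in orgs) / len(orgs)
print(f"mean ruthlessness after selection: {mean_ruthlessness:.2f}")
```

Run it and the mean trait drifts toward its maximum. No individual organisation needs to intend anything; imitation of winners plus differential survival is enough to ratchet the trait upward.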

Until recently, however, organisations always relied upon human consent and participation; human brains ultimately made every decision, whether to manufacture 600 rubber duckies or to drop a nuclear bomb. So their competitive success has been somewhat constrained by human values and morals; there are not enough Martin Shkrelis to go around.

With the advent of machine learning, this changes. We now have algorithms that can make complex decisions better and faster than any human in practically any narrow domain. They are being applied to big-data problems far beyond human comprehension. Yet these algorithms are still stupid in some ways. They are designed to optimise specific parameters for specific datasets, but they are oblivious to the complex, long-term, real-world ramifications of their choices. [Continue reading…]
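As a minimal sketch of that blindness, imagine an optimiser that can see only one metric. The function names and functional forms below are invented for illustration; nothing here comes from the original article.

```python
# Hypothetical mismatch between a measured objective and an unmeasured
# real-world quantity. The optimiser tunes one knob to maximise what it
# can see (clicks) while something it cannot see (wellbeing) collapses.

def clicks(sensationalism: float) -> float:
    # The measured objective: more sensational content, more clicks.
    return 100 * sensationalism

def wellbeing(sensationalism: float) -> float:
    # The unmeasured ramification: assumed to fall as sensationalism rises.
    return 100 * (1 - sensationalism) ** 2

# Simple hill climbing on the visible metric only.
knob, step = 0.1, 0.05
for _ in range(100):
    candidate = min(1.0, knob + step)
    if clicks(candidate) > clicks(knob):
        knob = candidate

print(f"sensationalism: {knob:.2f}")            # driven to 1.00
print(f"clicks:         {clicks(knob):.0f}")    # maximised, as designed
print(f"wellbeing:      {wellbeing(knob):.0f}") # collapsed to 0, never measured
```

The optimiser does exactly what it was built to do; the damage lives entirely in the variables nobody told it to measure, which is the sense in which such systems are “still stupid”.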
