Key points

A new book argues that artificial intelligence will lead to human extinction.

Its argument assumes that a superintelligent AI would want to eradicate humans.

I argue that this assumption is implausible, as is the overall scenario of extinction.

Nevertheless, AI safety is a serious problem that requires government regulation.

A new book about AI has a provocative title: If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. Eliezer Yudkowsky and Nate Soares argue that the development of artificial intelligence that exceeds human intelligence will almost certainly lead to the extinction of our species. How plausible is the scenario that they think will lead to the death of all people?

What an AI "Extinction Scenario" Might Look Like

The exti
