(NEW YORK) — A new book by two artificial intelligence researchers claims that the race to build superintelligent AI could spell doom for humanity.

In “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All,” authors Eliezer Yudkowsky and Nate Soares argue that AI development is moving too fast and without proper safety measures.

“We tried a whole lot of things besides writing a book, and you really want to try all the things you can if you’re trying to prevent the utter extinction of humanity,” Yudkowsky told ABC News.

Yudkowsky says major tech companies claim superintelligent AI — a hypothetical form of AI that could possess intellectual abilities far exceeding humans — could arrive within two to three years. But he warns these companies may not fully understand the risks.