
The Coming Wave

Technology, Power, and the Twenty-first Century's Greatest Dilemma

About

NEW YORK TIMES BESTSELLER • An urgent warning of the unprecedented risks that AI and other fast-developing technologies pose to global order, and how we might contain them while we have the chance—from a co-founder of the pioneering artificial intelligence company DeepMind and current CEO of Microsoft AI

“A fascinating, well-written, and important book.”—Yuval Noah Harari

“Essential reading.”—Daniel Kahneman
“An excellent guide for navigating unprecedented times.”—Bill Gates

A Best Book of the Year: CNN, Economist, Bloomberg, Politico Playbook, Financial Times, The Guardian, CEO Magazine, Semafor • Winner of the Inc. Non-Obvious Book Award • Finalist for the Porchlight Business Book Award and the Financial Times and Schroders Business Book of the Year Award

We are approaching a critical threshold in the history of our species. Everything is about to change. 
 
Soon you will live surrounded by AIs. They will organize your life, operate your business, and run core government services. You will live in a world of DNA printers and quantum computers, engineered pathogens and autonomous weapons, robot assistants and abundant energy. 
 
None of us are prepared.
 
As co-founder of the pioneering AI company DeepMind, part of Google, Mustafa Suleyman has been at the center of this revolution. The coming decade, he argues, will be defined by this wave of powerful, fast-proliferating new technologies. 
 
In The Coming Wave, Suleyman shows how these forces will create immense prosperity but also threaten the nation-state, the foundation of global order. As our fragile governments sleepwalk into disaster, we face an existential dilemma: unprecedented harms on one side, the threat of overbearing surveillance on the other. 
 
How do we ensure the flourishing of humankind? How do we maintain control? How do we navigate the narrow path to a successful future?
 
This groundbreaking book from the ultimate AI insider establishes “the containment problem”—the task of maintaining control over powerful technologies—as the essential challenge of our age.

Excerpt

The Containment Problem

Revenge Effects

Alan Turing and Gordon Moore could never have predicted, let alone altered, the rise of social media, memes, Wikipedia, or cyberattacks. Decades after their invention, the architects of the atomic bomb could no more stop a nuclear war than Henry Ford could stop a car accident. Technology’s unavoidable challenge is that its makers quickly lose control over the path their inventions take once introduced to the world.

Technology exists in a complex, dynamic system (the real world), where second-, third-, and nth-order consequences ripple out unpredictably. What on paper looks flawless can behave differently out in the wild, especially when copied and further adapted downstream. What people actually do with your invention, however well intentioned, can never be guaranteed. Thomas Edison invented the phonograph so people could record their thoughts for posterity and to help the blind. He was horrified when most people just wanted to play music. Alfred Nobel intended his explosives to be used only in mining and railway construction.

Gutenberg just wanted to make money printing Bibles. Yet his press catalyzed the Scientific Revolution and the Reformation, and so became the greatest threat to the Catholic Church since its establishment. Fridge makers didn’t aim to create a hole in the ozone layer with chlorofluorocarbons (CFCs), just as the creators of the internal combustion and jet engines had no thought of melting the ice caps. In fact early enthusiasts for automobiles argued for their environmental benefits: engines would rid the streets of mountains of horse dung that spread dirt and disease across urban areas. They had no conception of global warming.

Understanding technology is, in part, about trying to understand its unintended consequences, to predict not just positive spillovers but “revenge effects.” Quite simply, any technology is capable of going wrong, often in ways that directly contradict its original purpose. Think of the way that prescription opioids have created dependence, or how the overuse of antibiotics renders them less effective, or how the proliferation of satellites and debris known as “space junk” imperils spaceflight.

As technology proliferates, more people can use it, adapt it, shape it however they like, in chains of causality beyond any individual’s comprehension. As the power of our tools grows exponentially and as access to them rapidly increases, so do the potential harms, an unfolding labyrinth of consequences that no one can fully predict or forestall. One day someone is writing equations on a blackboard or fiddling with a prototype in the garage, work seemingly irrelevant to the wider world. Within decades, it has produced existential questions for humanity. As we have built systems of increasing power, this aspect of technology has felt more and more pressing to me. How do we guarantee that this new wave of technologies does more good than harm?

Technology’s problem here is a containment problem. If this aspect cannot be eliminated, it might be curtailed. Containment is the overarching ability to control, limit, and, if need be, close down technologies at any stage of their development or deployment. It means, in some circumstances, the ability to stop a technology from proliferating in the first place, checking the ripple of unintended consequences (both good and bad).

The more powerful a technology, the more ingrained it is in every facet of life and society. Thus, technology’s problems have a tendency to escalate in parallel with its capabilities, and so the need for containment grows more acute over time.

Does any of this get technologists off the hook? Not at all; more than anyone else it is up to us to face it. We might not be able to control the final end points of our work or its long-term effects, but that is no reason to abdicate responsibility. Decisions technologists and societies make at the source can still shape outcomes. Just because consequences are difficult to predict doesn’t mean we shouldn’t try.

Praise

“A heartfelt and candid exploration of what the future may hold for us . . . Eloquently articulated. Reading [The Coming Wave], what came to mind was Gramsci’s famous adage that what we need is ‘pessimism of the intellect, optimism of the will.’ To his great credit, [Mustafa] Suleyman has both.”—The Guardian

“[A] sweeping look at the future of artificial intelligence and other transformative technologies . . . [Mustafa] Suleyman is intimately familiar with the technologies, companies and personalities at the heart of the A.I. revolution.”—The New York Times

“Dazzling . . . You have by now read a great deal of both hype and doom-mongering on the subject [of AI]. But Suleyman’s is the book you cannot afford not to read. . . . Brilliant.”—Niall Ferguson, Bloomberg

“Brilliant . . . confronts what may be the most crucial question of our century: How can we ensure that the breathtaking, fast-paced technological revolutions ahead create the world we want?”—Eric Lander, founding director, Broad Institute of MIT and Harvard

“An erudite, clear-eyed guide both to the history of radical technological change and to the deep political challenges that lie ahead.”—Anne Applebaum, Pulitzer Prize–winning historian

“Extraordinary . . . utterly unmissable.”—Eric Schmidt, former CEO, Google

“Calm, pragmatic, and deeply ethical . . . enthralling reading.”—Angela Kane, former UN under-secretary-general

“Sharp, compassionate, and uncompromising.”—Qi Lu, former COO, Baidu; former EVP, Microsoft Bing

“Truly remarkable, ambitious, and impossible to ignore . . . a persuasively argued tour de force.”—Nouriel Roubini, professor emeritus, New York University

“A practical and optimistic road map.”—Stuart Russell, professor of computer science, University of California, Berkeley

“A panoramic survey and a clarion call to action . . . Everyone should read it.”—Fei-Fei Li, co-director, Stanford’s Institute for Human-Centered AI

“A brave wake-up call . . . indispensable reading.”—Tristan Harris, co-founder, Center for Humane Technology

“An extraordinary and necessary book . . . One leaves energized and thrilled to be alive right now.”—Alain de Botton, philosopher and bestselling author

“Deeply researched and highly relevant.”—Al Gore, former Vice President of the United States

“Read this essential book to understand the pace and scale of these technologies.”—Ian Bremmer, founder, Eurasia Group and bestselling author of The Power of Crisis

“Thought-provoking, urgent and written in powerful, highly accessible prose.”—Erik Brynjolfsson, director, Stanford Digital Economy Lab

“Deeply rewarding and consistently astonishing.”—Stephen Fry, actor, broadcaster and bestselling author

“Realistic, deeply informed, and highly accessible.”—Jack Goldsmith, Learned Hand Professor of Law, Harvard University

Author

Mustafa Suleyman is the CEO of Microsoft AI. Previously he co-founded and was the CEO of Inflection AI, and he also co-founded DeepMind, one of the world's leading AI companies.
