Move Fast, Break Civilization?
The Coming Wave exposes the ugly truth: innovation without control leads to collapse—or control without freedom.
1. What If the Future Fights Back?
A builder’s take on Mustafa Suleyman’s “The Coming Wave”
Imagine you're building something powerful—maybe the next generative AI product, a robotics platform, or even a biohealth tool. You're solving real problems, moving fast, iterating weekly.
Now ask yourself: What if the very tools you're building become impossible to contain?
That’s not a Black Mirror prompt. It’s the core argument of Mustafa Suleyman’s The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma.
Suleyman isn’t some academic doomsayer. He’s the cofounder of DeepMind, the lab behind AlphaGo, AlphaFold, and currently CEO of Microsoft AI. He’s also been in rooms with government regulators, startup founders, and global leaders. This book is his urgent message to the rest of us:
“Technology has escaped containment. The challenge of our time is to bring it back within bounds—before it’s too late.”
What This Book Is About (In Plain Terms)
Here’s the big idea:
We're entering an era of dual-use exponential technologies—AI and synthetic biology in particular—that are growing so powerful, so fast, and so accessible that traditional systems can no longer contain them. Governments are too slow, companies are too incentivized to ship, and individuals have more leverage than ever before.
The first half of the book explores why this wave is uncontainable by default:
Powerful tech always leaks and spreads.
Tools like CRISPR and large language models are cheap, fast, and decentralized.
Even one bad actor could cause disproportionate harm.
The second half explores what we can do:
Though containment seems impossible, Suleyman argues it must be possible.
He lays out a 10-part plan—from building safer tech infrastructure to creating global institutions to manage risk.
The goal isn't to halt innovation, but to steer it intentionally.
Along the way, you’ll find stories from DeepMind’s AI breakthroughs, Ukraine’s drone war, pandemic labs, and early biotech labs—each illustrating just how fast power is shifting from institutions to individuals with access to code, compute, and cells.
What This Review Covers (So You Can Choose to Stay or Go)
This review is meant for people who build, fund, or follow the future. It’s especially for those asking hard questions like:
“How do we scale responsibly?”
“How much risk am I introducing when I ship faster than I can govern?”
“Can we innovate without slipping into dystopia?”
Here’s how the rest of the post breaks down:
Why This Book Matters Now
A quick look at why The Coming Wave resonates in 2025—and why it’s a must-read for tech builders and early-stage investors.The Core Thesis: Containment Is (Almost) Impossible
A look at why technology leaks, spreads, and fights back when you try to control it. You’ll see why CRISPR isn’t like nuclear tech—and why that matters.The Two Fronts: Intelligence and Life
How AI and synthetic biology are transforming into meta-technologies—and what that means for products, regulation, and risk.What Makes This Wave Different
Four features that make this moment historically unique: asymmetry, hyper-evolution, omni-use, and autonomy.The Incentives Trap
Why even well-meaning companies can’t stop racing forward—and what this means for startups and open-source communities.A State of Institutional Failure
Why nation-states and regulators are falling behind—and why tech companies now act more like micro-sovereigns.The Dilemma: Collapse vs Control
The moral fork in the road: inaction risks catastrophe; overreaction risks authoritarianism.The Path Forward: Containment Must Be Possible
Suleyman’s 10-point strategy for navigating the coming decades, from technical guardrails to new governance models.Takeaways for Builders, Founders, and Investors
Actionable insights for people working at the edge of innovation. What you can do today to build responsibly.Final Thoughts: Hopeful, Not Naive
Why I think this book should be on your shelf—not to scare you, but to make you a sharper, more ethical strategist.
TL;DR for the Time-Starved Reader
If you’re just skimming:
The Coming Wave is a sober, urgent, and readable call to rethink our relationship with the technologies we create.
AI and biotech aren’t just tools. They’re civilization-level forces—powerful enough to reshape economies, governments, and daily life.
The core dilemma: if we fail to contain these forces, we risk catastrophic events. But if we try to over-control them, we risk dystopia.
Suleyman offers a blueprint to thread that needle—layered, imperfect, but necessary.
“The coming wave cannot be stopped. But it can be contained. If we start now.”
2. Why This Book Matters Now
If you're building anything that touches code, compute, data, biology, or markets—The Coming Wave should be on your radar.
We’ve read books before that warned of disruption or dreamed of abundance. But this book isn’t either of those. It’s a strategy guide for an age of runaway tools—tools that can do immense good, but which may also spiral beyond our control.
And unlike the sci-fi handwaving you often find in books about “the future,” this one is grounded. Suleyman’s been in the rooms where it happened: cofounding DeepMind, scaling applied AI at Google, advising regulators and military leaders. He’s not speculating. He’s reporting.
“We are leaving the Anthropocene, the age in which human activity was the dominant influence on Earth. We are entering a new age: one defined by nonhuman agency.”
Let that sink in.
This isn’t just about new tech stacks or new markets. It’s about nonhuman systems—AI agents, synthetic organisms, bioengineered materials—making decisions, taking actions, and shaping outcomes with or without us.
And that makes this book feel urgent. Because while most product teams focus on the next sprint or roadmap milestone, Suleyman is looking several orders of magnitude ahead—and asking:
What happens when the tools we build begin building themselves?
A Snapshot of Where We Are in 2025
Let’s put this in context. As of now:
GPT-5 is rumored to be training. Claude 3.7 and Gemini 2.5 are in active use across enterprises.
Synthetic biology startups are creating designer enzymes and self-healing materials.
AI copilots are writing code, composing music, crafting campaigns, and generating research-grade simulations.
The cost to train powerful models or edit genes continues to drop dramatically.
In other words: the wave is no longer coming. It’s already here.
What Suleyman adds is structure: a mental map of where this is heading, how fast, and what’s at stake. He makes the case that if we don’t find ways to contain and shape these technologies—not stop them, but guide them—we risk sleepwalking into catastrophe or authoritarianism.
Where This Book Fits on the Map
To help position it: The Coming Wave is not techno-utopian (The Second Machine Age, Abundance), and it’s not wholly alarmist (Life 3.0, Surveillance Capitalism).
It’s closer to:
Tristan Harris meets George Kennan — a blend of ethical urgency and realpolitik strategy.
A sibling to The Age of AI (Kissinger, Schmidt, Huttenlocher) but more focused on practical governance and product-level implications.
A step beyond Tools and Weapons (Brad Smith), in that it assumes tech companies are no longer just companies—they’re becoming sovereign-like actors.
And for those of us in the product, founder, or early-stage world, this lens matters. Because it asks:
What does “product-market fit” look like when the product is a self-improving machine and the market is civilization?
Who Should Read This
Founders wrestling with tradeoffs between speed and safety.
Product managers designing AI-first systems and thinking about user control.
Investors making bets on frontier tech and wondering how regulation and ethics might shape outcomes.
Engineers and researchers who want to build responsibly—but also don’t want to be left behind.
And yes, policymakers and the “concerned public,” too—but this review is for the builders.
Because the stakes aren’t just moral. They’re strategic.
If you don’t understand the coming wave, you might be building on sand.
3. The Core Thesis: Containment Is (Almost) Impossible
The book opens with a bold claim: we won’t be able to contain the next wave of technologies—at least not in the way we’ve imagined.
In Chapters 1 through 3, Suleyman makes the case that containment—meaning the ability to prevent or limit the harmful spread and use of powerful tech—is no longer feasible by traditional means. He walks us through historical examples, systemic forces, and current trends to show how and why these technologies tend to escape, evolve, and multiply once released.
This isn't theoretical. It's how technology actually behaves in the real world.
As Suleyman puts it:
“Technologies always leak. They are copied, repurposed, iterated. Their effects compound and amplify. Containment is not a feature of how we’ve historically treated invention—it is the exception.”
That’s the core problem this book is trying to confront.
Technology Doesn’t Just Spread. It Proliferates.
In Chapter 2, Suleyman shares the story of the internal combustion engine. The first version was patented in 1876. Today, we have over a billion cars on the planet.
But that story is not just about demand. It’s about diffusion. One idea—burning fuel inside a chamber to create motion—spawned a global infrastructure, transformed geopolitics, and became embedded in everything from trucks to tanks to lawnmowers.
The same dynamic, Suleyman argues, is now at work with AI and synthetic biology. But with one key difference:
Unlike cars or even electricity, today’s technologies are cheaper to build, easier to copy, and faster to evolve.
You don’t need a factory to create dangerous power. You need code, compute, and maybe a few lab instruments.
Why Tech Leaks: The Four Forces That Undermine Control
Suleyman identifies four reasons why containment fails—especially with dual-use technologies:
Low Barriers to Entry – Tools like CRISPR kits or open-source AI models are becoming accessible to graduate students, hobbyists, and startups worldwide.
Decentralized Development – Innovation no longer requires institutions. Small teams—or even solo developers—can ship globally.
Asymmetric Leverage – A small group with enough intent and technical skill can cause massive harm, from misinformation to engineered pathogens.
Incentives to Move Fast – Governments and companies feel constant pressure to stay ahead. This creates a “race to deploy,” not a pause for reflection.
“We’ve created a world in which risk scales faster than safety.”
These forces don’t just apply to fictional threats. They already apply to real ones—like AI-generated misinformation, ransomware attacks, or synthetic viruses.
The Revenge Effect
One of the most powerful concepts in the book is the revenge effect—when a technology meant to solve a problem creates new or worse ones. This term, originally coined by Edward Tenner, is given new weight here.
Suleyman’s examples:
Antibiotics led to resistant superbugs.
CFCs solved refrigeration but depleted the ozone layer.
Opioids were created for pain relief and sparked a global crisis.
The lesson: technologies don’t stay in the box they were designed for.
And with AI or synthetic biology, the consequences could be far more systemic. If a rogue agent—human or autonomous—misuses the tools, it may not be reversible. A bad line of code in a biofoundry might not cause a glitch. It might cause a global outbreak.
Why We Can’t Regulate Our Way Out—At Least Not Yet
Suleyman is blunt about institutional capacity: it’s insufficient. Governments are slow, underfunded, and often technologically outmatched.
Even well-intentioned regulatory bodies lack the expertise, speed, and jurisdiction to track global development across AI labs, startup accelerators, GitHub repositories, and gray-market biology labs.
He doesn’t say regulation is useless—but he challenges the assumption that it can be the first or only layer of control.
Instead, he paints a picture of what happens when our most powerful tools have no natural off switch and no global enforcement layer.
“What we are building is not just powerful—it is slippery. And we’re not ready for what happens when it escapes our hands.”
So Where Does That Leave Us?
This section sets up the central tension of the book: we are entering a phase of human history where the tools we’ve created are starting to exceed our capacity to contain them—technically, institutionally, culturally.
But Suleyman doesn’t end this section in despair. He simply wants to snap us out of our complacency.
He’s telling builders and leaders:
Don’t assume someone else is handling the downside.
Don’t assume good intentions will be enough.
Don’t assume scale and safety can be decoupled.
This is the reality check before the rest of the book begins laying out what we can do—how we might contain the wave, even if imperfectly.
4. The Two Fronts: Intelligence and Life
If the coming wave has two towering peaks, they are AI and synthetic biology.
That’s the heart of Chapters 4 and 5 in Suleyman’s book—his assertion that we are not facing one exponential technology, but two, each evolving rapidly and in ways that challenge not only our governance systems but our basic understanding of what it means to build, to control, and to be human.
“We are building machines that can think and tools that can reprogram life. And both are accelerating at once.”
Let’s take them one at a time.
Artificial Intelligence: The Technology of Intelligence
Suleyman knows this world intimately. He co-founded DeepMind in 2010, which went on to build some of the most powerful AI systems in the world: AlphaGo, AlphaZero, and AlphaFold.
In Chapter 4, he tells the story of watching an AI agent learn to play Breakout—not by instruction, but by learning entirely from scratch. At first, it just moved the paddle randomly. Then it started returning the ball. Then it learned to carve a hole in the wall and let the ball bounce endlessly behind it.
“There was no code for how to do this. The AI discovered it on its own.”
That was more than a cool gaming milestone. It was a signal. This wasn’t just automation—it was emergent strategy.
From there, DeepMind developed AlphaGo, which defeated world champion Lee Sedol in 2016, and later AlphaZero, which trained itself to master chess, Go, and shogi without human examples—just rules and reinforcement.
This, Suleyman says, is meta-technology:
AI is not just a product.
It is a system that builds systems.
It can write code, optimize factories, analyze laws, craft arguments, and simulate users.
“AI is the first general-purpose technology of intelligence itself.”
And we’re only in the early chapters.
Why This Matters for Builders
You don’t have to work in a frontier lab to feel this shift. Already, LLMs are being plugged into:
Business workflows (via copilots and agents)
Recruiting tools and hiring pipelines
Education, legal research, and healthcare triage
Political campaigning and influence operations
This means anyone with an API key and a few hours of prompt tuning can build something capable of reasoning, persuading, creating, and optimizing—at least in narrow contexts.
But as capabilities widen, so does risk.
“What happens when reasoning machines, optimized only for engagement or efficiency, start shaping the structure of society itself?”
That’s not just a UX question. It’s a civilization-level question. And it’s happening now.
Synthetic Biology: The Technology of Life
In Chapter 5, Suleyman switches to a quieter, less hype-filled—but equally radical—domain: synthetic biology.
We’ve heard of CRISPR. But what does it mean to program biology the way we program computers?
Suleyman walks us through the timeline:
In 1973, Cohen and Boyer splice together DNA from different organisms—the first recombinant gene.
In the 2000s, the Human Genome Project makes DNA legible at scale.
By the 2010s, tools like CRISPR-Cas9 make it editable—cheaply, precisely, and widely.
Now, biofoundries and labs can synthesize entire genomes or modify cells to produce drugs, food, fuels, or synthetic organisms.
This isn’t genetic engineering in the sci-fi sense. It’s real, and it’s commercial.
“Life has become programmable. And it is starting to scale.”
One of the most important concepts here is RNA programmability. Just as software is compiled to run on silicon, RNA can be programmed to instruct cells. The same mechanism that powered mRNA vaccines is now being used to imagine new ways of fighting cancer, aging, or creating living materials.
The Dual-Use Dilemma
Here’s the problem: every breakthrough has a shadow.
mRNA can deliver vaccines—or biological weapons.
LLMs can write poetry—or malware.
DNA synthesis can solve hunger—or build superbugs.
Suleyman stresses that synthetic biology is following the same curve as software—cheaper, faster, more modular, and more open. But biology has one property software doesn’t:
“Once released, a synthetic organism can self-replicate.”
That’s why he puts AI and biotech on the same level. They’re not just exponential. They are self-directed, scalable, and increasingly autonomous.
How They Converge
Though treated as separate domains, Suleyman argues these technologies are increasingly intertwined.
AI models are helping to simulate proteins, design molecules, and predict biological outcomes.
Synthetic organisms may one day be built to process data, sense environments, or interact with computers directly.
AI is already designing drugs, running labs, and writing code for DNA printers.
In short: intelligence is guiding biology, and biology is informing machines.
That’s not the future. It’s the active frontier.
For Builders: What’s the Call to Action?
If you’re working on LLM apps, healthcare AI, robotics, synthetic data, personalized medicine, or anything that touches intelligence or life—this is your wave.
And here’s what Suleyman would want you to ask:
Is what I’m building containable?
Is it safe by design?
Who else could use or abuse this?
What kind of guardrails or governance would I want if I didn’t trust myself?
Because if you don’t ask those questions now, someone else might ask them after it’s too late.
5. What Makes This Wave Different
You’ve probably heard the phrase “exponential technology” more times than you can count.
It’s usually a pitch line—something to excite investors or signal the inevitability of a product. But in The Coming Wave, Suleyman gives us something more specific and more sobering: four features that make this wave categorically different from anything we’ve seen before.
These are not buzzwords. They are warning signs—and they matter deeply for builders, because they explain why the standard playbooks for shipping, scaling, and governing are starting to break down.
“The coming wave has four unique qualities that make it nearly impossible to contain using traditional tools.”
Let’s walk through each one.
1. Asymmetry
In past revolutions—industrial, nuclear, even internet-scale—only governments or huge corporations had access to the tools of global impact.
That’s no longer true.
Now a single developer with access to an open-source LLM or CRISPR toolset can do what once required nation-states. Suleyman points to a striking case study from the early days of the Russia-Ukraine war:
A small Ukrainian drone unit, Aerorozvidka, used cheap, off-the-shelf drones to halt a 40-mile Russian convoy. They weren’t a formal military. They were engineers and hobbyists. And they changed the course of a battle.
This is asymmetry in action. Power has shifted—not just away from governments, but toward anyone with technical skills and ambition.
For builders, this means you have more leverage than ever—but so does every other actor, good or bad.
2. Hyper-Evolution
New tools aren’t just released once and updated occasionally. They are evolving constantly, often in ways their creators don’t fully understand.
This is especially true for AI systems:
LLMs train on user feedback and fine-tune themselves.
Agents are beginning to interact, plan, and learn in open environments.
Open-source models iterate daily on GitHub, Reddit, and Discord.
“We are no longer just designing systems—we are breeding them.”
Suleyman draws a sharp contrast with past technology cycles. A car engine doesn’t rewrite itself. A fridge doesn’t learn from its user. But AI systems do. And the more we push for autonomy, the faster this evolution accelerates.
If you’re shipping AI products today, the implication is this: you’re not just managing release cycles. You’re managing ecosystems.
3. Omni-Use
This is the “dual-use” problem, scaled up.
A model that can write compelling ad copy can also write extremist propaganda. A bio-agent that targets cancer can be repurposed to target ethnic traits. An agent trained for customer service can be deployed to manipulate political discourse.
“There is no clear line between civilian and military applications anymore.”
In the digital world, everything is remixable, forkable, and redeployable. That means intended use is almost irrelevant. What matters is access, intent, and imagination.
For founders and PMs, this is a design challenge and an ethical one. Do you know what your tool can become in the wild? If someone else had your stack, what would they build with it?
4. Autonomy
The final and perhaps most unsettling feature: these systems are beginning to operate with growing independence.
AI agents are already chaining tasks, accessing tools, writing and executing code, and interacting with external APIs. Synthetic biology is moving toward programmable cells that make decisions based on real-time signals.
“Autonomy is not about conscious machines. It’s about systems that operate without constant human control—and sometimes without human understanding.”
That’s a profound shift. We’re no longer talking about tools, but about partners, proxies, and potentially unpredictable actors.
This isn’t science fiction. If you've worked with autonomous agents, you’ve already seen the early signs: LLMs rerouting workflows, copilots pushing unintended outcomes, or models drifting over time.
Now imagine that power applied to finance, logistics, warfare, or biology—with the system writing its own instructions as it goes.
Why These Four Traits Matter Together
Each feature is powerful on its own. But together, they create a perfect storm:
Power is widespread (asymmetry),
Change is constant (hyper-evolution),
Intentions are ambiguous (omni-use),
and control is weakening (autonomy).
Suleyman's point isn’t that any one of these traits is catastrophic. It’s that they are mutually reinforcing. The more autonomy you give a system, the faster it evolves. The faster it evolves, the harder it is to contain. The harder it is to contain, the more likely it spreads to asymmetric actors. And so on.
That’s what makes the coming wave different.
It’s not just fast. It’s fundamentally ungovernable by old tools.
“This isn’t an arms race. It’s an escape velocity event.”
6. The Incentives Trap
You might be wondering: if the risks are this clear—superbugs, runaway AI, synthetic pathogens—why don’t we just slow down?
Why not pause, regulate, or self-limit?
Suleyman’s answer in Chapter 8 is brutal and familiar: because no one wants to be the first to stop.
This is the incentives trap. And it’s the most important idea in the book for anyone building or investing in tech.
“The logic of the coming wave is irresistible. Everyone has a reason to keep going. No one has a reason to stop.”
Let’s unpack that.
Everyone Has a Reason to Go Faster
Governments see AI and biotech as strategic assets. No one wants to fall behind in the next Cold War, whether it’s the U.S. vs. China or broader geopolitical blocs.
Corporations chase product and platform dominance. First-mover advantage still rules. In AI, the companies that train the largest models first get the best data, best customers, and best distribution.
Startups need investor traction and user growth. Building fast with open tools is often the only way to compete.
Open-source communities believe in access and decentralization. Many genuinely distrust centralized control.
Researchers are rewarded for publishing, not for slowing down. And there’s always another lab waiting to scoop a breakthrough.
Even well-meaning actors are caught in this trap. They don’t want to be reckless, but they also don’t want to be irrelevant.
It’s not that bad actors dominate. It’s that the system rewards speed and scale by default.
“There is no boardroom, cabinet, or lab where the incentive is to go slower.”
Game Theory, but Existential
This is classic prisoner’s dilemma logic. If I pause to verify safety, but my competitors don’t, I fall behind. If no one pauses, everyone may suffer—but from each player’s perspective, the short-term benefits outweigh the shared risk.
Suleyman calls this a “collective action trap,” and it echoes past examples:
Climate change: everyone benefits from slowing emissions, but no one wants to take the economic hit alone.
Nuclear arms: treaties took decades because no side wanted to unilaterally disarm.
Social media: platforms optimize for engagement even when it erodes public trust.
The difference? AI and biotech are faster. Cheaper. More distributed.
And unlike climate or nukes, these systems don’t need state-level actors to go wrong. A team of 5 engineers can fine-tune a dangerous model. A DIY lab can print synthetic DNA.
The AlphaGo Moment, Revisited
Suleyman returns to a key inflection point: when DeepMind’s AlphaGo defeated Lee Sedol in 2016.
This wasn’t just a technical milestone. It was a geopolitical one.
“In China, the match wasn’t about Go. It was about supremacy.”
After that, national investments in AI soared. China’s leadership declared AI a “national priority.” U.S. agencies, VC firms, and corporations followed suit. The global race was on.
This pattern repeats:
GPT-3 launches. Then comes GPT-4, Claude, Gemini.
Stable Diffusion goes open source. AI image models flood the market.
AlphaFold cracks proteins. Dozens of bio-AI labs rush to apply it.
Each step creates more incentive to keep going, more pressure to not be left behind.
Why This Matters for Builders
You might not be working on general-purpose AI or gene editing, but you’re still affected by this trap.
Maybe you're using models you don’t fully understand.
Maybe you're deploying tools to users who can’t predict the outcomes.
Maybe your competitors are forcing you to ship faster than you're comfortable with.
That’s the incentives trap at your level.
Suleyman doesn’t offer easy answers here. He just wants builders to see the water we’re swimming in—and understand that even responsible people are being pulled toward risky trajectories.
“This is not about bad actors. It is about good actors in a bad game.”
The challenge is not just technical. It’s systemic.
So What Can Be Done?
This chapter ends not with optimism, but with realism.
We’re not going to slow the wave with hope, good intentions, or Slack debates. The forces driving acceleration—competition, ambition, national interest—are powerful and persistent.
But that makes it even more urgent to design systems, incentives, and institutions that reward responsibility, transparency, and containment—not just scale.
That’s where the next section of the book turns: to the institutional cracks that widen as this wave accelerates.
7. A State of Institutional Failure
By now, Suleyman has made a compelling case that technology is advancing faster than we can control—and that almost no one has incentives to hit pause.
In Chapters 9 and 10, he shifts focus to another urgent issue: our institutions are not built for this moment.
“The very mechanisms we once relied on to manage risk—regulation, bureaucracy, treaties, checks and balances—are breaking under the strain.”
In other words, the systems meant to contain the wave are already leaking.
The Grand Bargain Is Cracking
Suleyman opens this section by revisiting the social contract of the nation-state. For the last few hundred years, governments have operated under a bargain: citizens give up some freedoms in exchange for safety, services, and stability.
We accept taxes, laws, and oversight.
In return, the state builds infrastructure, keeps peace, and regulates risk.
But what happens when the state can’t deliver on that promise anymore?
That’s not hypothetical. Suleyman points to recent stress tests:
COVID-19 revealed fragmented responses, slow supply chains, and outdated health infrastructures.
Cyberattacks like WannaCry and NotPetya shut down hospitals and logistics across multiple nations.
Trust in government, media, and scientific institutions continues to erode.
“We are watching the state’s capacity to contain risk fall behind the scale and speed of the threats.”
And into that void steps… tech.
Tech Companies as Micro-Sovereigns
In many ways, tech companies are now operating as sovereign-lite entities:
They set their own terms of service (effectively laws).
They issue currency (crypto, credits, points).
They control identity (logins, biometrics).
They influence borders (geo-blocking, infrastructure).
They shape public discourse (algorithms, moderation).
These firms control data, compute, talent, and platforms—all essential resources in the age of AI and synthetic biology. Governments increasingly rely on them to respond to crises, build public tools, and even define regulatory standards.
Suleyman isn’t demonizing tech here. He’s simply saying: power has shifted, and our political structures have not adapted.
“The most powerful institutions in the 21st century are no longer only governments. They are technical systems, shaped by engineers and governed by code.”
This should be a wake-up call for product leaders and founders: your roadmap may carry state-like consequences, even if you don’t think of it that way.
Fragility Amplifiers: When Things Go Wrong, They Go Wrong Fast
In Chapter 10, Suleyman introduces the idea of fragility amplifiers—events or systems that compound one another, creating cascading failure.
He gives real-world examples:
Cyberattacks that shut down hospitals or cripple shipping routes.
AI-generated misinformation that spreads faster than fact-checkers can respond.
Synthetic bio-leaks from poorly secured labs.
Labor automation that destabilizes economies faster than social systems can adjust.
These aren’t fringe scenarios. They’re previews. And they share a common feature: once they begin, they’re hard to stop.
“Our institutions were designed for linear risk, not compounding, exponential risk.”
In past decades, regulators could write laws that lasted 10 years. Now, an open-source model can go from prototype to global impact in weeks.
For Builders: What Happens When Governments Fall Behind?
If you’re building in a regulated space—healthcare, finance, education—you’ve probably felt the tension: move fast, stay compliant, but don’t expect your regulator to keep up.
What Suleyman is warning about is that mismatch going systemic.
In practical terms, this means:
You might be forced to self-regulate—or worse, guess.
Public backlash could land on you, even if you’re not the worst actor.
You may be pulled into political, legal, or geopolitical fights you didn’t start.
And in the absence of strong institutions, the burden of responsible innovation shifts downstream—to founders, designers, engineers, and investors.
That’s a lot to ask. But it’s the reality we’re heading into.
“In the absence of functioning gatekeepers, the builders become the last line of defense.”
What Comes Next?
These chapters lay the groundwork for the final pivot of the book. Suleyman has now mapped:
The runaway nature of exponential tech.
The structural traps of incentive and competition.
The failure of institutions to keep up.
So what do we do?
The final third of the book is his answer. A set of proposals—some familiar, some radical—on how we might build new forms of containment without halting progress or giving in to authoritarianism.
8. The Dilemma: Collapse vs Control
If the first two-thirds of The Coming Wave is a map of growing risk, the final third begins with a fork in the road. In Chapter 12, Suleyman distills the problem into its most difficult—and most important—choice:
“If we fail to contain the coming wave, we risk catastrophe. If we try to contain it too tightly, we risk tyranny.”
This is the core dilemma of the 21st century. And it’s not theoretical. It’s already unfolding, bit by bit, in how we respond to AI, synthetic biology, pandemics, surveillance tech, and information warfare.
Suleyman’s question is this: how do we thread the needle between chaos and control?
The Risk of Collapse
Collapse isn’t just sci-fi dystopia or Hollywood-style apocalypse. Suleyman defines it more realistically—and more chillingly—as loss of control at a critical point.
He offers examples:
A synthetic pathogen released—accidentally or deliberately—that spreads faster than governments can respond.
A highly capable AI system deployed too widely without safety constraints, manipulating financial markets or destabilizing political systems.
A misinformation campaign powered by personalized AI agents that floods public discourse ahead of an election, rendering democratic debate meaningless.
Each scenario is technically plausible. Each has precursors in recent history:
COVID-19 showed how unprepared we are for coordinated biological threats.
AI-generated misinformation and deepfakes have already swayed narratives in elections and wars.
Algorithmic trading has caused flash crashes we didn’t anticipate or understand in real time.
“Collapse doesn’t require a superintelligence. It only requires one unaligned actor and a fragile system.”
That’s the asymmetry at the heart of Suleyman’s argument. One person with the right tools can outscale the defenses of entire institutions.
The Risk of Control
But the alternative isn’t rosy either.
If the world wakes up to the scale of these risks—through an actual disaster or even a close call—it’s easy to imagine the swing toward heavy-handed containment:
Governments demanding real-time access to compute logs, DNA printers, or model weights.
Centralized surveillance of scientific research or cloud usage.
Social scoring systems or pre-crime algorithms to flag potential risks.
Global restrictions on knowledge sharing, enforced by a small group of actors.
This kind of control wouldn’t just slow innovation. It could undermine democracy, privacy, and autonomy in the name of safety.
Suleyman draws a clear line: containment must not become authoritarianism. But he also doesn’t dismiss the reality that some level of restriction is necessary.
That’s the paradox: we need constraint, but not domination.
The Hardest Tradeoff in Tech
For builders, this is a new kind of tension. Not the usual tradeoff between speed and polish, or between UX and revenue—but between freedom and stability.
Do you build a tool that could be misused, knowing the upside is real?
Do you open-source a model, knowing it might be abused by others?
Do you deploy agents or bio-tools before the edge cases are fully explored?
These aren’t abstract questions anymore. They are starting to show up in product meetings, pitch decks, and internal debates across AI labs, bio startups, and defense-tech orgs.
“The greatest danger of the coming wave is not just what we build. It’s how we respond when it goes wrong.”
That’s where builders have to think beyond business metrics. Because the choices we make at the edge of innovation set precedents for how society responds.
No Default Outcome
Suleyman makes one thing clear: there is no natural balancing act. There is no invisible hand ensuring things don’t go too far in either direction.
“Without deliberate effort, we will either drift toward collapse or be driven into control.”
That’s why this dilemma matters so much. It’s not just theoretical. It’s directional. And the decisions being made now—in boardrooms, labs, and open-source communities—will push us toward one pole or the other.
For Founders and Builders: What’s the Ask?
You’re not expected to solve global governance. But you are in a position of leverage.
How you frame your roadmap matters.
How you build safety into your tools matters.
How you shape your culture, your terms of service, your open-source policies—these all matter.
Because the real dilemma isn’t collapse versus control. It’s can we find a third path—deliberate containment, done wisely, done early, and done with input from everyone building the future?
That’s where Suleyman turns next: to the question of how containment might actually work, if we decide to try.
9. The Path Forward: Containment Must Be Possible
After laying out so many reasons to believe containment is impossible, Suleyman makes a surprising turn.
In Chapter 13, he insists that containment must be possible—not because it’s easy, or because we’re good at it, but because the alternative is so dangerous it’s unacceptable.
“The history of human civilization is not one of inevitability, but of decision. We’ve faced cascading threats before. We’ve bent the curve before. We must do it again.”
This section of the book is the bridge from diagnosis to action—from why we’re in trouble to what we can actually do about it. And while Suleyman doesn’t offer a silver bullet, he lays out a multi-layered strategy that echoes how we’ve managed other existential threats in the past.
It’s not about stopping the wave. It’s about building buffers, brakes, and cultural norms that slow its worst consequences and align it with human values.
Containment Is Not One Thing
Suleyman makes a key distinction: containment doesn’t mean halting innovation. It means shaping its trajectory.
He compares it to how we’ve managed:
Nuclear weapons through treaties, verification, and doctrine
Climate change through emission caps, green subsidies, and cultural shifts
Aviation safety through decades of regulation, transparency, and black box standards
Each of these was messy, contested, and often late—but eventually, coordination emerged.
“Containment is not a single policy. It’s a system—a mesh of norms, controls, incentives, and oversight.”
For tech builders, that means you don’t need a global treaty to get started. You need layered thinking and principled defaults.
Ten Steps Toward Containment
In Chapter 14, Suleyman proposes ten categories of action, designed to reinforce one another. They range from highly technical to deeply institutional.
Here’s a snapshot of his containment framework:
Technical Safety
Invest massively in safety research: interpretability, adversarial testing, red-teaming.
Fund work like Anthropic’s “constitutional AI” or OpenAI’s alignment research.
Secure-by-Design Infrastructure
Build labs and platforms that limit misuse by default: biosafety, access logs, compartmentalized models.
This echoes cybersecurity’s “zero trust” mindset.
Access Controls
Require credentials, licenses, or tiers of access for powerful models or gene-editing tools.
Think of it like pilot certification, but for synthetic biology.
Red Lines and Kill Switches
Define global boundaries—e.g., autonomous weapons, certain DNA sequences.
Build kill switches into high-risk systems, with shared oversight.
Institutional Capacity
Create agencies with real teeth and real talent: AI equivalents of the FDA or FAA.
Staff them with technical leaders, not generalists or lobbyists.
New Business Incentives
Reward safety, transparency, and long-term thinking—not just user growth and revenue.
Use grantmaking, public-private partnerships, or ethical investment vehicles.
Treaties and Norms
Push for global agreements like the Geneva Convention of Tech: imperfect, but essential.
Norms often precede laws—and bind faster.
Public Awareness and Debate
Treat this like climate: bring the public in early, often, and honestly.
Use media, education, and storytelling to surface tradeoffs—not just hype.
Technological Culture Shift
Embed values into engineering culture: responsibility, humility, caution.
Normalize saying “we don’t understand this yet” as a badge of maturity.
Global Coordination
This is the capstone: no one country, company, or team can do it alone.
Start with coalitions, multilateral initiatives, or shared disclosure protocols.
“We don’t need a perfect system. We need momentum in the right direction.”
Why This Approach Matters for Builders
Suleyman’s model doesn’t demand you become a policy expert. It asks you to build safety into the product stack, and to advocate for new defaults—especially if you’re early in a market or working with foundational tech.
For example:
If you’re training models, are you including evals for misuse or bias?
If you’re building agents, do they have override triggers or constraints?
If you’re open-sourcing, have you thought about red-teaming or abuse vectors?
And if you’re investing: are you asking teams what “safe scaling” looks like for them?
This is not just about compliance. It’s about designing for resilience before the backlash arrives.
Containment as a Movement, Not a Mandate
The big insight here is that containment isn’t one law or one org’s job. It’s a new design principle for an interconnected world.
Suleyman doesn’t pretend it will be easy. He admits we’re behind. But he believes that like past industrial revolutions, we can eventually layer enough rules, norms, and values to make the wave survivable—and even beneficial.
“We will not tame the coming wave by pretending it isn’t coming. We will contain it by meeting it, naming it, and shaping it—together.”
10. Takeaways for Builders, Founders, and Investors
You’ve made it this far, which means you’re not just curious—you care about what’s coming. So let’s zoom in and make this concrete.
What does The Coming Wave mean for the people building the future?
Whether you're an engineer shipping agent-based workflows, a founder training vertical models, or an investor evaluating synthetic biology startups, here are key takeaways from Suleyman’s book.
1. You’re Already In It
This isn’t just about frontier labs or defense departments. If you're building with AI, biotech, automation, or digital infrastructure—you’re part of the wave. Even basic applications can scale into systems that affect markets, users, and democratic norms.
The message: you’re not on the sidelines. You’re a primary actor.
“The most powerful institutions in the 21st century are no longer only governments. They are technical systems, shaped by engineers and governed by code.”
2. Build for Containment, Not Just Scale
Safety, alignment, monitoring, and misuse prevention aren’t add-ons. They need to be part of your product design, API documentation, and team culture. Defaulting to open-source or full access isn’t neutral—it’s a decision.
Ask:
How could this be abused?
What happens at 10x or 100x scale?
What signals would tell us things are going wrong?
Containment is a posture, not a PR play.
3. Don’t Wait for the Regulator
Suleyman’s chapters on institutional failure are a wake-up call. Governments move slowly, and you’ll likely be out ahead of them for years.
That’s not license to move recklessly. It’s a responsibility to self-regulate, design proactively, and contribute to norms.
If you’re in a leadership role, build internal review boards. If you’re early-stage, make ethical constraints part of your moat. If you’re investing, ask the hard questions now—before the damage is public.
4. Normalize Uncertainty and Humility
One of the book’s quietest but strongest arguments is cultural: we need to shift from “move fast and break things” to “move thoughtfully and monitor continuously.”
That doesn't mean fear-based stagnation. It means knowing what you're releasing, tracking how it's used, and accepting that you don’t fully understand what the system might become.
It also means listening outside the echo chamber—especially to users, ethicists, civic actors, and people affected by your tools.
“Containment is not a limit on innovation. It is what makes innovation safe to scale.”
5. The Best Builders Will Be Containment-Native
There’s a strategic upside here. As users, regulators, and institutions catch up to the risks, containment-native companies will have the most durable advantage.
That means:
Trust from customers.
Reduced reputational risk.
Better long-term unit economics.
Less friction when policies shift.
Think of it like carbon-neutral infrastructure or secure-by-design software. It looks slow at first, but it wins over time.
11. Final Thoughts: Hopeful, Not Naive
When you first hear the premise of The Coming Wave, it might sound bleak—AI out of control, synthetic biology leaking into the wild, institutions too slow to respond.
But Suleyman’s book isn’t fatalistic. It’s urgent, yes. But also pragmatic. And ultimately, hopeful—not because the future is safe, but because we still have a choice.
“The coming wave cannot be stopped. But it can be contained. If we start now.”
That’s the thread running through every chapter: if we take the risks seriously, act early, and design responsibly, we can shape the trajectory of the technologies we unleash.
And if we don’t?
Then the future won’t be defined by what we build—but by what breaks.
You won’t find all the answers in this book. But you will find the right questions. And if you’re serious about building for the long term, that’s the best place to start.