The Right Way to Get It Wrong
Inside Right Kind of Wrong and the Playbook for Learning from the Mistakes That Actually Matter
This post is a deep dive into Right Kind of Wrong: The Science of Failing Well by Amy Edmondson, winner of the Financial Times and Schroders Business Book of the Year 2023 and a Behavioral Scientist Notable Book of 2023.
Section I: Introduction — What if We’ve Been Taught to Fear the Wrong Kind of Failure?
Let’s be honest—“fail fast” has become a bit of a punchline.
We put it on t-shirts, sprinkle it into investor decks, and quote it in retros. It sounds bold. Anti-fragile. Growth-minded. But most of the time, when something actually goes wrong—really wrong—people don’t high-five each other and run a thoughtful experiment write-up. They go silent. They blame. They panic. They spin the narrative.
So here’s the uncomfortable truth: most teams don’t know how to fail well.
And that’s where Right Kind of Wrong by Amy Edmondson comes in—not to make failure glamorous, but to make it useful.
Edmondson, a Harvard Business School professor known for pioneering the concept of psychological safety, argues that our approach to failure is broken because we treat it as one big, messy category. “Failure,” she writes, “is not a monolith. It comes in many varieties, some of which are praiseworthy and others not so much.”
That’s the core of the book: a bold reframe that says not all failures are created equal—and only one kind deserves to be encouraged.
In fact, Edmondson gives us a simple but powerful map. She breaks failure down into three types:
Basic Failure – avoidable mistakes in familiar territory
Complex Failure – unpredictable breakdowns in interconnected systems
Intelligent Failure – thoughtful, low-cost experiments in new territory
It’s that third kind—the “right kind of wrong”—that Edmondson wants us to seek out, embrace, and learn from.
This isn’t just a semantic game. In a world defined by uncertainty, the ability to recognize and respond to different kinds of failure is a competitive edge. As she puts it:
“Failure is the unavoidable price of learning and discovery. But only some failures offer a good return on that investment.”
If you’re building a product, running a team, launching a new strategy—this matters. A lot. Because no matter how smart or careful you are, things will go wrong. And how you handle those moments will determine whether your company grows, stagnates, or breaks apart.
This book isn’t the first to tell us that failure is part of innovation. But it might be the first to offer a clear framework, tested tools, and grounded stories for how to actually put that into practice—without the usual tech-world bravado or self-help fluff.
Here’s what we’ll explore in this post:
Why the way we talk about failure is broken—and how Edmondson reframes it
A deep dive into the three types of failure (basic, complex, intelligent)
Why smart people struggle to learn from failure—and how to fix it
How psychological safety turns mistakes into momentum
What the best organizations do to build “fail-well” cultures
Tools and mindsets you can start using right now
How this book compares to other business and personal growth classics
Because failing forward isn’t a slogan. It’s a skill. And it’s one we all need to get better at.
Section II: The Problem With How We Talk About Failure
We love to say that failure is good.
In pitch decks and founder interviews, in Medium posts and Monday standups, it’s almost a badge of honor: “We learned so much from that failed launch.” “Fail fast.” “Failure is just feedback.”
But let’s be real. Most of the time, when things go wrong—especially in public, especially at work—failure doesn’t feel like feedback. It feels like fear. It feels like embarrassment, blame, or silence. The gap between the slogan of failure and the experience of failure is wide. And Amy Edmondson is here to close it.
She argues that the problem isn’t failure itself. It’s that we don’t actually know what kind of failure we’re dealing with.
We treat all failure as one thing. And that’s the trap.
In her words:
“We need to stop talking about failure as if it's a monolithic concept. Some failures are worth celebrating. Others are deeply regrettable. Lumping them together is not only confusing—it’s dangerous.”
This is where Right Kind of Wrong makes its first big contribution: Edmondson introduces a taxonomy of failure that forces us to get specific. Not all failures deserve applause. In fact, most don’t. But some do—because they are structured, intentional, and designed to generate insight.
It’s a critical distinction. One that The Lean Startup touched on, but never fully unpacked. Eric Ries encouraged “validated learning” through rapid iteration, but he didn’t offer much help for when those experiments went sideways. Edmondson picks up where that playbook left off.
To make this real, let’s take two examples.
First, a basic failure: a software engineer accidentally pushes untested code to production. It breaks a key feature. Users churn. Revenue dips. This was a routine task with a clear protocol—someone skipped a step. This isn’t an “awesome learning moment.” It’s a miss. Avoidable and costly.
Now contrast that with an intelligent failure: a product team hypothesizes that users might prefer voice interactions over tapping, and builds a lightweight prototype to test the concept. The result? No traction. No click-throughs. But they gather qualitative feedback that points to a completely different unmet need—around multitasking in quiet environments. That failed experiment becomes the seed for a new product feature that ends up sticking.
Same outward result—something didn’t work. But very different types of failure. One costs you. The other teaches you.
And what about complex failures?
These are the ones that sneak up on you—not because anyone did something catastrophically wrong, but because lots of small things added up. Edmondson shares the story of the Torrey Canyon, a massive oil tanker whose grounding off the British coast in 1967 caused one of the worst environmental disasters in UK history.
Captain Pastrengo Rugiati made a series of decisions under pressure—navigating in fog, trying to avoid lobster boats, relying on outdated steering controls. None of them, on their own, would have caused a disaster. But together? Catastrophic failure.
What’s important here is that the system didn’t give him space to fail safely. As Edmondson puts it,
“We want someone to blame, but complex failures are rarely the fault of a single individual. They’re the result of weak signals, unclear incentives, and poor feedback loops.”
In other words, culture and systems matter as much as individual decisions.
So why does this classification matter so much?
Because when teams don’t know what kind of failure they’re dealing with, they fall into predictable traps:
They punish intelligent failure and discourage risk-taking.
They ignore basic failure and let sloppiness persist.
They scapegoat in complex failure scenarios instead of fixing systemic issues.
It’s not enough to say “we value failure” in a values deck. What Edmondson challenges us to do is ask: Which kinds of failure are we rewarding—and which ones are quietly killing our performance?
This is the shift: stop lumping all failure together. Start learning how to fail on purpose.
Section III: Three Types of Failure — and Why Only One Deserves a Trophy
Amy Edmondson’s most practical contribution in Right Kind of Wrong is also her simplest: not all failures are created equal.
In fact, most failures aren’t worth celebrating. Some are wasteful. Some are tragic. But one kind—the intelligent kind—is not only forgivable, it’s essential.
Let’s walk through her three-part framework.
1. Basic Failure: The Avoidable Kind
This is the kind of failure most of us fear—and for good reason. It’s the “we knew better but messed it up anyway” variety. A task is routine, the knowledge is available, the process is known—but someone skips a step, miscommunicates, or just drops the ball.
Think of an airline mechanic forgetting to tighten a bolt. A junior PM missing a critical update in a release checklist. A restaurant undercooking chicken on a rush night. These are preventable mistakes, not learning opportunities.
As Edmondson puts it:
“Basic failures stem from deviations from known procedures or inadequate attention to detail. They are not the price of progress. They’re the cost of inattention.”
The lesson here is straightforward: build systems and training to reduce basic failure. They’re not noble, they’re just costly.
2. Complex Failure: The System Crashes
Complex failures happen when multiple things go slightly wrong at once—and the system isn’t resilient enough to catch them.
One of the most vivid examples Edmondson gives is the Torrey Canyon disaster. The oil tanker was under pressure to make up for lost time. The captain bypassed standard routes to meet a deadline. Steering issues, visibility constraints, and outdated controls all compounded—and the ship struck a reef.
This wasn’t a case of blatant negligence. It was normal work under abnormal pressure, in a fragile system.
In tech, you might see this in multi-service outages where latency, a bad config file, and a half-migrated database collide. No one person “caused” the failure—but the org wasn’t prepared to respond.
Complex failures teach us the importance of:
Redundancy
Feedback loops
Open reporting of weak signals
And they demand that we resist the urge to blame a single actor when the system was the problem.
3. Intelligent Failure: The Kind You Want More Of
Here’s the heart of the book. Intelligent failures happen when:
You’re operating in new territory
You’ve formulated a clear hypothesis
You’ve designed for low-cost learning
You can quickly extract insights
This is the kind of failure that powers scientific breakthroughs, business innovation, and personal growth.
Edmondson offers a wonderful example from Brighton College, where a group of teens worked on solving “avocado hand”—a surprisingly common kitchen injury. They went through multiple prototypes of a safe-slicing tool. Early versions failed, but they kept iterating. Their final design won awards and drew commercial interest.
Every step along the way involved small, safe failures. And every failure gave them more insight into user needs and functional design.
This mindset is deeply aligned with product experimentation. But what Right Kind of Wrong adds is the emotional and organizational context. It’s not just about testing ideas. It’s about making failure safe, intentional, and something to be talked about openly.
Edmondson writes:
“The right kind of wrong is not accidental. It’s built on disciplined curiosity.”
In this sense, intelligent failure is more than a tactic. It’s a leadership philosophy.
If you're leading a startup, a product team, or an R&D group, this framework should be your default language. Before a new project or risky bet, ask: Are we operating in familiar territory or unknown ground? If it fails, will it be basic, complex, or intelligent?
Because the goal isn’t to avoid failure. It’s to design for the kind that teaches you the most, at the lowest cost.
Section IV: Why We Struggle to Learn From Failure
By now, the logic is clear: some failures are worth having. Intelligent failure, in particular, is the price of innovation. So why don’t more people—and more teams—lean into it?
Amy Edmondson’s answer is both obvious and deeply human: we get in our own way.
The biggest obstacle to learning from failure isn’t a lack of process or time or data.
It’s ego.
We like to think we’re rational beings, especially in professional settings. But when we fail—especially publicly—our instinct is rarely curiosity. It’s self-protection. We rewrite the story. We downplay the mistake. We point fingers. Or we go quiet and hope it blows over.
Edmondson points out that this is rooted in biology as much as behavior. Admitting fault is hardwired to feel threatening. Our brains perceive social rejection—being seen as incompetent or wrong—as a real danger. So we dodge it.
She writes:
“Rather than staying open to the idea that a failure might reveal something valuable, our default response is to close down, to deflect, or to blame.”
This is why saying “fail fast” isn’t enough. Learning from failure takes emotional discipline.
One of the most memorable examples in the book comes from Johannes Haushofer, a professor at Princeton who went viral for publishing his “CV of Failures.” Instead of just listing academic awards and accomplishments, he published a list of all the fellowships, jobs, and papers he didn’t get.
It was disarming. Vulnerable. And incredibly powerful.
As Haushofer explained, “Most of what I try fails, but these failures are often invisible, while the successes are visible. This gives others the false impression that most things work out for me.”
What made the post resonate wasn’t the content—it was the honesty. By opening up about his misses, Haushofer modeled something Edmondson emphasizes throughout the book: psychological safety begins with leaders showing fallibility.
There’s a second story Edmondson shares that’s worth mentioning—one that brings a different kind of rigor to failure.
After a string of bad dates, tech futurist Amy Webb did something unusual: she made a spreadsheet. She reverse-engineered the matching algorithms of online dating platforms, identified patterns, and tested new strategies. Her process was scientific—and her early missteps were reframed as data, not disasters.
That project led to her bestselling book Data, A Love Story. It’s a classic case of intelligent failure applied not in a lab or boardroom—but in a deeply personal space.
It also shows something Edmondson wants us to remember: failing well isn’t about what domain you’re in. It’s about how you frame the experience.
And here’s where the book gets especially relevant for founders and leaders: we’re worse at learning from our own failures than from other people’s.
Studies cited in the book show that when people observe others fail, they’re more analytical and curious. When it’s their failure? Emotions override analysis. They shut down.
That’s why Edmondson argues that failure can’t just be a personal virtue—it needs to be a team habit. An expected, structured part of how we work and reflect together.
If you’ve ever run a postmortem where no one wanted to speak first—or a retro that felt like group therapy—you’ve felt this dynamic. Without psychological safety, failure conversations become blame games. Or worse, PR spin.
So how do we shift that?
Edmondson points to the power of language and norms. Teams that regularly ask questions like:
“What surprised us?”
“What’s something we didn’t expect?”
“What’s a mistake we’re glad we caught early?”
build space for reflection without shame.
And leaders who admit what they don’t know—or what they got wrong—set the tone. Not by saying “it’s okay to fail,” but by showing it’s okay to be wrong and learn out loud.
The big idea here is that learning from failure isn’t intuitive—it’s designed.
We have to create conditions where honest reflection is not just allowed, but expected. Where small failures are surfaced early. And where intelligent risk-taking doesn’t come at the cost of reputation.
Because if we don’t do that, we fall into the same trap: saying we value learning, but punishing the very behaviors that make it possible.
Section V: Psychological Safety — The Hidden Engine of Innovation
If there’s one term Amy Edmondson is most famous for, it’s this: psychological safety.
She pioneered its study in teams and has researched it for decades. And in Right Kind of Wrong, she makes the case that it’s not just a nice-to-have—it’s the single most important condition for turning failure into progress.
So what is it?
Psychological safety means that in a team or group, people feel safe to speak up. To ask questions. To admit mistakes. To share ideas that aren’t fully formed yet. It’s not about comfort—it’s about freedom from interpersonal fear.
And without it? Failure becomes dangerous. People hide what went wrong. Weak signals get buried. Small problems snowball into disasters.
“When people fear being punished or humiliated for speaking up, they keep quiet,” Edmondson writes. “And when they keep quiet, organizations stop learning.”
This is where failure culture often breaks down—not in the big decisions, but in the micro-moments where someone notices something, hesitates, and says nothing.
One of the most sobering examples Edmondson discusses comes from NASA’s Columbia shuttle disaster in 2003.
Some engineers had noticed foam debris hitting the shuttle’s wing during launch—an event that would later prove catastrophic. But they didn’t feel empowered to escalate their concerns. The organization’s culture, still shaped by hierarchy and fear of being wrong, silenced the signals that could have saved lives.
This wasn’t a failure of data. It was a failure of dialogue.
After Columbia, NASA changed. It reorganized around early warnings. It encouraged dissent. It made room for technical voices to speak clearly and without shame. It began to practice what Edmondson calls “failing smart.”
So how do you create psychological safety?
It starts with leadership—but not the performative kind. You don’t build trust by declaring an open-door policy. You build it by modeling vulnerability, consistently and deliberately.
That means saying things like:
“I don’t know—what do you think?”
“I was wrong about that last week.”
“That was a risk worth taking. What did we learn?”
These are small phrases. But in a high-stakes environment, they signal everything.
They tell your team: this is a place where truth matters more than ego. Where experiments are expected. Where failing intelligently isn’t punished—it’s respected.
Edmondson connects this to Google’s Project Aristotle, a massive internal study on what made some teams better than others.
The conclusion surprised Google’s data-driven culture: it wasn’t talent or IQ or seniority that made teams high-performing. It was psychological safety. The best teams were the ones where people felt safe to speak, to challenge, and to fail visibly.
This is a powerful reminder for anyone leading a product team, a startup, or a classroom: innovation thrives on trust, not perfection.
Without it, you get silence. With it, you get iteration, reflection, and growth.
And psychological safety isn’t just for technical teams. In The Fearless Organization—Edmondson’s earlier book—she shared stories of nurses who stayed quiet about potential errors, employees who watched unethical decisions unfold but said nothing, and engineers who second-guessed risks but felt unsupported.
In each case, the culture made failure unsafe. So it became hidden. And that’s when it becomes dangerous.
The takeaway is simple: if you want to fail well, you have to normalize the act of failing out loud.
Create a space where someone saying “I might have missed something” isn’t met with blame, but with curiosity. Where experiments are shared—not just when they succeed, but when they teach.
Because only in those environments can intelligent failure become a flywheel for innovation.
Section VI: How to Build a Culture That Fails Well
At this point in the book, Edmondson shifts gears from diagnosis to design. We’ve looked at why people fear failure, why most teams mishandle it, and what psychological safety makes possible.
Now the question becomes: how do you actually build a culture where intelligent failure is welcomed, not punished?
It turns out, there are signals everywhere. And the best organizations don’t just say they value learning from mistakes—they design rituals, incentives, and feedback loops to prove it.
Let’s start with one of my favorite examples from the book: Eli Lilly’s failure parties.
In its R&D division, the pharmaceutical company began hosting events where teams presented failed drug trials—not as shameful defeats, but as critical contributions to science. The logic was simple: the faster you learn that a compound won’t work, the faster you can redirect resources toward the ones that might.
And instead of burying the result in a drawer, they built a norm of celebrating well-run experiments—even when the result was null.
It’s a powerful reframing. Failure wasn’t evidence of waste. It was proof of discipline.
Another story comes from Grey Advertising, where leadership created a “Heroic Failure” award. It recognized bold initiatives that didn’t pan out—but were executed with clarity, rigor, and ambition.
By surfacing those moments, Grey sent a clear message: We don’t only reward wins—we reward thoughtful risk.
These aren’t gimmicks. They’re cultural levers. They reduce fear. They shift incentives. And most importantly, they give teams language for talking about failure in public, productive ways.
You don’t have to throw a party every time a test doesn’t pan out. But you do need rituals that make reflection and transparency the default.
Edmondson outlines several practical tools, including:
1. Pre-mortems:
Before launching a new initiative, ask the team: “Imagine we’re six months in and this failed. What went wrong?” This opens space for quiet doubts, potential blind spots, and mitigations—before risk turns into regret.
2. Post-mortems (or post-project reviews):
But with a twist: focus not just on what went wrong, but on what kind of failure it was. Was it basic, complex, or intelligent? What do we want to repeat? What do we want to design against next time?
3. Red Teaming:
Appoint someone (or a sub-group) to challenge assumptions on purpose. Their job is to make the strongest possible case against the current plan. This invites dissent before reality forces it.
4. Regular “Learning Reviews”:
Make time every month or quarter to surface a “failure worth discussing.” Normalize shared reflection. Reward the team that learned something big—even if the test itself didn’t succeed.
These practices do more than process failure—they reposition it.
They create what Edmondson calls a “learning loop,” where action leads to reflection, which leads to better-informed action. The loop only works if it’s visible, supported, and shared.
And this is where Right Kind of Wrong really levels up from other innovation books.
The Lean Startup popularized the idea of rapid iteration—but said little about how to help teams emotionally survive and culturally support that process. Edmondson fills that gap. She reminds us that behind every pivot is a team asking: “Are we still safe to speak honestly?”
So if you’re leading a team—or even just managing yourself—ask:
What stories do we tell about past failures?
What rituals make space for learning?
Do we only reward clean wins, or also celebrate clean losses that taught us something?
Because the goal isn’t to glamorize failure. It’s to get smarter, faster—together.
Section VII: Practicing the Right Kind of Wrong
If there’s one idea Amy Edmondson returns to again and again, it’s this: failing well is not a personality trait. It’s a practice.
And like any good practice, it’s not something you master overnight—or apply only in dramatic moments. It’s something you do every day, in small decisions, with curiosity and intention.
So what does that actually look like?
First, it starts with how you frame failure before it even happens.
Too often, we go into a project or idea hoping it works—and bracing emotionally if it doesn’t. But Edmondson argues that the better mindset is this: go in expecting to learn. Treat failure not as an exception, but as a potential outcome that can still be valuable.
This is what separates intelligent failure from disappointment. You’re not reacting—you’re designing for discovery.
One simple question to ask upfront:
“What hypothesis are we testing here?”
That one move—framing a goal as a test—turns success or failure into insight. It’s what scientists do. It’s what smart product teams do. And it’s what we can all do more often.
Second, it requires self-awareness—especially when things go wrong.
Edmondson doesn’t sugarcoat this part. Our first instinct after failure is rarely curiosity. It’s usually embarrassment, defensiveness, or shame.
So to fail well, we need to slow down the internal narrative.
Ask:
What was I trying to do?
What did I expect to happen?
What surprised me?
What can I take from this, even if it stings?
She writes,
“Failing well starts with noticing what happened, without rushing to judge yourself or others.”
In other words: don’t confuse failure with identity. Learn from the event—but don’t internalize it as truth about who you are.
This mirrors Carol Dweck’s work on growth mindset, which Edmondson indirectly builds on. A fixed mindset says: “I failed, therefore I’m not good at this.” A growth mindset says: “That didn’t work—what does that teach me?”
Third, it requires systems thinking—especially in teams.
Sometimes we blame ourselves for things that are actually structural. And sometimes we let systems off the hook because we don’t zoom out far enough to see the patterns.
Edmondson’s case studies are full of this insight. One example is the Torrey Canyon spill again—where a captain under extreme pressure made a series of decisions that, in hindsight, look reckless. But in context? He was operating within flawed systems, weak feedback loops, and high-pressure incentives.
This is a crucial habit of failing well: ask not just “What did I miss?” but “What about the system made this outcome possible?”
Blame doesn’t build resilience. But analysis does.
One of the most grounded examples of this kind of mindset shift comes from Ray Dalio, founder of Bridgewater Associates.
In the early 1980s, Dalio made a big economic prediction—that the U.S. economy would crash following a credit crunch. He was wrong. Spectacularly wrong. The crash never came, and his fund nearly collapsed.
Instead of walking away, he used the failure as a pivot point. Dalio built a new system around radical transparency and continuous learning—eventually formalized in Principles, his company’s operating philosophy.
What changed? He started asking himself, “How do I know I’m right?” before making decisions. And he designed feedback loops that made it easier to catch blind spots early.
That’s intelligent failure at work. It’s not glamorous. But it’s powerful.
So how do you practice the right kind of wrong?
Edmondson leaves us with a few simple disciplines:
Test new ideas with purpose. Be clear about the hypothesis.
Reflect without judgment. Write down one failure a week—and what it taught you.
Talk about it. The moment you make failure visible, it loses its power over you.
Zoom out. Look for system patterns—not just personal missteps.
Reinforce the loop. If something failed well, celebrate the insight it gave you.
Because failing well isn’t just about avoiding catastrophe. It’s about building resilience. You become someone who’s not afraid to try, because you trust yourself to learn no matter what happens.
Section VIII: Why This Book Stands Out — and Who It’s For
Let’s be real. There’s no shortage of books that tell us to “embrace failure.”
Some preach hustle culture. Others focus on grit and bounce-back stories. And a few, like The Lean Startup or Thinking in Bets, get into experimentation and decision-making under uncertainty.
So what makes Right Kind of Wrong different?
Three things stand out.
First, Edmondson gives us a vocabulary.
Where other books say “learn from your mistakes,” she breaks it down into basic, complex, and intelligent failure—a simple yet powerful framework that helps you analyze what actually happened and how to respond. That taxonomy alone is worth the price of admission. It’s actionable, memorable, and repeatable—whether you’re debriefing a failed launch or reflecting on a personal misstep.
Most of us lump all failure into one emotional bucket. Edmondson helps us sort it—so we can stop reacting and start learning.
Second, she connects failure to team culture with uncommon depth.
Plenty of books talk about resilience or grit from an individual perspective. But Edmondson makes it clear: you can’t fail well in isolation. You need psychological safety. You need leaders who model vulnerability. You need rituals and systems that make learning visible and normalized.
This builds on her earlier work (The Fearless Organization), but Right Kind of Wrong goes further—it connects the dots between culture, learning, experimentation, and innovation.
It’s a leadership book. A product book. And a human book.
Third, she doesn’t romanticize failure.
There’s no “fail faster, hustle harder” energy here. Edmondson is a researcher, not a hype artist. She respects the cost of failure—especially for those in high-risk environments like medicine, aerospace, or finance.
But she also makes the case that failure is inevitable—and ignoring that fact is far riskier.
As she writes:
“The most dangerous failures are not the ones we talk about. They’re the ones we never see coming—because no one felt safe enough to raise their hand.”
That kind of insight sticks. And it elevates the book beyond the usual “resilience porn” into something truly thoughtful and useful.
So who should read this book?
If you’re a founder, you need this to build a culture that survives pivots and setbacks.
If you’re a product leader or PM, it’ll help you turn retros into engines of learning—not blame.
If you’re an educator, coach, or manager, you’ll walk away with language and rituals that help your people grow faster and recover better.
If you’re a builder of any kind—startup, studio, side project, school—this book will sharpen your thinking, upgrade your feedback loops, and strengthen your team dynamics.
It’s not a hype book. It’s a habits book. The kind you highlight, quote in a team doc, and return to after a hard week.
Section IX: Takeaways and Call to Action
So here we are.
We’ve talked about why failure is so hard to deal with. We’ve looked at Edmondson’s three-part framework. We’ve seen how ego and culture can block learning, and how psychological safety opens the door to progress. And we’ve heard real stories—from oil tankers to dating apps to hedge fund collapses—that show what failing well actually looks like in practice.
But let’s make this even more useful.
Here are a few simple things you can do this week to start practicing the right kind of wrong:
1. Run a “Failure Inventory.”
Pick a recent initiative that didn’t go to plan. Ask yourself:
Was this a basic, complex, or intelligent failure?
What did I learn?
What could I share with my team?
Just putting language to the failure helps you stop personalizing it—and start extracting value from it.
2. Try a Pre-Mortem.
Before your next project kickoff, set aside 15 minutes. Ask the team:
“Assume this totally fails—what went wrong?”
It’s simple. And it surfaces weak signals before they become expensive problems.
3. Talk About One Failure Openly.
In a Slack thread, a team check-in, or a coffee chat, share one intelligent failure from your recent work—and what it taught you. Keep it brief and clear. You’ll be surprised how fast others follow suit.
4. Shift Your Language.
Start replacing “what went wrong?” with:
“What did we learn?”
“What did we expect—and what surprised us?”
“Was this a failure worth having?”
It takes the sting out of the conversation—and turns it into insight.
5. Make Space for Reflection.
Block 20 minutes at the end of your week to write down one thing that didn’t work—and what kind of failure it was. Basic? Complex? Intelligent? Even the act of labeling builds your failure fluency.
Because here’s the truth: if you’re building something ambitious, you’re going to get things wrong. You’re going to test ideas that fall flat, say things you wish you hadn’t, make bets that don’t pay off.
The question isn’t whether you’ll fail. It’s how you’ll respond—and what you’ll build in the process.
As Amy Edmondson writes:
“Failing well is not the opposite of success. It’s how success happens.”
So go build something. Take a smart risk. Run the experiment.
And when it doesn’t go how you planned? Don’t bury it.
Name it. Share it. Learn from it.
That’s how you fail well—and move forward.