November 27, 2023

Failing well: Learning from failure in healthcare

By Sergio Zanotti, MD, FCCM 

Our relationship with failure in healthcare is broken. As clinicians, we hold deeply ingrained convictions about failure that, unfortunately, are incorrect and make it difficult to learn from failure when it does occur. 

In my recent conversation with Amy Edmondson, Novartis Professor of Leadership and Management at the Harvard Business School and author of “Right Kind of Wrong: The Science of Failing Well,” we explore the ways in which we’re taught that failure is bad. We’re told that failure and high standards cannot coexist, and — perhaps most damaging — that with failure, someone is always to blame. In addition to these erroneous beliefs, how often do we conclude that we simply must try harder in order not to fail? 

We benefit when we think differently about failure — especially in healthcare. We do that by embracing what Edmondson calls a “failing well” mindset. 

 

What does it mean to fail well? 

To fail well is to have a healthy relationship with failure. It starts with a clear understanding of the different types of failure, and it requires us to be deliberate and systematic in preventing as many of those failures as possible. When we do fail, we learn from it without blame and in an environment that fosters psychological safety — the shared belief that a team is safe for interpersonal risk-taking. 

Failure is guaranteed to occur in our clinical practice, and we owe it to our patients to learn how to fail well. 

 

The three types of failures 

A failure is an outcome that deviates from the desired results. An error is an unintended deviation from pre-specified standards. And a violation is an intentional deviation from the rules. Our interest in healthcare is in failures and errors. 

The first step in developing a failing-well mindset is understanding that not all failures are bad, blameworthy, or created equal. There are three types of failures: basic failures, complex failures, and intelligent failures. 

Basic failures 

Basic failures occur in known territory, within routine operations. They happen even though we have the requisite knowledge to do things right, and they tend to have a single cause and to be unintentional. The amputation of the wrong lower extremity is an example of a basic failure with devastating consequences. Basic failures are often driven by human factors such as inattention, faulty assumptions, neglect, and overconfidence. When we maximize our efforts and systems to prevent basic failures, we are failing well. 

Checklists, long used in aviation and now common in healthcare, are intended to codify known processes and prevent basic failures. In healthcare, these safeguards take forms such as electronic medical record prompts and the routine verification of a patient’s name and birthdate: innovations born of past basic failures that help us prevent future ones. 

Complex failures 

Complex failures occur in familiar settings, but typically in complicated environments with unexpected interactions. They result from multiple interconnected events that interact in nonlinear ways, so small initial problems can lead to disproportionately catastrophic outcomes. A massive oil spill, a space shuttle explosion, and the collapse of a large building are all examples of complex failures. 

In healthcare, complex failures are widespread, especially in environments such as the intensive care unit, operating room, emergency department, and inpatient care settings. We often use the Swiss cheese model to explain how complex failures occur. Imagine that each slice of Swiss cheese represents one step in a patient’s trajectory. Now imagine a stack of slices whose holes more or less line up. That series of overlapping holes represents the pathway through which a complex failure can occur. But each slice also represents an opportunity to avert disaster, one step at a time. 

When complex failures do occur, the onus is on us to implement changes in our systems to prevent them from happening again.  

Intelligent failures 

Finally, we have intelligent failures. As their name suggests, these are failures we want and even celebrate. They occur in new or unknown territory and are driven by the opportunity to advance our understanding of a problem. These experiments or pilots are intentionally designed to be contained, yet capable of providing useful lessons from which we can innovate. 

Intelligent failures are common in basic science, pharmaceutical development, and in developing new processes for care in unknown territory — such as navigating a pandemic due to a novel infectious disease. 

 

Putting failures to good use 

The science of failing well is a rapidly growing field. Improving our troubled relationship with failure in medicine is critical to our ability to bring better care to the bedside. And failing well requires change at the individual, team, and system levels. 

Individual level 

Failing well starts with developing a keen sense of self-awareness. It’s about changing how we frame failure and learning to embrace failure as an opportunity to grow. We must learn to think like scientists, and infuse ourselves with a healthy dose of humility (What I know is far less than what I do not know) and doubt (How do I know I am right?). 

Self-awareness is likely to lead to genuine curiosity and discovery. In this new dynamic, when we fail, we respond with healthy cognitive habits. A practical mental framework to apply in these situations is “stop-challenge-choose.” After we identify an error, the first step is to stop and reflect on what happened. Often we’re flooded with our own negative stories about the failure; the next step is to challenge those stories and seek a clearer understanding of the situation and what led to it. 

Finally, we must choose the best next steps to minimize the impact of failure and maximize our ability to learn from the situation.   

Team level 

At the team level, the key is situational awareness: identifying how failure and our actions affect those around us. The fundamental component for teams trying to fail well is psychological safety. In psychologically safe teams, members know they’re valued for what they bring to the team and feel safe to ask questions, report mistakes, and learn without being demeaned, ostracized, or criticized. Leaders build psychological safety by framing the work accurately, inviting participation, and responding productively. 

System level 

At the organization or system level, the hard truth is that a flawed system will beat good people every time. The antidote is for organizations to implement strategies that foster a culture of failing well. High-reliability organizations, or HROs, are examples of systems set up to excel at failing well. HROs share common characteristics: they’re preoccupied with failure; reluctant to simplify; acutely sensitive to ongoing operations and quick to identify when something’s out of line; committed to resilience, growing stronger from lessons learned during failure; and deferential to expertise over rank. 

A nuclear power plant with an impeccable safety record is an excellent example of an HRO, and one a hospital can strive to emulate. Through system awareness, hospitals can foster a culture of failing well with initiatives such as blameless reporting, early catch processes, and failure celebrations. Hospitals can move the culture from “Who did it?” to “What happened?” and finally to “What can we learn?” 

For a deeper dive into what it means to fail well, listen to my full conversation with Edmondson on the “Critical Matters” podcast. 

 
