“Failure” is a scary-sounding word, because it’s a negative outcome. When we use it in everyday conversation there’s an inherent judgement baked in:
“They are a failure as a parent.”
“I failed my math test.”
“Don’t fail me.”
“You missed the bus? FAIL!”
Usually when we talk about failure, it’s in the context of “How do we avoid this terrible thing?” Colloquially, this is fine, but when you’re building a product, it’s a problem. Whether you’re working on an entirely new app or just a new feature, there are countless ways that things can (and will) fail on the way to building something successful. And those failures aren’t negative; they’re useful data.
User testing is a good example of useful failure. Say you’re reworking an educational website’s global navigation. The goal for the update is to make it easier for visitors to find course materials to download. You and your team have worked hard on a solution, and it’s time to put it in front of testers. Things kick off, and right from the start it’s clear that your testers can’t find that portion of the site. Epic fail!
But is it, really?
A common knee-jerk reaction is to stop testing and go back to the drawing board. Obviously what you proposed isn’t solving the problem, so why not get back to working on a new solution right away? Stakeholders might even directly request that you start over–but that would be a mistake.
In digital product development, we test solutions to gather as much data as we can, so we understand not just that the solution isn’t successful, but why. Without this information, your team will be ill-equipped to re-address the problem when you do go back to the whiteboard. Failure doesn’t mean “you failed.” It means you are one step closer to success. In this way, failure is simply a set of data that helps you make decisions down the line.
The Scientific Approach
To set up your team to treat failure like data, start looking at it through the lens a scientist would use.
In many ways, the testing scenario we described is no different from a scientist formulating a hypothesis, running an experiment, and reviewing the results. In our example, the outcome didn’t match the hypothesis. If this happened to a scientist, the takeaway would be that they’re doing their job well, as long as they capture the data about why the hypothesis was wrong. It’s not out of the ordinary; it’s expected and necessary.
The key is framing a failure as an informative outcome rather than a negative one. Then you’ll need a game plan for how your individual work will be affected by that information. Specifically, you’ll need a process for gathering the data, and a plan for how you intend to turn around a new iteration.
Ideally, you communicate this process and plan at the start of a project or phase–to your stakeholders and the team. It will keep everyone aligned on the project goals and the problems to solve. This will also help prevent the team’s morale from taking a hit if an effort does fail.
If you find yourself in the middle of a project without this shared understanding, consider carving out an opportunity to introduce the concept. It will be even more important to have a game plan to deal with failure in this scenario, since you are mid-flight.
A final thought: Failure is only useful when you’re able to react to it and make changes. Keep this in mind when you’re working on your deliverable timeline. Don’t make the mistake of introducing the concept of failure as data without leaving enough time to iterate.