The optimal rate of failure

Summary: We sometimes assume that seeing someone fail implies that they are doing something wrong, but I argue that the ideal rate at which our plans should fail is often quite high. I note that this has consequences in politics and ethics that are often underappreciated.

It was apparently George Stigler who said “If you never miss a plane, you’re spending too much time at the airport.” The broader lesson is that if you find you’re never failing, there’s a good chance you’re being too risk averse, and being too risk averse is costly. Although people have discussed this principle in other contexts (e.g. in learning and startup investing), I still think that this lesson is generally underappreciated. For anything we try to do, the optimal rate of failure often isn’t zero: in fact, sometimes it’s very, very far from zero.

To give a different example, I was having an argument with a friend about whether some new social policy should be implemented. They presented some evidence that the policy wouldn’t be successful and argued that it therefore shouldn’t be implemented. I pointed out that we didn’t need to show that the policy would be successful; we just needed to show that the expected cost of implementing it was lower than the expected value we’d get back, both in social value and—more importantly—in information value. Since the policy in question hadn’t been tried before, wasn’t expensive to implement, and was unlikely to be actively harmful, the fact that it would likely be a failure wasn’t, by itself, a convincing argument against implementing it. (It looks like a similar argument is given on pp. 236–7 of this book.)

This is why I often find myself saying things like “I think this has about a 90% chance of failure—we should totally do it!” (Also, there’s a reason why I’m not a startup founder or motivational speaker.)

The expected value of trying anything is just the sum of (i) the expected gains if it succeeds, (ii) the expected losses if it fails, and (iii) the cost of trying, with the losses and costs counting against the gains. This includes direct value (some benefit or loss to you or the world), option value (being in a better or worse position in the future), and information value (having more or less knowledge for future decisions).
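
To make this a bit more concrete, here’s a rough sketch in Python of that decomposition. The function and its parameters are just my illustrative labels for (i)–(iii), and the numbers are made up:

```python
def expected_value_of_trying(p_success, gain_if_success, loss_if_failure, cost_of_trying):
    """Expected value of attempting something, under the decomposition above.

    gain_if_success and loss_if_failure should each bundle together direct value,
    option value, and information value; loss_if_failure and cost_of_trying are
    given as positive numbers and counted against the gains.
    """
    return (p_success * gain_if_success
            - (1 - p_success) * loss_if_failure
            - cost_of_trying)

# A cheap, low-downside, high-upside attempt is worth making even at 90% odds of failure:
print(expected_value_of_trying(p_success=0.1, gain_if_success=100,
                               loss_if_failure=1, cost_of_trying=2))  # 7.1
```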

The optimal rate of failure indicates how often you should expect to fail if you’re taking the right number of chances. So we can use our estimates of (i), (ii), and (iii) to work out what the optimal rate of failure for a course of action is, given the options available to us. The optimal rate of failure will be lower whenever trying is costly (e.g. trying it takes years and cheaper options aren’t available), failure is really bad (e.g. it carries a high risk of death), and the gains from succeeding are low. And the optimal rate of failure will be higher whenever trying is cheap (e.g. you enjoy doing it), the cost of failure is low, and the gains from succeeding are high.
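
One simplistic way to cash this out, using the toy model above: solve for the highest failure probability at which a single attempt still breaks even. That isn’t quite the same thing as the optimal rate of failure across everything you try, but it shows how dramatically the tolerable failure rate swings with the costs and gains (again, the numbers are made up):

```python
def max_tolerable_failure_rate(gain_if_success, loss_if_failure, cost_of_trying):
    """Highest probability of failure at which trying still has non-negative
    expected value, using the same sign conventions as above."""
    denominator = gain_if_success + loss_if_failure
    if denominator <= 0:
        return 0.0  # no real upside relative to the downside: any failure risk is too much
    break_even_p_success = (loss_if_failure + cost_of_trying) / denominator
    return max(0.0, 1 - break_even_p_success)

# Cheap to try, mild downside, large upside: you should be failing most of the time.
print(max_tolerable_failure_rate(gain_if_success=100, loss_if_failure=1, cost_of_trying=2))   # ~0.97
# Expensive to try, severe downside, modest upside: very little failure is tolerable.
print(max_tolerable_failure_rate(gain_if_success=10, loss_if_failure=50, cost_of_trying=5))   # ~0.08
```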

If the optimal rate of failure of the best course of action is high, it may be a good thing to see a lot of failure (even though the course of action is best in spite of, rather than because of, its high rate of failure). I think we’re often able to internalize this: we recognize that someone has to play a lot of terrible music before they become a great musician, for example. But we’re not always good at internalizing the other side of this coin: if you never see someone fail, there’s a good chance that they’re doing something very wrong. If someone wants to be a good musician, it’s better to see them failing than to never hear them play.

So far, this probably reads like a life or business advice article (“don’t just promote people who succeed, or you’ll promote people who never take chances!”). But I actually think that failing to reflect on the optimal rate of failure can have some pretty significant ethical consequences too.

Politics is a domain in which things can go awry if we don’t stop to think about optimal rates of failure. Politicians have a strong personal incentive to not have the responsibility of failure pinned directly on them. We can see why if we consider the way that George H.W. Bush used the case of Willie Horton against Michael Dukakis in the 1988 presidential campaign. If a Massachusetts furlough program had not existed, Bush couldn’t have pointed to this case in his campaign. Not having any furlough program may be quite costly to many prisoners and their families, but “Dukakis didn’t support a more liberal furlough program” is unlikely to show up on many campaign ads. Now I don’t know if the Massachusetts furlough program was a good idea or not, but if politicians are held responsible for the costs of trying and failing but not for the costs of not trying, we should expect the public to pay the price of their risk aversion. (More generally, if we never see someone fail, we should probably pay more attention to whether it is them or someone else that bears the costs of their risk aversion.)

I think this entails some things that are pretty counterintuitive. For example, if you see crimes being committed in a society, you might think this is necessarily a bad sign. But if you were to find yourself in a society with no crime, it’s not very likely that you’ve stumbled into a peaceful utopia: it’s more likely that you’ve stumbled into an authoritarian police state. Given the costs that are involved in getting crime down to zero—e.g. locking away every person for every minor infraction—the optimal amount of crime we should expect to see in a well-functioning society is greater than zero. To put it another way: just as seeing too much crime is a bad sign for your society, so is seeing too little.

We can accept that seeing too little crime can be a bad sign even if we believe that every instance of crime is undesirable and that, all else being equal, it would be better for us to have no crime than for us to have any crime at all. We can accept both things because “all else being equal” really means “if we hold fixed the costs in both scenarios”. But if you hold fixed the costs of eliminating a bad thing then it is, of course, better to have less of it than more.

One objection that’s worth addressing here is this: can’t we point to the optimal rate of failure to claim that we were warranted in taking almost any action that later fails? I think that this is a real worry. To mitigate it somewhat, we should try to make concrete predictions about optimal rates of failure of our plans in advance, to argue why a plan is justified even if it has a high optimal rate of failure, and to later assess whether the actual rate of failure was in line with the predicted one. This doesn’t totally eliminate the concern, but it helps.
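
One crude way to do that last step is to compare the failure rate you actually observed with the rate you predicted in advance, allowing for the sampling noise you’d expect from a small number of attempts. A rough sketch (the numbers and the normal approximation are just for illustration):

```python
import math

def failure_rate_within_noise(predicted_rate, failures, attempts, z=1.96):
    """Crude check: is the observed failure rate within roughly 95% binomial
    sampling noise of the rate we predicted in advance? (Normal approximation;
    a sanity check, not a substitute for asking why individual plans failed.)"""
    observed = failures / attempts
    stderr = math.sqrt(predicted_rate * (1 - predicted_rate) / attempts)
    return abs(observed - predicted_rate) <= z * stderr, observed

# We predicted that roughly 70% of our 20 projects would fail; 19 of them did.
ok, observed = failure_rate_within_noise(predicted_rate=0.7, failures=19, attempts=20)
print(observed, ok)  # 0.95 False: worth asking whether the plans were worse than predicted
```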

I first started thinking about optimal rates of failure in relation to issues in effective altruism. The first question I had was: what is the optimal rate of failure for effective interventions? It seems like it might actually be quite high because, among other things, people are more likely to under-invest in domains with a high risk of failure, whether out of risk aversion, loss aversion, or whatever else. I still think this is true, but I also think that in recent years there has been a general shift towards greater exploration over exploitation when it comes to effective interventions.

The second question I had was: what is the optimal rate of failure for individuals who want to have a positive impact and the plans they are pursuing? Again, I think the optimal rate of failure might be relatively high here, and for similar reasons. But this raises the following problem: taking risks is something a lot of people cannot afford to do. The optimal rate of failure for someone’s plans depends a lot on the cost of failure. If failure is less costly for someone, they are more free to pursue things that have a greater expected payoff but a higher likelihood of failure. Since people without a safety net can’t afford to weather large failures, they’re less free to embark on risky courses of action. And if these less risky courses of action produce less value for themselves and for others, this is a pretty big loss to the world.

To put it another way: if you’re able to behave in a way that’s less sensitive to risks, you’re probably either pretty irrational or pretty privileged. Since many of the people who could do the most good are not that irrational and not that privileged, enabling them to choose a more risk-neutral course of action might itself be a valuable cause area. Investing in individuals or providing insurance against failure for those pursuing ethical careers would enable more people to take the kinds of risks that are necessary to do the most good.
