In the movie “The Big Kahuna,” the character played by Danny DeVito tells a young salesman that he does not have any character, “for the simple reason that you do not regret anything yet.” “Are you saying,” the young man asks, “that I won’t have character unless I do something that I regret?” “No, Bob,” DeVito answers, “I’m saying that you’ve already done plenty of things to regret. You just don’t know what they are.”
In the world of development, many, if not most, of the lessons we have learned come from success stories. What about learning from attempts that have failed to deliver? If the old adage that “failures are stepping stones to success” is true, surely there are just as many failures out there with valuable lessons for us. Why do we not have a compendium of failures—the well-intentioned ideas that did not quite live up to expectations?
This question would be irrelevant if it were possible to move ahead without trying anything new or exciting. As an applied microeconomist, I am encouraged that the global community of governments, donors, and scholars is actually doing exactly the opposite. Innovative work is being done all over the world. But by definition, the more innovations we attempt, the more failures we should expect to see—even accounting for the fact that only ideas that succeed in making it past a funding review are implemented.
There are many more failures out there than are reported publicly, and each is a lost opportunity to learn. Why are we missing these chances? From my recent experience, I believe it is because of two sets of reasons.
A good idea that failed to deliver
A paper that I co-authored (published in Health Affairs in October 2016) reports the results of an evaluation of a large program that used telemedicine and social franchising models to deliver health care in rural Bihar, India. Social franchising, similar in many ways to commercial franchising, operates in sectors such as public health; despite its rapid growth, there is little evidence on its impact at the population level. We studied World Health Partners’ SkyHealth program, which aimed to reduce childhood diarrhea, pneumonia, and associated outcomes. World Health Partners had already won many global social entrepreneurship awards for this project, and the Bill and Melinda Gates Foundation funded SkyHealth as part of a major health sector initiative.
My co-authors and I collected data to evaluate the program over a four-year period, also funded by the Gates Foundation. We found that the program failed to improve any of its target outcomes—rates of appropriate treatment or disease prevalence—despite the (premature) global recognition.
The fear of negative publicity
In the media attention that followed publication of our paper, we were asked many questions. Hadn’t the Gates Foundation seen this coming? Why had they wasted millions? We thought these were the wrong questions to ask. The right question is “Why do other international development agencies spend as many or more millions without robust independent evaluations to accompany such investments?” As champions of evidence-based policy, we were encouraged to see that the Gates Foundation had been transparent about the study’s results. Despite the risk of potentially damaging media coverage, they had supported our efforts to present and publish our findings.
The experience made me realize how exceptional this was. The fear of negative publicity is one likely reason that we do not see donor agencies publicizing attempts that failed to deliver results—i.e., the things they regret.
Our project is one of the few instances in which the Gates Foundation funded a large intervention while concurrently funding a large, rigorous, independent evaluation to estimate the intervention’s impact. This was an ideal evidence-based-policy setting. We learned that while the central ideas behind telemedicine and franchising had merit, the program’s failure could be attributed in large part to poor implementation and to plans based on untested assumptions about demand for the services the program delivered.
Ideally, a “promising idea” would be one that has shown clear evidence of efficacy before it is implemented at scale, and that has been tested for effectiveness, but this is not always possible. The model we’d want to see replicated widely in development is one where donors:
Bear the financial and reputational risk of investing in promising ideas implemented at a reasonably large scale;
Ensure that the key underlying assumptions are tested along the way; and
Support robust independent evaluations of the project.
For such a model to become the norm, we need an environment where development institutions are not seen as having “failed” for taking a chance. They should be seen as failures if they don’t regret anything that they have done.
The publication incentive
There are academic reasons as well that could explain why we do not hear about failures more often. It is tough to publish negative or null results from an intervention in high-profile journals. Publication bias occurs when editors, reviewers, researchers, and policymakers have strong priors about the impact of interventions and lean in favor of statistically significant results that confirm those priors. Priors are, by definition, less firmly grounded than tested assumptions, yet they can skew what gets published just as powerfully. At least in the field of global health, in which I work, broadly held priors and the biases they help perpetuate are pervasive. Interventions are often celebrated for their promise rather than their actual impact on the lives of people. Evidence of null or negative impact is like bringing a wet blanket to the celebration. But that kind of honesty is exactly what we need.
For an excellent example of efforts to learn from both successes and failures in global health, I recommend Millions Saved by Amanda Glassman and Miriam Temin at the Center for Global Development.
As we await more lessons from experience, we need to be careful not to create disincentives for governments, development agencies, and donors to evaluate their investments and tell others what worked and what didn’t.