One of my favorite movie quotes is “If you don’t have anything nice to say, come sit by me,” from Steel Magnolias. It aptly describes the work I’ve been doing over the last four-plus years. I founded Improve International out of my frustration with sustainability problems. Partly out of curiosity about the magnitude of the problem, I began collecting information about the “whats” and “hows” of failures in the water and sanitation world. I was unsure how my peers would receive making these failures public, but it turns out the failure pages are our website’s most visited.
Fortunately, I am not alone in talking publicly about failures in our sector. In December 2015, I participated in an entertaining event called FailFest. I was nervous because the organizer encouraged the speakers to “be creative.” I’m glad that instead of my original idea to do a presentation à la Jimmy Fallon’s “History of Rap,” I went with a nerdier approach. Tongue firmly in cheek, I talked about how the HBO show Game of Thrones reminds me of lessons not learned in international development. I think about 25% of the audience had seen the show; the rest just had to wonder about the jokes referencing dragons and murderous weddings. Several other dynamic speakers presented their failures in international development. One brave soul even sang a song! While it was refreshing to talk about and listen to the real challenges facing our sector, I left wanting more. I wanted to be able to say that I had gone back and done something about my failures – not just learned something and moved on. I wanted to hear examples of how others had done it, too.
There is a much-needed movement toward making monitoring, evaluation, and learning standard components of program design, but in many cases they remain removed from the reality of program implementation. In the past few months I’ve been reading about a lot of water and sanitation “lessons learned” in documents spanning the 1970s to 2015. It is demoralizing to recognize that we are “learning” the same lessons over and over again. For example, in 1980, Feachem wrote, “users will have sound, and sometimes strong, opinions about how the new facility should be designed. This design resource can only be tapped by allowing community participation in the design process.”(1) Thirty-four years later, in 2014, WaterAid found “A low-quality toilet is an embarrassment for the family…people have a strong desire for an ‘ideal’ water-based toilet.” This is not intended to pick on WaterAid: several other organizations have been re-learning similar lessons.
Why do the same “lessons” keep popping up in evaluations? I think it’s because addressing evaluation findings isn’t built into the process of implementation. The incentives in international development seem to be more focused on doing than on learning from what we did right or wrong.
What does it take to move from learning to changing? I recently attended a talk by Paul Brest, the former president of the Hewlett Foundation. My big takeaway was that you should only assess progress if you are going to act on it and are open to changing your organization’s behavior. Marilyn Darling, who created the field of emergent learning, echoes this: “Learning from failure requires the difficult task of changing deeply rooted habits of thinking, decision-making, and interacting.”(2) In a recent call, Darling told me, “Evaluation should be the beginning of the sentence, not the end.” Too often, evaluations simply mark the end point of a program or project, with no built-in next step for using the lessons learned to modify current programs. They should have one. And anyone reviewing an evaluation should ask what is on their plate right now that its findings should change.
We don’t need to wait to learn from our own mistakes. Not only can we learn from the many lessons shared by other implementers and researchers (if you can’t find any, come sit by me, or email me), but we can also use this information to predict, and thus prevent, what will go wrong. “[I]magining that an event has already occurred increases the ability to correctly identify reasons for future outcomes by 30%.”(3) Perhaps all programs should start with a “premortem,” which Klein described in the Harvard Business Review as the hypothetical opposite of a postmortem:
“A postmortem in a medical setting allows health professionals and the family to learn what caused a patient’s death. Everyone benefits except, of course, the patient. A premortem in a business setting comes at the beginning of a project rather than the end, so that the project can be improved rather than autopsied. Unlike a typical critiquing session, in which project team members are asked what might go wrong, the premortem operates on the assumption that the ‘patient’ has died, and so asks what did go wrong. The team members’ task is to generate plausible reasons for the project’s failure.”(4)
In my FailFest talk, I said that I believe strongly that learning about problems through monitoring and evaluation should not just inform changes in future programming, but also lead to resolution of those problems. A nice example of this is ACF’s learning review, which shares problems they found during monitoring and evaluation AND what they did about them, rather than just detailing “lessons learned.” For example, an evaluation confirmed substandard construction work that led to non-functional hand pumps. Interviews with project committee members and beneficiaries identified staff negligence in supervision of contractors as a root cause. Not only did ACF reconstruct faulty hardware and infrastructure where necessary, but they also released some staff members. Regular physical verification of hardware at several stages of program implementation has now been integrated into ACF’s standard process. For organizations that are ready to address problems they have found through evaluation, we’ve developed some guidelines for resolution of problems with water systems, many of which apply to other international development activities. Has your organization done a pre-mortem exercise? Have you resolved problems found through evaluation? Let us know.
1. Feachem, R. G., 1980. “Community Participation in Appropriate Water Supply and Sanitation Technologies: The Mythology for the Decade.” The Royal Society, Vol. 209, No. 1174, More Technologies for Rural Health (Jul. 28, 1980), pp. 15–29. Available at: http://www.jstor.org/stable/35337
2. Darling, M.J. and Smith, J.S., 2011. “Lessons (Not Yet) Learned.” The Foundation Review, Vol. 3, Iss. 1, Article 9. Available at: http://scholarworks.gvsu.edu/tfr/vol3/iss1/9
3. Research conducted in 1989 by Deborah J. Mitchell of the Wharton School, Jay Russo of Cornell, and Nancy Pennington of the University of Colorado, cited in Klein’s Harvard Business Review article: https://hbr.org/2007/09/performing-a-project-premortem
4. Klein, Gary, 2007. “Performing a Project Premortem.” Harvard Business Review, September 2007. Available at: https://hbr.org/2007/09/performing-a-project-premortem