The idea of evidence-based work is important to OEF: one of our core values is a commitment to be “relentlessly empirical” in the design and assessment of our strategies and impact in peacebuilding. In this, we’re part of a larger discussion around evidence-based practice in peacebuilding that can sometimes be contentious. In an ongoing effort to better understand the field, OEF and the Alliance for Peacebuilding released a report last month detailing the findings of a survey of professional peacebuilders on their ideas about evidence-based practice. The results suggest that, despite the debate, the field as a whole has a fairly well-developed understanding of what evidence is needed, where it exists, and where it’s lacking.
The basic idea of evidence-based practice isn’t exciting - it’s simply learning what works and doing that. In some form, it has been the basis of human society since the foundation of civilization. Humans used willow bark to treat pain for thousands of years before its key compounds were identified and aspirin was synthesized, and they used aspirin for almost a hundred years after that before its mechanism was understood. The tension, of course, comes in when a field as a whole debates the idea of evidence-based practice, because that debate will by its nature create winners and losers. The details of what kinds of evidence count, and who gets to make those decisions, become important, and they become political. This is particularly true once resources come into play, and in the peacebuilding field the recent (and largely positive) emphasis from funders on improved monitoring, evaluation, and learning means that resources are directly in play.
Because of that, it’s perhaps not surprising that the tenor of the discussion around evidence can at times be a little tense, something exacerbated by the rise over the last 10-15 years of the “randomistas,” proponents of experimental methods (or “Randomized Controlled Trials”) as the pre-eminent type of evidence supporting evidence-based practice. This approach turns the idea of evidence into a methods debate - essentially an argument about the philosophy of knowledge, but one with real monetary impact. If a funder requires a specific form of evidence, then programs without that evidence will struggle to get funding. That raises the stakes of a discussion where there are already fairly strong (and opposing) perspectives. So it’s little surprise to see arguments around evidence-based practice in peacebuilding staked out in strong terms - from the founding Executive Director of the International Initiative for Impact Evaluation, an RCT proponent, arguing that “Most interventions don’t work, most interventions aren’t evaluated and most evaluations are not used. As a result, billions of dollars of money from governments and individual donations is wasted on ineffective programmes,” to arguments that “the limitations of randomista economics have given rise to a particular way of thinking characterized by piecemeal analysis, ad hoc resort to theory, indifference to history and context, and methodological fundamentalism.”
Against that backdrop, we went into our research expecting to find passionate disagreement between different sectors of the peacebuilding field around evidence-based practice, and we largely didn’t. Instead, we found significant consensus. Respondents to our survey valued the idea of evidence-based practice in the abstract, and in practice they defined evidence not, for the most part, by any one specific method, but by multiple sources of evidence using multiple methods. In fact, about twenty percent of the responses rejected the idea that methods as such mattered much at all, and instead argued for quality evidence more broadly: evidence that grapples effectively with bias and the various sources of misanalysis and misinformation that can easily crop up in assessments and research.
Using that definition, participants also felt that the field has a good understanding of what peace looks like: in their view, there is a good amount of evidence describing, in general, the conditions necessary for sustainable peace. In contrast, only three of the conditions asked about had evidence suggesting we know how to deliver them. That is, the field may know what peace looks like - but how to get there from here is, for the most part, something that needs more research. It’s significant, perhaps, that two of the three items where people thought evidence existed were part of the Women, Peace, and Security (WPS) Agenda - increasing women’s engagement in economic and political life broadly, and in peacemaking specifically. There is an argument that in no conceptual area is the gap between what the world knows needs to be done (and how to do it) and where we are as a global society as large as it is with the WPS Agenda, and this survey supports that argument.
We went into this survey expecting to highlight differences within the field for deeper discussion. Instead, we found that the field has fairly strong agreement on core issues: we care about evidence, and we want good evidence of all kinds to inform our work. That finding gives us some optimism that we have a strong foundation to build from as a field.
Read more about the survey and its findings here.