The Future Is Now: Evaluation and Impact in the Next Two Decades
While there is still much we do not know about how to achieve and sustain peace, it is clear that peace is necessary for a community to thrive. Achieving it, however, is not as simple as increasing investment in peacebuilding initiatives. Investment is a necessary first step, but hardly a sufficient one. Building peace is a complex, long-term, multi-faceted endeavor, and making better program design, investment, and policy decisions requires more reliable information. Monitoring and evaluation (M&E), when resourced adequately, practiced well, and integrated at every level of program design and implementation, decision-making, and policymaking, can show what did or did not happen and improve decisions and practice the next time around.
For instance, CDA Collaborative Learning examined a collaborative effort among CARE International, other local and international NGOs, municipal governments, and international organizations to determine why certain communities saw a marked absence of violence during the widespread riots of 2004 in Kosovo. Contrary to what many expected, the research found that interethnic communities with greater interethnic interaction did not experience less violence. In fact, prevention strategies that focused on promoting multi-ethnicity had the unintended consequence of intensifying divisions. Unexpected insights such as these into the effects of peacebuilding programs highlight the critical importance of reflection through monitoring and evaluation.
However, evaluation practices, and their ability to provide evidence in peacebuilding, are still nascent. Over the past two decades, the peacebuilding community has made significant strides in its evaluation practice: making evaluation a requirement of program funding, developing technical guidance and tools to build evaluation capacity, and creating structures and multi-sector partnerships that foster shared learning and greater methodological rigor. Now, a variety of global organizations and agencies have designated 2015 the International Year of Evaluation. With stakeholders from around the world highlighting the need for better evaluation, it will serve as a benchmark year against which future progress in measurement and learning can be assessed.
When I asked a variety of evaluation experts from around the world where they thought the field would be, and should be, in its evaluation practice in another two decades, I was struck by how unremarkable their answers were. Most saw the future of evaluation as simply a more consistent application of what we already know to be best practice today:
› Fully integrated stakeholder and civil society input and involvement at every stage of the program design, evaluation, and learning process. Today, there is still a lack of genuine local representation in program design and evaluation. For instance, women's participation is one of the key aspects of the peace process in Afghanistan, and female leaders from civil society organizations around the country are representing Afghan women in the peacebuilding process. As important as their role is, do these highly educated and experienced women represent the average woman? If the answer is no, how will a more truly representative group of women be integrated into programs and evaluation? Ideally, reflection and strong monitoring and evaluation would highlight programming gaps like this and allow for changes to be made.
› More equitable relationships between the external evaluators who come in to assess a program and internal program staff. This trend is already beginning with the growing popularity of participatory methods such as developmental evaluation and action evaluation, which are particularly useful in highly dynamic conflict contexts, where rapid programming adaptation is needed, and in other complex situations. Online access to evaluation trainings, networks, resources, mentorship, and guidance on how to use evaluation data to make decisions will become more readily available regardless of one's location or funds, making exchanges between internal staff and external evaluators somewhat more even.
› Real-time, applied learning based on continuous feedback loops from ongoing programs. Evaluation tends to be practiced as a siloed activity at the end of a program. Ideally, technical steps such as baseline studies, along with systemic thinking about the complexity of the program and its context, will become a given, so that what is now called evaluation becomes embedded in peacebuilding practice.
› Good evidence will no longer be defined by the use of a handful of expensive, quantitative evaluation methodologies. Methodologies that were not created with complexity in mind, for instance those that originated in clinical settings where certain variables are held constant while others are tested, will not limit the definition and validity of evidence. Instead, evaluators will draw on a wide spectrum of more nuanced processes and methods that can be applied at various points in a program cycle.
› Evaluation findings will be used to adjust not just peacebuilding but also development and security programs, policies, and approaches, through calculated, thoughtful, stakeholder-led risk-taking. The current funding model of peacebuilding necessitates a degree of accountability: a donor has the right to know whether an organization used allocated funds as it said it would. More importantly, though, local partners and communities need to know how programs and actions led by international actors have affected their lives and their futures. As better evaluation efforts, and better analysis of those findings, lead to greater accountability to local partners, local communities will gain a greater say in adjusting programs.
As complex as this list may sound, we can start to implement these evaluation trends and practices today. We know what needs to be done, and we know how to do it. But real challenges stand in the way of this vision. Public and private funders operate under a number of pressures that create unrealistic expectations and timeframes for "successful" results and peace efforts. A scarcity of funds leads program staff to perceive monitoring and evaluation as a competitor with, rather than an enabler of, program activities. At the same time, donor requirements reinforce the impression that M&E is externally imposed rather than an integral part of good practice and learning. Consequently, many organizations fail to include M&E in personnel assessments; with neither incentives nor consequences attached to it, the perceived need for learning and feedback is low. Finally, and perhaps most importantly, the overall system is not designed to be accountable to local needs and priorities: programs are generally designed in response to donor requirements or in line with an organization's favorite methodologies.
The future of peacebuilding evaluation, and with it the legitimacy of peacebuilding practice, will rest on a shift in attitudes: one that welcomes calculated, evidence-based risks, funds good research, and encourages a redefinition of success and failure and of the consequences attached to each. A new generation of peacebuilders, funders, and policymakers brings comfort and familiarity with technology, and ever-increasing human interconnectedness gives people more options to choose from; metrics will become even more important in helping them gauge whether they made the right decision and whether they should make that choice again. Future success will depend on how well we, as a field, can face the day-to-day uncertainty of conflict zones and allow for the flexibility, responsiveness, and inclusion of local partners as leaders that effective peacebuilding requires.
With input from: Robert Berg, Founding Director of Evaluation, the US Agency for International Development and the Founding Chair of Evaluation, OECD-DAC; Andrew Blum, Vice President for Program Management and Evaluation, the US Institute of Peace; Diana Chigas, Co-Director of the Reflecting on Peace Practice Program, CDA Collaborative Learning; Vanessa Corlazzoli, Senior DM&E Manager, Institutional Learning, Search for Common Ground; Asela Kalugampitiya, International Monitoring and Evaluation Specialist, UN Population Fund, Sri Lanka; Thania Paffenholz, Senior Researcher, the Graduate Institute's Centre on Conflict, Peacebuilding and Development; Peter Woodrow, Executive Director, CDA Collaborative Learning.