
How Not to Learn What Works: The Scourge of Results Frameworks


Author: Howard White, Director, Evaluation and Evidence Synthesis, Global Development Network

The World Bank has announced a new set of indicators to measure its progress in reducing poverty and related goals. This is the work of the results movement, which promotes the use of outcome monitoring to measure impact. As I argued over a decade ago in my blog post "The unhappy marriage of impact evaluation and the results agenda", so-called “results” cannot be used for this purpose. Outcome monitoring does not tell us what difference an intervention makes. Only impact evaluation can do that.
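To spell out why, consider a minimal formalisation (this is the standard textbook definition of impact, not anything specific to the Bank's system). The impact of an intervention on an outcome Y is the difference between the outcome with the intervention and the counterfactual outcome without it:

Impact = Y(with intervention) − Y(without intervention)

Outcome monitoring observes only the first term, typically as a before-and-after comparison, Y(after) − Y(before), which bundles the intervention's effect together with everything else that changed over the same period. Estimating the missing counterfactual term is precisely what impact evaluation designs are for.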

In fact, "unhappy marriage" was an overstatement. The results movement has made no attempt at all to become engaged with impact evaluation, let alone arrange a marriage, happy or otherwise. They are not even dating.

At an OED (now IEG) conference in 2003 I proposed the triple-A criteria for agency-wide performance measurement systems (conference proceedings here, open access version of my paper here). These are (1) aggregation: can the numbers be added up across projects? (2) alignment: do the indicators align with the overall goals and objectives of the organisation? and (3) attribution: can what the system is measuring be attributed to the actions of the organisation? With the widespread adoption of the SDGs and similar goals, most systems do well on alignment. With the widespread use of KPIs in results frameworks, most systems do well on aggregation. But they fall flat on attribution.

As I've presented elsewhere, the agencies which first adopted this approach to “results measurement”, such as USAID and DFID, have long abandoned it in favour of impact evaluation. Oxfam UK spent two years preparing a comprehensive results framework, but then abandoned it in favour of a programme of impact evaluation. IFAD has gone the furthest, using a representative sample of impact evaluations to assess the effect of its programme. Many of these initiatives were described in a special issue of the Journal of Development Effectiveness.

The widespread use of results frameworks has become a substantial problem. In many agencies, monitoring and evaluation staff spend their time collecting outcome-level data which are essentially useless for management purposes. Substantial time from highly paid World Bank staff must have gone into developing the new results system. The opportunity cost of that time is large; it could have been better spent elsewhere.

But the real cost is that project managers, the Board of the World Bank, and member countries are lulled into thinking that we are measuring what difference we are making, when the system is telling us nothing of the sort. As I've argued here, monitoring needs to be rescued from the results agenda and returned to its proper role of monitoring inputs, activities and possibly outputs, to provide information that management can actually use. And more systematic use needs to be made of impact evaluations to assess agency performance and learn what works.

Thankfully, many agencies are turning to systematic reviews to assess the effects of programmes. These often fall short of the requirements of a full systematic review à la the Campbell Collaboration, for example by omitting critical appraisal or proper statistical meta-analysis. And there is no good reason to restrict these reviews to their own projects. Nonetheless, the findings from such reviews are a far more accurate guide to ‘how we are doing’ than outcome monitoring will ever be.
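To illustrate what proper statistical meta-analysis adds (a generic sketch of the standard inverse-variance method, not drawn from any of the reviews mentioned above): each included study i contributes an effect estimate d_i with standard error SE_i, and the pooled effect weights more precise studies more heavily:

Pooled effect = Σ(w_i × d_i) / Σ(w_i), where w_i = 1 / SE_i²

A review that simply counts positive and negative studies ignores these weights, which is one of the ways such reviews fall short of the Campbell standard.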