Impact Evaluation Makes Good Sense!
In an interview with BW Businessworld, Dr. Emmanuel Jimenez who recently moved to 3ie after 30 years at the World Bank, talks about the role of impact evaluation in assisting policy makers
Dr. Emmanuel (Manny) Jimenez is Executive Director of the International Initiative on Impact Evaluation (3ie). He came to 3ie early in 2015 after many years at the World Bank Group where he provided technical expertise and strategic leadership in a number of research and operational positions including as director of the bank’s operational program in human development in its Asia regions from 2000-2012. Before joining the bank, Dr Jimenez was on the economics faculty at the University of Western Ontario in London, Canada. He received his Ph.D. from Brown University.
NM: You recently moved to India after 30 years at the World Bank to head 3ie. Why the move to 3ie?
EJ: After three decades in one institution, I wanted a change, preferably one that could leverage my experience at the World Bank. 3ie seemed like a great fit. It's an international non-profit organization founded 8 years ago to advocate for and support rigorous evaluations of the effectiveness of development projects and programs. 3ie has its largest office in New Delhi, since its work focuses on emerging economies.
NM: How would you characterise the relationship between impact evaluation and policy and decision making?
EJ: In my experience, decisions about how to design and implement policies that are meant to improve the lives of the poor are better made when they are informed by evidence on what works, for whom and why. Impact evaluations show whether the desired outcomes were achieved, and they also measure the effect caused by the intervention. Without such evidence, decisions are often based on incomplete information or even just anecdotes. This is risky, since untold amounts of scarce resources could be poured into programs that may not be working at all. It's critical to know whether programs need to be scaled up, if they are working, or adapted, or perhaps even dropped, if they are not.
NM: Can you provide us with a real world example?
EJ: We all know that air pollution is a huge problem in India. Emissions from industrial plants are one source of the problem which is why they are regulated. Regulators in states such as Gujarat rely on data from third party auditors hired and paid directly by the firms themselves. To check whether or not the potential conflict of interest in this system affected reporting, researchers from J-PAL, working with the Gujarat Pollution Control Board (GPCB), compared the status quo with an alternative system - one in which the auditors were randomly assigned to, rather than chosen by, firms which financed the audits from a pooled fund. Results indicated that the alternate system greatly increased the accuracy of auditor reporting. Moreover, the plants in the alternate reporting scheme reduced their pollution emissions compared to those in the status quo. The GPCB then reconsidered the way that it monitors emissions. According to the researchers, as of 2015, new guidelines require that auditors be randomly assigned to plants and have their work checked for accuracy.
NM: Are governments open to having their programmes evaluated?
EJ: As the example in Gujarat showed, many governments are. But this is far from universal. All of us are at least a little bit uncomfortable with being evaluated and getting feedback. Governments are just the same, if not more so, especially if the evaluation is of a programme that has already been scaled up or of a policy that is widely used. Much time, effort and money would already have been invested, and it would take some amount of bureaucratic courage to be open to such an evaluation. Moreover, decision-makers are often under pressure to make choices in less time than it takes for evaluation results to become available. We are aware of these concerns, and in many cases our teams are given a long 'preparatory period' to ensure this engagement and, consequently, ownership.
NM: There is commonly a negative association with evaluation amongst organisations and bureaucrats. What needs to change?
EJ: There can be apprehensions regarding evaluation when it is used primarily as an accountability tool that punishes programme implementers rather than supporting them to improve. Evaluations are better received if they are seen primarily as a learning tool. If you shift the core question from 'Did your programme work?' to 'What are the ways to improve your programme?', then people, organisations and bureaucrats feel less threatened by evaluation and may even see it as a way to come up with a better product. We are now focusing our impact evaluation tools on answering the 'how' question, i.e. understanding how programmes should be implemented.
The communication between evaluators and implementers or decision-makers also needs to change. Evaluators need to be able to communicate scientific evidence in lay terms. They also need to engage with government, and in fact all key stakeholders, early in the evaluation and get their inputs on whether we are asking the right questions and producing results that can actually be used.
NM: What is the right mix of evidence and political will for policy shifts?
EJ: Let me give you the example of Mexico, which wanted to replace an inefficient food (tortilla or flatbread) subsidy programme with one that directly transferred cash to poor families in exchange for their investing in the education and health of their children. These conditional cash transfers were innovative at the time, and the government embedded a rigorous impact evaluation to measure the effects on children's education and health status as well as the welfare of the families. The evaluation, which showed positive results, is considered to be a critical factor in scaling up the programme and allowing it to survive changes in government. (These governments kept the programme but changed its name - it started as PROGRESA, became OPORTUNIDADES and is now PROSPERA.) Evidence is a powerful tool in ensuring smooth transitions for good policies when governments change. The scheme also played an important role in getting other countries in the region, and eventually around the world, to adopt similar strategies in combating malnutrition, such as Brazil with its Bolsa Família scheme.
NM: What can we expect from 3ie in the coming months?
EJ: 3ie has supported 34 impact evaluations in India. In the coming months, more evaluations will be conducted in the areas of agricultural risk insurance and sanitation. We are also active in advocating for the use of evidence in decision-making here, and we invite those interested to visit our website and come to our various events, such as Delhi Evidence Week, being held this week, and our monthly Delhi Seminar Series.