This workshop will be presented in French.
This training workshop addresses the different conceptualizations of evaluation use, the process of formulating public development policies, and the factors at play in making an evaluation more influential. Participants receive precise and systematic guidelines on how to take obstacles into account and facilitate evaluation use during the evaluation planning phase. Participants will also learn how to develop an explicit policy-influence plan for every evaluation conducted, in order to increase the likelihood that those evaluations will be used by the target audience, in particular decision-makers within government.
Multilateral development banks (MDBs) undertake interventions in developing countries through both the private and public sectors. MDB support to the public sector is still dominant, although private sector interventions have grown steeply in recent years. While public sector operations are often initiated by the MDBs in cooperation with national or local governments, private sector interventions involve corporate sponsors who control their project initiatives. The relationship between sponsors and the MDB is often long-term, as indicated by the client-oriented model and strategic intent of the two private sector-specialized institutions among the MDBs: the International Finance Corporation (IFC) and the European Bank for Reconstruction and Development (EBRD). The financial instruments used to support private sector development are mostly of a short- to medium-term nature. This workshop discusses the specificity and dynamics of private sector evaluation, highlighting the methodological approaches and evaluation practices that MDBs use for this type of operation at the institutional and project levels.
The effectiveness of this universe of private sector interventions should not be judged by financial return alone. Investment operations certainly entail a profitability angle, but the rationale for public sector participation in supporting them rests on their broader social returns. In other words, institutions intervening in this space do so with two bottom lines in mind: (i) financial and (ii) economic/social/environmental. For a view on the first, the market may suffice; for the combined effect, evaluation is indispensable.
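The two bottom lines can be made concrete with a small numerical sketch. All figures below are hypothetical, chosen only to illustrate how a project can fail a purely financial test while clearing the combined economic/social one:

```python
# Illustrative sketch with hypothetical figures: the same project evaluated
# on its financial bottom line alone, and on the combined bottom line that
# also counts social/environmental benefits not priced by the market.

def npv(flows, rate):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# Hypothetical project: 100 invested up front, 30 per year for 4 years.
financial_flows = [-100, 30, 30, 30, 30]

# Assume (for illustration) social/environmental benefits worth 10 per year.
economic_flows = [-100, 40, 40, 40, 40]

rate = 0.08  # assumed discount rate

print(f"Financial NPV: {npv(financial_flows, rate):.1f}")  # slightly negative
print(f"Combined NPV:  {npv(economic_flows, rate):.1f}")   # clearly positive
```

The market observes only the first number; capturing the second is what the evaluation function is for.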
The workshop will start with a presentation on methodologies used in private sector evaluation and compare them with public sector evaluation methodologies and practices. In this respect, more than 20 years of experience in developing good practice standards in the Evaluation Cooperation Group (ECG) will be presented. To make the workshop as interactive as possible, two case studies will be presented. The main themes of the case studies will be the evaluation of financial intermediaries and evaluating direct equity investments. Finally, as learning from experience is one of the main features of development evaluation, the workshop will discuss adaptive learning as an institutional strategy.
Interventions are theories and evaluation is the test. This well-known dictum is indicative of an influential school of thought and practice in evaluation, often called theory-driven or theory-based evaluation. Although the approach has been around for more than four decades, over the last decade theory-based evaluation has received new impetus and has become part and parcel of the toolkit of program evaluators across the globe.
The past decade has also seen a dramatic increase in impact evaluation debates and practices. While theory-based evaluation has often been cast as an alternative to quantitative counterfactual-based impact evaluation, in practice the two can reinforce each other. At the same time, the scope for applying different expressions of theory-based evaluations is much broader than impact evaluation only. The workshop will address the following main themes:
1. What is theory-based evaluation and why is it important?
2. What are useful principles for reconstructing a program theory?
3. How can we apply theory-based evaluation in practice?
After this course, participants will have developed an initial (but sound) understanding of the role of theory in evaluation and of how to apply theory-based evaluation in practice.
- Short interactive lectures
- Group exercise and presentations on the basis of an empirical case
Course level: Beginning/intermediate
This workshop will take place in Spanish.
The gender perspective has been present at the highest level since the moment countries adopted the declaration on the SDGs. Efforts have since been made to define indicators and targets in that direction; however, these efforts are still insufficient. This workshop aims to exchange reflections, practices and a critical reading of the indicators and methodologies that could contribute to a more intensive use of the gender perspective in SDG evaluation and monitoring processes in the coming years.
This workshop is a simulation of running an evaluation unit that supports regional development programs. Participants learn through experience different ways of designing credible studies and effective strategies for disseminating results to different policy audiences.
This workshop is designed for seasoned development evaluators and government officials responsible for commissioning and supervising evaluations.
Workshop content: Working with a simulated case study, participants spend 60% of the workshop in small-group exercises as a means of working through the Outcome Harvesting steps:
1. Design the Outcome Harvest with key evaluative questions based on the principal uses of the primary users of the process and findings.
2. Review documentary material to draft potential outcome descriptions of who changed their behavior, how the intervention contributed and other relevant data such as the significance of the outcome.
3. Engage with human sources of information to complete outcome descriptions and formulate additional ones.
4. Substantiate the veracity of select outcome descriptions with independent third parties and deepen understanding.
5. Analyze and interpret the outcome information to answer the evaluation questions.
6. Support the use of the Outcome Harvest findings.
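The outcome descriptions drafted in steps 2–4 lend themselves to a simple structured record. The sketch below is not an official Outcome Harvesting template; the field names and the sample outcome are illustrative assumptions showing the kind of information each description carries:

```python
# A minimal, hypothetical record for one harvested outcome (steps 2-4),
# plus a trivial step-5-style analysis over the collected harvest.

from dataclasses import dataclass, field

@dataclass
class OutcomeDescription:
    actor: str             # who changed their behaviour
    change: str            # what they now do differently
    contribution: str      # how the intervention plausibly contributed
    significance: str      # why the change matters
    substantiated: bool = False  # set True after third-party review (step 4)
    sources: list = field(default_factory=list)  # informants and documents

harvest = [
    OutcomeDescription(
        actor="District health office",
        change="Adopted community scorecards in quarterly planning",
        contribution="Programme trained staff and piloted the tool",
        significance="Citizen feedback now shapes budget decisions",
        sources=["interview: planning officer", "annual plan"],
    ),
]

# Step 5: analyse and interpret, e.g. flag outcomes still to be substantiated.
pending = [o for o in harvest if not o.substantiated]
print(f"{len(pending)} outcome(s) awaiting substantiation")
```

Keeping each outcome in one structured record makes the substantiation and analysis steps straightforward to track across a large harvest.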
Many evaluators highlight challenges in evaluation use. Improving evaluation utilization is a task that is often far larger than the individual evaluator; it is a systems issue. It is important for evaluation commissioners and managers to move towards systematic evaluation practice that meets the demands of the organization and increases utilization of evaluation findings.
Evaluation system diagnostics are helpful to (re)align evaluation practice with demands from across different political and organizational levels and thereby contribute to improving the utilization and relevance of evaluations. This workshop presents a set of diagnostic tools and concepts that seek to identify different demands for evaluation.
This workshop introduces a diagnostic toolset that has been developed through eleven analyses of government evaluation systems in Africa, through application in international NGOs, and through evaluation systems development in the UK Government. These tools are designed to answer three questions:
(i) What is the value of evaluation in this context?
(ii) What demands emerge from different stakeholders?
(iii) What is the relevance of the current evaluation systems in responding to these demands?
Answers to these questions are highly important for the development of evaluation systems, as they inform refinement of the political and technical aspects of the evaluation system, namely: policy, incentives, procurement, competencies and quality assurance mechanisms.
The workshop presents a range of tools and concepts that can identify new opportunities for evaluation. In considering the three questions above, participants will engage directly with these tools and concepts.
What should theories of change look like when programmes address climate change mitigation and adaptation? Why are they important? What steps should one take to develop them? How can climate change and environment programmes start planning for good evidence right from the beginning? Participants will learn some interesting ways in which programme designs may be used creatively to support evaluations.
More details coming soon!
This workshop will focus on a range of new technological tools and examine how they can be used to improve applied research and program evaluations. Specifically, we will explore the application of free or inexpensive software to engage clients and a range of stakeholders, collect research and evaluation data, formulate and prioritize research and evaluation questions, express and assess logic models and theories of change, track program implementation, provide continuous improvement feedback, determine program outcomes/impact, and present data and findings. Participants will be given information on how to access tools such as crowdsourcing platforms, Geographical Information Systems (GIS), data visualization, and interactive conceptual framing software to improve the quality of their applied research and evaluation projects.
The 2030 Agenda for Sustainable Development puts forward “a plan of action for people, planet and prosperity” and “seeks to strengthen universal peace in larger freedom” through strategic partnerships. It includes a vision and principles, a results framework of global SDGs, a framework for means of implementation, and a follow-up and review mechanism.
This means evaluation should play a crucial role in supporting effective and efficient SDG implementation. Evaluation will offer evidence-based learning on how policies and programmes delivered results and what needs to be done differently. The main principle of the 2030 Agenda is that no one should be left behind. The follow-up and review mechanisms also call for inclusiveness, participation and ownership. This is why equity-focused and gender-responsive evaluation is needed. This transformative kind of evaluation can help countries to identify structural causes of inequalities through deeper analysis of power relationships, social norms and cultural beliefs. Integrating equity-focused and gender-responsive evaluations will provide strong evidence to ensure that voluntary national reviews of the SDGs leave no one behind.
The purpose of this one-day workshop is 1) to provide guidance on how to integrate an equity-focused and gender equality approach into national evaluation systems generally, and 2) to promote the use of equity-focused and gender-responsive evaluations to inform the voluntary national reviews of the SDGs.
It is often claimed that randomized controlled trials (RCTs) for impact evaluation are the gold standard of evaluation, because they meet scientific standards for causal inference. However, in the world of evaluation practice as well as scholarly debate, interrelated terms such as result, evidence and impact are contested concepts whose meaning is often challenged and debated by groups in different epistemological and methodological traditions. Although few would deny the relevance of credibly identifying the development impact of policies and programs and the need to shift focus to results, there is great variation and discussion around what constitutes rigorous evidence of impact and what types of methodological strategies can and should be pursued.
This workshop presents a review of the theoretical and methodological foundations of experimental (and quasi-experimental, by extension) impact evaluation in order to clarify the purposes, logic and limitations of these evaluation strategies, and offer a critical reflection on the scope and limits of impact evaluation.
For the purposes of this workshop, a pluralist perspective will be adopted in order to discuss three relevant dimensions of this debate: 1) methodological diversity and complementarity; 2) ethical and logistical considerations; and 3) the decision-making value of impact evaluation.
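The causal-inference logic behind RCTs can be illustrated with a short simulation. All numbers below are invented for illustration: with random assignment, the difference in mean outcomes is an unbiased estimate of the treatment effect, whereas a self-selected comparison confounds the effect with pre-existing differences between groups.

```python
# Simulated illustration of why randomization supports causal inference.
import random

random.seed(0)
TRUE_EFFECT = 5.0  # the effect we would like the evaluation to recover
N = 10_000

# Each unit has a baseline "ability"; outcome = ability (+ effect if treated).
abilities = [random.gauss(50, 10) for _ in range(N)]

def diff_in_means(outcomes, treated):
    t = [y for y, d in zip(outcomes, treated) if d]
    c = [y for y, d in zip(outcomes, treated) if not d]
    return sum(t) / len(t) - sum(c) / len(c)

# RCT: treatment assigned by coin flip, independent of ability.
rct_treated = [random.random() < 0.5 for _ in range(N)]
rct_outcomes = [a + TRUE_EFFECT * t for a, t in zip(abilities, rct_treated)]
print(f"RCT estimate: {diff_in_means(rct_outcomes, rct_treated):.2f}")

# Self-selection: higher-ability units opt in, so the naive comparison
# mixes the treatment effect with pre-existing group differences.
sel_treated = [a > 50 for a in abilities]
sel_outcomes = [a + TRUE_EFFECT * t for a, t in zip(abilities, sel_treated)]
print(f"Naive (self-selected) estimate: "
      f"{diff_in_means(sel_outcomes, sel_treated):.2f}")
```

The RCT estimate lands near the true effect of 5, while the self-selected comparison badly overstates it; this gap is precisely the selection bias that randomization is designed to eliminate.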