This workshop will be presented in French.
This training workshop addresses the different conceptualizations of evaluation use, the process of developing public development policies, and the factors that determine whether an evaluation gains influence. Participants receive precise, systematic guidance on how to take obstacles into account and facilitate evaluation use during the evaluation planning phase. Participants will also learn how to develop an explicit policy-influence plan for every evaluation conducted, in order to increase the likelihood that those evaluations are used by their target audience, particularly decision makers within government.
Multilateral development banks (MDBs) undertake interventions in developing countries through both the private and public sectors. Support to the public sector still dominates, although private sector interventions have grown steeply in recent years. While public sector operations are often initiated by the MDBs in cooperation with national or local governments, private sector interventions involve corporate sponsors who control their project initiatives. Sponsors' relationships with an MDB are often long-term, as the client-oriented model and strategic intent of the two private sector-specialized institutions among the MDBs, the International Finance Corporation (IFC) and the European Bank for Reconstruction and Development (EBRD), indicate. The financial instruments that support private sector development, by contrast, are mostly short to medium term in nature. This workshop discusses the specificity and dynamics of private sector evaluation, highlighting the methodological approaches and evaluation practices that MDBs use for this type of operation at the institutional and project levels.
The effectiveness of this universe of private sector interventions should not be judged by financial return alone. Investment operations certainly have a profitability angle, but the rationale for public sector participation in supporting them rests on their broader social returns. In other words, institutions intervening in this space do so with two bottom lines in mind: (i) financial and (ii) economic/social/environmental. For a view of the first, the market may suffice; for the combined effect, evaluation is indispensable.
The workshop will start with a presentation on methodologies used in private sector evaluation, comparing them with public sector evaluation methodologies and practices. In this respect, more than 20 years of experience in developing good practice standards in the Evaluation Cooperation Group (ECG) will be presented. To make the workshop as interactive as possible, two case studies will be presented, on the evaluation of financial intermediaries and the evaluation of direct equity investments. Finally, as learning from experience is one of the main features of development evaluation, the workshop will discuss adaptive learning as an institutional strategy.
Interventions are theories and evaluation is the test. This well-known dictum reflects an influential school of thought and practice in evaluation, often called theory-driven or theory-based evaluation. Although the approach has been around for more than four decades, over the last decade it has received new impetus and has become part and parcel of the toolkit of program evaluators across the globe.
The past decade has also seen a dramatic increase in impact evaluation debates and practices. While theory-based evaluation has often been cast as an alternative to quantitative counterfactual-based impact evaluation, in practice the two can reinforce each other. At the same time, the scope for applying different expressions of theory-based evaluation is much broader than impact evaluation alone. The workshop will address the following main themes:
1. What is theory-based evaluation and why is it important?
2. What are useful principles for reconstructing a program theory?
3. How can we apply theory-based evaluation in practice?
After this course, participants will have developed an initial (but sound) understanding of the role of theory in evaluation and of how to apply theory-based evaluation in practice.
- Short interactive lectures
- Group exercise and presentations on the basis of an empirical case
Course level: Beginning/intermediate
This workshop will be conducted in Spanish.
Experience has shown that empowering women helps promote countries' economic growth and development. Despite the progress achieved in recent decades, gender inequality remains an obstacle to women's full participation in economic activity, social development and public decision-making. To advance the Sustainable Development Goals, it is crucial to overcome inequality and eliminate all forms of gender-based discrimination.
A gender focus has been present at the highest level in the evaluation and measurement of the SDGs since the adoption of the new development agenda. In recent years, efforts to define indicators and targets for measuring progress towards the SDGs from a gender perspective have increased. These efforts, however, remain insufficient.
This workshop aims to exchange reflections and practices and to take a critical look at the indicators and methodologies that could contribute to a more intensive use of the gender approach in SDG evaluation and monitoring processes in the coming years. Using a participatory methodology, it will present, among other content, strategies for applying a gender approach in public policy and gender-responsive evaluation tools, with practical applications and real cases.
In a complex world, knowledge is needed to run effective public policies. However, the flow of knowledge from the experts who produce it to its users, the decision makers, is not straightforward.
A knowledge broker is a public professional who acts as an intermediary, ensuring that the findings of studies reach decision-making. Knowledge brokering equips decision makers to create evidence-based policies that are better designed and serve citizens more successfully.
The training is designed as a one-day game-based session. Participants play the role of managers within their regional evaluation units. Their mission is to help different decision-makers in successfully implementing socio-economic projects.
Participants can learn six key knowledge brokering skills:
1. Identifying knowledge needs of policy actors
2. Acquiring credible studies
3. Combining results into policy arguments
4. Reaching users with appropriate dissemination strategies
5. Delivering research results at the right moment of the decision-making cycle
6. Managing a unit and its network with limited resources
The simulation game is run in turns, each followed by detailed feedback and debriefing sessions grounded in the latest empirical research on evidence use in decision-making.
This workshop is co-organized by the Government of the Republic of Kazakhstan and the Regional Hub of Civil Service in Astana.
Within this workshop, participants will be immersed in the historical and administrative background of establishing a performance evaluation system for Kazakhstani government bodies. The Kazakhstani Evaluation System has been in operation for more than seven years, scaling from just three state bodies assessed in 2011 to more than 40 agencies now undergoing the annual assessment.
The System has developed its own unique methodological approaches; moreover, it has launched a new culture of evaluation within the government's bureaucratic apparatus. The System has proved itself an effective tool for improving governance and stimulating ministries and municipal state bodies to perform better. For instance, before the launch of the System, in 2006, citizens filed over 23 million requests and complaints about various governance issues. By 2013 the number of complaints had dropped to 10.8 million, and by 2016 to 2.3 million.
The workshop will start with a presentation on the national evaluation system of the Government of the Republic of Kazakhstan. It will also showcase the experience of countries in the region that participate in the Regional Hub of Civil Service in Astana. The workshop will provide a dialogue platform for exchanging practical experience, innovations and perspectives on elaborating national evaluation systems, and will facilitate networking and partnership building among participants.
This workshop is designed for seasoned development evaluators and government officials responsible for commissioning and supervising evaluations.
Workshop content: Working with a simulated case study, participants spend 60% of the workshop doing exercises in small groups as a means of working through the Outcome Harvesting steps:
1. Design the Outcome Harvest with key evaluative questions based on the principal uses of the primary users of the process and findings.
2. Review documentary material to draft potential outcome descriptions of who changed their behavior, how the intervention contributed and other relevant data such as the significance of the outcome.
3. Engage with human sources of information to complete outcome descriptions and formulate additional ones.
4. Substantiate the veracity of select outcome descriptions with independent third parties and deepen understanding.
5. Analyze and interpret the outcome information to answer the evaluation questions.
6. Support the use of the Outcome Harvest findings.
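The outcome descriptions drafted and substantiated in steps 2-4 can be captured in a simple record structure. The sketch below is illustrative only: the field names and example values are assumptions, not part of the Outcome Harvesting method itself.

```python
from dataclasses import dataclass, field

@dataclass
class OutcomeDescription:
    # Who changed their behavior, and what changed (step 2)
    actor: str
    change: str
    # How the intervention plausibly contributed (step 2)
    contribution: str
    # Why the outcome matters to the evaluation questions (step 2)
    significance: str
    # Independent third parties who confirmed the outcome (step 4)
    substantiated_by: list = field(default_factory=list)

    @property
    def substantiated(self) -> bool:
        # An outcome counts as substantiated once at least one
        # independent source has confirmed it
        return len(self.substantiated_by) > 0

# Hypothetical example record
o = OutcomeDescription(
    actor="Ministry of Health",
    change="adopted a community feedback mechanism",
    contribution="the project trained ministry staff and piloted the tool",
    significance="institutionalizes citizen voice in service delivery",
)
o.substantiated_by.append("independent district monitor")
print(o.substantiated)  # True once a third party has confirmed the outcome
```

Keeping each outcome as a discrete, substantiable record like this is what makes the later analysis and interpretation step tractable.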
Many evaluators highlight challenges in evaluation use. Improving evaluation utilization is a task often far larger than the individual evaluator; it is a systems issue. It is important for evaluation commissioners and managers to move towards systematic evaluation practice that meets the demands of the organization and increases the utilization of evaluation findings.
Evaluation system diagnostics are helpful to (re)align evaluation practice with demands from across different political and organizational levels and thereby contribute to improving the utilization and relevance of evaluations. This workshop presents a set of diagnostic tools and concepts that seek to identify different demands for evaluation.
This workshop introduces a diagnostic toolset developed through eleven analytical processes applied to government evaluation systems in Africa, through application in international NGOs, and through evaluation systems development in the UK Government. These tools are designed to answer three questions:
(i) What is the value of evaluation in the context?
(ii) What demands emerge from different stakeholders?
(iii) What is the relevance of the current evaluation systems in responding to these demands?
Answers to these questions are highly important for the development of evaluation systems, as they inform refinement of the political and technical aspects of the evaluation system, namely policy, incentives, procurement, competencies and quality assurance mechanisms.
The workshop presents a range of tools and concepts that can identify new opportunities for evaluation. In considering the three questions above, participants will engage with:
The workshop will focus on why it is especially important to think about building for evidence right from the beginning, and why environment and climate change programmes pose special challenges that make this even more critical. The first challenge is recognizing how good data and designs can help inform good policy and strategy for climate change and environment programmes. The second is recognizing where theories of change can play a critical role, not just in project/programme design but also in evaluability. The third is understanding that in climate change and environment related programmes, thinking about the 'last mile' is especially critical: cook stoves may be excellently designed and the supply chain well set up, but what are the factors that lead households to adopt new and efficient cook stoves? What are we learning from other sectors, and what can we do to close this last-mile gap? The workshop will close with a discussion of the biases that can enter our assessments of how effective programmes have been unless these issues are well thought out.
This workshop will focus on a range of new technological tools and examine how they can be used to improve applied research and program evaluations. Specifically, we will explore the application of free or inexpensive software to engage clients and a range of stakeholders, collect research and evaluation data, formulate and prioritize research and evaluation questions, express and assess logic models and theories of change, track program implementation, provide continuous improvement feedback, determine program outcomes/impact, and present data and findings. Participants will be given information on how to access tools such as crowdsourcing platforms, geographical information systems (GIS), data visualization, and interactive conceptual framing software to improve the quality of their applied research and evaluation projects.
The 2030 Agenda for Sustainable Development puts forward "a plan of action for people, planet and prosperity" and "seeks to strengthen universal peace in larger freedom" through strategic partnerships. It includes a vision and principles, a results framework of global SDGs, a framework for means of implementation, and a follow-up and review mechanism.
This means evaluation should play a crucial role in supporting effective and efficient SDG implementation. Evaluation will offer evidence-based learning on how policies and programmes delivered results and what needs to be done differently. The central principle of the 2030 Agenda is that no one should be left behind, and the follow-up and review mechanisms also call for inclusiveness, participation and ownership. This is why equity-focused and gender-responsive evaluation is needed. This transformative kind of evaluation can help countries identify the structural causes of inequalities through deeper analysis of power relationships, social norms and cultural beliefs. Integrating equity-focused and gender-responsive evaluations will provide strong evidence to ensure that national voluntary reviews of the SDGs leave no one behind.
The purpose of this one-day workshop is 1) to provide guidance on how to integrate an equity-focused and gender-responsive approach into national evaluation systems generally, and 2) to promote the use of equity-focused and gender-responsive evaluations to inform the national reviews of the SDGs.
It is often claimed that randomized controlled trials (RCTs) for impact evaluation are the gold standard of evaluation, because they meet scientific standards for causal inference. However, in the world of evaluation practice as well as scholarly debate, interrelated terms like result, evidence and impact are controversial concepts whose meaning is often challenged and debated by groups in different epistemological and methodological traditions. Although few would deny the relevance of credibly identifying the development impact of policies and programs and the need to shift focus towards results, there is great variation and discussion around what constitutes rigorous evidence of impact and what types of methodological strategies can and should be pursued.
This workshop presents a review of the theoretical and methodological foundations of experimental (and quasi-experimental, by extension) impact evaluation in order to clarify the purposes, logic and limitations of these evaluation strategies, and offer a critical reflection on the scope and limits of impact evaluation.
For the purposes of this workshop, a pluralist perspective will be adopted to discuss three relevant dimensions of this debate: 1) methodological diversity and complementarity; 2) ethical and logistical considerations; and 3) the decision-making value of impact evaluation.
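At its core, the counterfactual logic of an RCT reduces to comparing mean outcomes between randomly assigned treatment and control groups. The sketch below shows a minimal difference-in-means estimator; the function name and outcome data are invented for illustration and say nothing about the design choices the workshop debates.

```python
from statistics import mean, stdev
from math import sqrt

def difference_in_means(treated, control):
    """Estimate the average treatment effect as the difference in group
    means, with a standard error assuming independent samples."""
    ate = mean(treated) - mean(control)
    se = sqrt(stdev(treated) ** 2 / len(treated)
              + stdev(control) ** 2 / len(control))
    return ate, se

# Hypothetical outcome data (e.g. a household welfare index) collected
# after random assignment to a program
treated = [12.1, 13.4, 11.8, 14.0, 12.9, 13.2]
control = [11.0, 11.5, 10.8, 12.2, 11.1, 11.9]

ate, se = difference_in_means(treated, control)
print(f"Estimated impact: {ate:.2f} (SE {se:.2f})")
```

Randomization is what licenses reading this simple difference as a causal effect; the workshop's point is precisely that this statistical credibility must still be weighed against ethical, logistical and decision-making considerations.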