The opening paragraph of the UNDP 2008-2011 Strategic Plan states that all UNDP work—policy advice, technical support, advocacy, and contributions to strengthening coherence in global development—is aimed at one end result: “real improvements in people’s lives and in the choices and opportunities open to them.”6

Improvements in people’s lives are a common goal shared by many governments and development partners across the countries in which UNDP works. This is also the reason many agencies now use the term ‘managing for development results’, or MfDR, rather than ‘results-based management’, or RBM, in their policy documents, guidelines and statements. Traditionally, RBM approaches have focused more on agencies’ internal results and performance than on changes in the development conditions of people. MfDR applies the same basic concepts of RBM—good planning, monitoring, evaluation, learning and feeding lessons back into planning—but seeks to keep the focus on ensuring that development assistance demonstrates real and meaningful results.
MfDR is also an effort to respond to growing demands for public accountability to citizens in both the developed and developing world regarding how assistance is used, what results are achieved, and how appropriate those results are in bringing about desired changes in human development. This approach encourages development agencies to focus on building partnerships and collaboration and to ensure greater coherence. Similarly, it promotes a stronger focus on sustainability through measures that enhance national ownership and capacity development.
MfDR is RBM in action, but it is oriented more towards the external environment and results that are important to programme countries and less towards an agency’s internal performance.
Achieving development results, as most realize, is often much more difficult than imagined. To achieve development results and changes in the quality of people’s lives, governments, UNDP and other partners will often develop a number of different plans, strategies, programmes and projects. These typically include:
However, good intentions, large programmes and projects, and substantial financial resources are not enough to ensure that development results will be achieved. The quality of those plans, programmes and projects, and how well resources are used, are also critical factors for success.
To improve the chances of success, attention needs to be paid to some of the common areas of weakness in programmes and projects. Four main areas of focus are consistently identified:
Good planning, combined with effective monitoring and evaluation, can play a major role in enhancing the effectiveness of development programmes and projects. Good planning helps us focus on the results that matter, while monitoring and evaluation help us learn from past successes and challenges and inform decision-making so that current and future initiatives are better able to improve people’s lives and expand their choices.
Box 1. Understanding inter-linkages and dependencies between planning, monitoring and evaluation
Source: Adapted from UNEG, ‘UNEG Training—What a UN Evaluator Needs to Know?’, Module 1, 2008.
Planning can be defined as the process of setting goals, developing strategies, outlining the implementation arrangements and allocating resources to achieve those goals. It is important to note that planning involves looking at a number of different processes:
There is an expression that “failing to plan is planning to fail”. While it is not always true that those who fail to plan will eventually fail in their endeavours, there is strong evidence to suggest that having a plan leads to greater effectiveness and efficiency. Not having a plan—whether for an office, programme or project—is in some ways similar to attempting to build a house without a blueprint, that is, it is very difficult to know what the house will look like, how much it will cost, how long it will take to build, what resources will be required, and whether the finished product will satisfy the owner’s needs. In short, planning helps us define what an organization, programme or project aims to achieve and how it will go about it.
Monitoring can be defined as the ongoing process by which stakeholders obtain regular feedback on the progress being made towards achieving their goals and objectives. Contrary to many definitions that treat monitoring as merely reviewing progress made in implementing actions or activities, the definition used in this Handbook focuses on reviewing progress against achieving goals. In other words, monitoring in this Handbook is not only concerned with asking “Are we taking the actions we said we would take?” but also “Are we making progress on achieving the results that we said we wanted to achieve?” The difference between these two approaches is extremely important. In the more limited approach, monitoring may focus on tracking projects and the use of the agency’s resources. In the broader approach, monitoring also involves tracking strategies and actions being taken by partners and non-partners, and figuring out what new strategies and actions need to be taken to ensure progress towards the most important results.
Evaluation is a rigorous and independent assessment of either completed or ongoing activities to determine the extent to which they are achieving stated objectives and contributing to decision-making. Evaluations, like monitoring, can apply to many things, including an activity, project, programme, strategy, policy, topic, theme, sector or organization. The key distinction between the two is that evaluations are done independently to provide managers and staff with an objective assessment of whether or not they are on track. They are also more rigorous in their procedures, design and methodology, and generally involve more extensive analysis. However, the aims of both monitoring and evaluation are very similar: to provide information that can help inform decisions, improve performance and achieve planned results.
Box 2. Distinguishing monitoring and evaluation from other oversight activities
Like monitoring and evaluation, inspection, audit, review and research functions are oversight activities, but they each have a distinct focus and role and should not be confused with monitoring and evaluation.
Inspection is a general examination of an organizational unit, issue or practice to ascertain the extent to which it adheres to normative standards, good practices or other criteria, and to make recommendations for improvement or corrective action. It is often performed when there is a perceived risk of non-compliance.
Audit is an assessment of the adequacy of management controls to ensure the economical and efficient use of resources; the safeguarding of assets; the reliability of financial and other information; the compliance with regulations, rules and established policies; the effectiveness of risk management; and the adequacy of organizational structures, systems and processes. Evaluation is more closely linked to MfDR and learning, while audit focuses on compliance.
Reviews, such as rapid assessments and peer reviews, are distinct from evaluation and more closely associated with monitoring. They are periodic or ad hoc, often light, assessments of the performance of an initiative and do not apply the due process of evaluation or its methodological rigour. Reviews tend to emphasize operational issues. Unlike evaluations conducted by independent evaluators, reviews are often conducted by those internal to the subject or the commissioning organization.
Research is a systematic examination completed to develop or contribute to knowledge of a particular topic. Research can often feed information into evaluations and other assessments but does not normally inform decision-making on its own.
Source: UNEG, ‘Norms for Evaluation in the UN System’, 2005. Available at: http://www.unevaluation.org/unegnorms.
In assessing development effectiveness, monitoring and evaluation efforts aim to assess the following:
While monitoring provides real-time information required by management, evaluation provides more in-depth assessment. The monitoring process can generate questions to be answered by evaluation. Also, evaluation draws heavily on data generated through monitoring during the programme and project cycle, including, for example, baseline data, information on the programme or project implementation process, and measurements of results.
6. UNDP, ‘UNDP Strategic Plan, 2008-2011: Accelerating Global Progress on Human Development’, Executive Board document DP/2007/43 (pursuant to DP/2007/32), reissued January 2008.