1.1 Introduction

The opening paragraph of the UNDP 2008-2011 Strategic Plan states that all UNDP work—policy advice, technical support, advocacy, and contributions to strengthening coherence in global development—is aimed at one end result: “real improvements in people’s lives and in the choices and opportunities open to them.”6 Improvements in people’s lives are a common goal shared by many governments and development partners across the countries in which UNDP works. This is also the reason many agencies now use the term ‘managing for development results’ or MfDR, as opposed to ‘results-based management’ or RBM, in their policy documents, guidelines and statements. Traditionally, RBM approaches have focused more on internal results and performance of agencies than on changes in the development conditions of people. MfDR applies the same basic concepts of RBM—good planning, monitoring, evaluation, learning and feeding back into planning—but seeks to keep the focus on development assistance demonstrating real and meaningful results.


MfDR is also an effort to respond to growing demands for public accountability to citizens in both the developed and developing world on how assistance is used, what results are achieved, and how appropriate these results are in bringing about desired changes in human development. This approach encourages development agencies to focus on building partnerships and collaboration and to ensure greater coherence. Similarly, it promotes a stronger focus on sustainability through measures that enhance national ownership and capacity development.

MfDR is RBM in action, but it is oriented more towards the external environment and results that are important to programme countries and less towards an agency’s internal performance. 

Achieving development results, as most realize, is often much more difficult than imagined. To achieve development results and changes in the quality of people’s lives, governments, UNDP and other partners will often develop a number of different plans, strategies, programmes and projects. These typically include:

  • A National Development Plan or Poverty Reduction Strategy
  • Sector-based development plans
  • A United Nations Development Assistance Framework (UNDAF)
  • A corporate strategic plan (such as the UNDP 2008-2011 Strategic Plan)
  • Global, regional and country programme documents (CPDs) and country programme action plans (CPAPs)
  • Monitoring and evaluation (M&E) frameworks and evaluation plans
  • Development and management work plans
  • Office and unit specific plans
  • Project documents and annual work plans

However, good intentions, large programmes and projects, and lots of financial resources are not enough to ensure that development results will be achieved. The quality of those plans, programmes and projects, and how well resources are used, are also critical factors for success.

To improve the chances of success, attention needs to be placed on some of the common areas of weakness in programmes and projects. Four main areas of focus are consistently identified:

  1. Planning and programme and project definition—Projects and programmes have a greater chance of success when their objectives and scope are properly defined and clarified. This reduces the likelihood of major challenges during implementation.
  2. Stakeholder involvement—High levels of engagement of users, clients and stakeholders in programmes and projects are critical to success.
  3. Communication—Good communication results in strong stakeholder buy-in and mobilization. Additionally, communication improves clarity on expectations, roles and responsibilities, as well as information on progress and performance. This clarity helps to ensure optimum use of resources.
  4. Monitoring and evaluation—Programmes and projects with strong monitoring and evaluation components tend to stay on track. Additionally, problems are often detected earlier, which reduces the likelihood of having major cost overruns or time delays later.

Good planning, combined with effective monitoring and evaluation, can play a major role in enhancing the effectiveness of development programmes and projects. Good planning helps us focus on the results that matter, while monitoring and evaluation help us learn from past successes and challenges and inform decision-making so that current and future initiatives are better able to improve people’s lives and expand their choices.

Box 1. Understanding inter-linkages and dependencies between planning, monitoring and evaluation

  1. Without proper planning and clear articulation of intended results, it is not clear what should be monitored and how; hence monitoring cannot be done well.
  2. Without effective planning (clear results frameworks), the basis for evaluation is weak; hence evaluation cannot be done well.
  3. Without careful monitoring, the necessary data is not collected; hence evaluation cannot be done well.
  4. Monitoring is necessary, but not sufficient, for evaluation.
  5. Monitoring facilitates evaluation, but evaluation uses additional new data collection and different frameworks for analysis.
  6. Monitoring and evaluation of a programme will often lead to changes in programme plans. This may mean further changing or modifying data collection for monitoring purposes.

Source: Adapted from UNEG, ‘UNEG Training—What a UN Evaluator Needs to Know?’, Module 1, 2008.

Planning can be defined as the process of setting goals, developing strategies, outlining the implementation arrangements and allocating resources to achieve those goals. It is important to note that planning involves looking at a number of different processes:

  • Identifying the vision, goals or objectives to be achieved
  • Formulating the strategies needed to achieve the vision and goals
  • Determining and allocating the resources (financial and other) required to achieve the vision and goals
  • Outlining implementation arrangements, which include the arrangements for monitoring and evaluating progress towards achieving the vision and goals

There is an expression that “failing to plan is planning to fail”. While it is not always true that those who fail to plan will eventually fail in their endeavours, there is strong evidence to suggest that having a plan leads to greater effectiveness and efficiency. Not having a plan—whether for an office, programme or project—is in some ways similar to attempting to build a house without a blueprint: it is very difficult to know what the house will look like, how much it will cost, how long it will take to build, what resources will be required, and whether the finished product will satisfy the owner’s needs. In short, planning helps us define what an organization, programme or project aims to achieve and how it will go about it.

Monitoring can be defined as the ongoing process by which stakeholders obtain regular feedback on the progress being made towards achieving their goals and objectives. Contrary to many definitions that treat monitoring as merely reviewing progress made in implementing actions or activities, the definition used in this Handbook focuses on reviewing progress against achieving goals. In other words, monitoring in this Handbook is not only concerned with asking “Are we taking the actions we said we would take?” but also “Are we making progress on achieving the results that we said we wanted to achieve?” The difference between these two approaches is extremely important. In the more limited approach, monitoring may focus on tracking projects and the use of the agency’s resources. In the broader approach, monitoring also involves tracking strategies and actions being taken by partners and non-partners, and figuring out what new strategies and actions need to be taken to ensure progress towards the most important results.

Evaluation is a rigorous and independent assessment of either completed or ongoing activities to determine the extent to which they are achieving stated objectives and contributing to decision-making. Evaluations, like monitoring, can apply to many things, including an activity, project, programme, strategy, policy, topic, theme, sector or organization. The key distinction between the two is that evaluations are done independently to provide managers and staff with an objective assessment of whether or not they are on track. They are also more rigorous in their procedures, design and methodology, and generally involve more extensive analysis. However, the aims of both monitoring and evaluation are very similar: to provide information that can help inform decisions, improve performance and achieve planned results.

Box 2. Distinguishing monitoring and evaluation from other oversight activities
Like monitoring and evaluation, inspection, audit, review and research functions are oversight activities, but they each have a distinct focus and role and should not be confused with monitoring and evaluation.
Inspection is a general examination of an organizational unit, issue or practice to ascertain the extent to which it adheres to normative standards, good practices or other criteria, and to make recommendations for improvement or corrective action. It is often performed when there is a perceived risk of non-compliance.
Audit is an assessment of the adequacy of management controls to ensure the economical and efficient use of resources; the safeguarding of assets; the reliability of financial and other information; the compliance with regulations, rules and established policies; the effectiveness of risk management; and the adequacy of organizational structures, systems and processes. Evaluation is more closely linked to MfDR and learning, while audit focuses on compliance.
Reviews, such as rapid assessments and peer reviews, are distinct from evaluation and more closely associated with monitoring. They are periodic or ad hoc, often light, assessments of the performance of an initiative and do not apply the due process or methodological rigour of evaluation. Reviews tend to emphasize operational issues. Unlike evaluations conducted by independent evaluators, reviews are often conducted by those internal to the subject or the commissioning organization.
Research is a systematic examination completed to develop or contribute to knowledge of a particular topic. Research can often feed information into evaluations and other assessments but does not normally inform decision-making on its own.

Source: UNEG, ‘Norms for Evaluation in the UN System’, 2005. Available at: http://www.unevaluation.org/unegnorms.


In assessing development effectiveness, monitoring and evaluation efforts aim to assess the following:

  • Relevance of UNDP assistance and initiatives (strategies, policies, programmes and projects designed to combat poverty and support desirable changes) to national development goals within a given national, regional or global context
  • Effectiveness of development assistance initiatives, including partnership strategies
  • Contribution and worth of this assistance to national development outcomes and priorities, including the material conditions of programme countries, and how this assistance visibly improves the prospects of people and their communities
  • Key drivers or factors enabling successful, sustained and scaled-up development initiatives; alternative options; and comparative advantages of UNDP
  • Efficiency of development assistance, partnerships and coordination to limit transaction costs
  • Risk factors and risk management strategies to ensure success and effective partnership
  • Level of national ownership and measures to enhance national capacity for sustainability of results

While monitoring provides real-time information required by management, evaluation provides more in-depth assessment. The monitoring process can generate questions to be answered by evaluation. Also, evaluation draws heavily on data generated through monitoring during the programme and project cycle, including, for example, baseline data, information on the programme or project implementation process, and measurements of results.