PART ONE
CHAPTER 4. OBJECTIVES

The focus of monitoring and evaluation on relevance, performance and success is strategically linked to the objective of ensuring that UNDP-assisted programmes and projects produce sustainable results that benefit the target groups and the larger communities of which they are a part. Both functions contribute to the achievement of this objective by supporting decision-making, accountability, learning and capacity development.

Decision-making
Decision-making may be linked to interventions at the macro, meso and micro levels. Macro-level decisions relate to policies that cut across sectors and affect the overall development process. Decisions made at the meso and micro levels pertain to programmes and projects, respectively.

UNDP monitoring and evaluation actions support decision-making at all three levels: policy and strategic evaluations at the macro level, and monitoring and evaluation of programmes and projects, individually and in clusters, at the meso and micro levels. However, many of these actions are currently concentrated at the meso and micro levels.

The data and information collected during monitoring and evaluations constitute a critical foundation for action by programme managers and stakeholders, who need to be able to identify evolving problems and decide on crucial strategies, corrective measures, and revisions to plans and resource allocations pertaining to the activities in question.

Even after the completion of a programme or project, monitoring and evaluation can contribute significantly to decision-making. For instance, terminal reports, considered to be part of the monitoring function, can contain recommendations for follow-up activities. Post-programme or post-project monitoring can lead to the recommendation of measures to improve the sustainability of results produced by the programme or project.

Accountability
Monitoring and evaluation provide critical assessments that demonstrate whether or not programmes or projects satisfy target group needs and priorities. They help to establish substantive accountability by generating answers to questions such as:

As for the question "Who is accountable?", monitoring and evaluation must be used to support accountability at different management levels within UNDP, i.e., the accountability of resident representatives, Senior Management at headquarters, the Administrator and the Executive Board.

Learning
The learning derived from monitoring and evaluation can improve the overall quality of ongoing and future programmes and projects. This is particularly significant when one considers UNDP support for innovative, cutting-edge programmes and projects with all the attendant risks and uncertainties.

The learning that occurs through monitoring applies particularly to ongoing programmes or projects. Mistakes are made and insights are gained in the course of programme or project implementation. Effective monitoring can detect early signs of both potential problems and potential successes. Programme or project managers must act on these findings, applying the lessons learned to modify the programme or project. This learning by doing serves the immediate needs of the programme or project, but it can also provide feedback for future programming.

By contrast, the learning that results from terminal and ex-post evaluations is particularly relevant to future programmes and projects. In such cases, it can be more definitive, especially if evaluations are conducted for clusters of projects or programmes from which lessons can be extracted for broader application. The lessons, which may apply to a given sector, theme or geographical area, such as a country or region, can, of course, be adapted or replicated depending on the context.

Learning from monitoring and evaluation must be incorporated into the overall programme or project management cycle through an appropriate feedback system (see chapters 5 and 15) and must support decision-making at various levels, as described above.

Capacity Development
Monitoring and evaluation must contribute to the UNDP mission to achieve sustainable human development (SHD) by assisting programme countries to develop their capacity to manage development. Improving the decision-making process, ensuring accountability to target groups or stakeholders in general, and maximizing the benefits offered by learning from experience can all contribute to strengthening capacities at the national, local and grass-roots levels, including, in particular, the capacities for monitoring and evaluation.

National execution, the current modality for UNDP-assisted programmes and projects, implies a corresponding shift to a bipartite mechanism for monitoring and evaluation, with the programme country Government and UNDP as major partners. Because Governments bear primary responsibility for monitoring and evaluating their own programmes and projects, UNDP monitoring and evaluation activities can serve as entry points for assisting them to strengthen their monitoring and evaluation capacities.

CHAPTER 5. MONITORING AND EVALUATION AND THE PROGRAMME/PROJECT CYCLE

Monitoring and evaluation are integral parts of the programme/project management cycle. On the one hand, monitoring and evaluation are effective tools for enriching the quality of interventions through their role in decision-making and learning. On the other hand, the quality of project design (e.g., clarity of objectives, establishment of indicators) can affect the quality of monitoring and evaluation. Furthermore, the experience gained from implementation can contribute to the continuing refinement of monitoring and evaluation methodologies and instruments.

To maximize the benefits of monitoring and evaluation, the recommendations and lessons learned from those functions must be incorporated into the various phases of the programme or project cycle.

PRE-FORMULATION: SEARCHING FOR LESSONS LEARNED
At the identification and conceptualization stages of a programme or project, the people responsible for its design must make a thorough search of lessons learned from previous or ongoing UNDP-assisted programmes and projects and from the field of development cooperation at large.

A wide variety of sources of information are available in UNDP, other donor institutions, government offices and elsewhere. These sources take the form of printed material and electronic media, such as the Internet and computerized databases (see chapter 15). Databases such as the UNDP CEDAB and the OECD/DAC database facilitate the search for relevant lessons extracted from evaluation reports since the lessons can be sorted by multiple criteria (e.g., sector, country, region).
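The kind of multi-criteria search described above can be illustrated with a short sketch. The record fields, sample lessons and the `find_lessons` helper below are purely hypothetical and do not reflect the actual structure of CEDAB or the OECD/DAC database; they simply show how lessons-learned records might be filtered by sector, country or region.

```python
# Hypothetical sketch of filtering lessons-learned records by multiple
# criteria, in the spirit of searches run against evaluation databases.
# Field names and sample data are illustrative only.

lessons = [
    {"id": 1, "sector": "health", "country": "Ghana", "region": "Africa",
     "lesson": "Involve district staff early in planning."},
    {"id": 2, "sector": "education", "country": "Nepal", "region": "Asia",
     "lesson": "Pilot indicators before scaling up."},
    {"id": 3, "sector": "health", "country": "Kenya", "region": "Africa",
     "lesson": "Budget for post-project monitoring."},
]

def find_lessons(records, **criteria):
    """Return the records matching every supplied field=value criterion."""
    return [r for r in records
            if all(r.get(field) == value for field, value in criteria.items())]

# Retrieve all health-sector lessons from the Africa region.
african_health = find_lessons(lessons, sector="health", region="Africa")
print([r["id"] for r in african_health])  # -> [1, 3]
```

In practice such filtering would be done through the database's own query interface; the point is that tagging each lesson with consistent criteria (sector, country, region) is what makes the later search for relevant experience feasible.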

FORMULATION: INCORPORATING LESSONS LEARNED AND PREPARING A MONITORING AND EVALUATION PLAN
Relevant lessons learned from experience with other programmes and projects must be incorporated in the design of a new programme or project.

A monitoring and evaluation plan must also be prepared as an integral part of the programme or project design. Those responsible for programme or project design must:

NOTE:
A monitoring and evaluation plan is not intended to be rigid or fixed from the outset; rather, it should be subject to continuous review and adjustment as required owing to changes in the programme or project itself.

BOX 3. MONITORING AND EVALUATION PLANNING FRAMEWORK

The appraisal and approval of programmes and projects must ensure that appropriate lessons and a monitoring and evaluation plan are incorporated in the programme or project design.

IMPLEMENTATION: MONITORING AND EVALUATION AS SUPPORT TO DECISION-MAKING AND LEARNING
As noted earlier, since monitoring is an ongoing process, it can reveal early signs of problems in implementation. This information can serve as a basis for corrective actions to ensure the fulfilment of programme or project objectives. Areas of success can also be revealed through monitoring, enabling their reinforcement.

The contribution made by both monitoring and evaluation to lessons learned was also noted earlier. Thus, programme managers and other stakeholders must make certain that a learning culture is maintained throughout the implementation of a programme or project. Such a culture should motivate those involved in programme or project management to learn from their experience and apply those lessons to the improvement of the programme or project. Learning can be enhanced through participatory mechanisms that enable the various stakeholders to share their views and provide feedback when and where it is needed (see chapters 9 and 15).

PROGRAMME OR PROJECT COMPLETION: DISSEMINATION OF LESSONS LEARNED
Upon termination of a programme or project, stakeholders as a group must take stock of the experience that has been gained: successes and failures, best and worst practices, future challenges and constraints. Special emphasis should be placed on identifying the lessons that have the potential for wider application, determining which particular user groups could benefit most from such lessons, and ascertaining the best way to disseminate the lessons to the target groups (see chapter 15).

CHAPTER 6. CONSTRAINTS AND CHALLENGES

Certain conceptual and methodological constraints and challenges are associated with the monitoring and evaluation functions. Effective monitoring and evaluation can be achieved only through a careful, pragmatic approach to addressing these limitations.

DEPENDENCE ON CLARITY OF OBJECTIVES AND AVAILABILITY OF INDICATORS
Monitoring and evaluation are of little value if a programme or project does not have clearly defined objectives and appropriate indicators of relevance, performance and success. Any assessment of a programme or project, whether through monitoring or evaluation, must be made vis-à-vis the objectives, i.e., what the interventions aim to achieve. Indicators are the critical link between the objectives (which are stated as results to be achieved) and the types of data that need to be collected and analysed through monitoring and evaluation. Hence, lack of clarity in stating the objectives and the absence of clear key indicators will limit the ability of monitoring and evaluation to provide critical assessments for decision-making, accountability and learning purposes.

TIME CONSTRAINTS AND THE QUALITY OF MONITORING AND EVALUATION
Accurate, adequate information must be generated within a limited time frame. This is usually less difficult for monitoring, since programme or project managers should be able to obtain or verify information as necessary. The challenge is greater, however, for UNDP evaluation missions conducted by external consultants. The average duration of such missions is three weeks, but this average should not be treated as the norm. UNDP country offices, in consultation with the Government and UNDP units at headquarters (i.e., regional bureaux and OESP), should have the flexibility to establish realistic timetables for these missions, depending on the nature of the evaluations. Budgetary provisions must be made accordingly.

OBJECTIVITY AND INDEPENDENCE OF EVALUATORS AND THEIR FINDINGS
No evaluator can be entirely objective in his or her assessment. Even external evaluators (i.e., those hired from outside the Government or UNDP) may bring their own biases or preconceptions. The composition of the evaluation team is therefore important in ensuring a balance of views. It is also crucial that evaluators distinguish between facts and opinions. Where there are apparent inconsistencies, external evaluators must seek clarification from the Government or other concerned parties to ensure the accuracy of the information. This applies particularly to understanding the cultural context of the issues at hand. Where opinions diverge, the external evaluators must be willing to consider the views of others in arriving at their own assessments.

LEARNING OR CONTROL?
Traditionally, monitoring and evaluation have been perceived as forms of control mainly because their objectives were not clearly articulated and understood. Thus, the learning aspect of monitoring and evaluation needs to be stressed along with the role that these functions play in decision-making and accountability. In the context of UNDP, the contribution of learning to the building of government capacity to manage development should be emphasized.

FEEDBACK FROM MONITORING AND EVALUATION
Monitoring and evaluation can provide a wealth of knowledge derived from experience with development cooperation in general and specific programmes and projects in particular. It is critical that relevant lessons be made available to the appropriate parties at the proper time. Without good feedback, monitoring and evaluation cannot serve their purposes. In particular, emphasis must be given to drawing lessons that have the potential for broader application, i.e., those that are useful not only to a particular programme or project but also to related interventions in a sector, thematic area or geographical location (see chapter 9).

RESPONSIBILITIES AND CAPACITIES
Governments usually must respond to a variety of monitoring and evaluation requirements from many donors, including UNDP. This situation is being partially addressed through the harmonization efforts of United Nations agencies, specifically those that are members of the Joint Consultative Group on Policy (JCGP).

Within the context of national execution in particular, there should be only one monitoring and evaluation system, namely, the national monitoring and evaluation system of the Government. The UNDP monitoring and evaluation system and those of other donors should be built upon that national system to eliminate duplication and reduce the burden on all parties concerned. Not all governments, however, may have the full capacity to carry out the responsibilities for monitoring and evaluation adequately. In such cases, UNDP should assist the governments to strengthen their monitoring and evaluation capacities.