3.2 Monitoring and evaluation framework

A clear framework, agreed among the key stakeholders at the end of the planning stage, is essential for carrying out monitoring and evaluation systematically. This framework serves as a plan for monitoring and evaluation, and should clarify:

  • What is to be monitored and evaluated
  • The activities needed to monitor and evaluate
  • Who is responsible for monitoring and evaluation activities
  • When monitoring and evaluation activities are planned (timing)
  • How monitoring and evaluation are carried out (methods)
  • What resources are required and where they are committed

In addition, the risks and assumptions involved in carrying out planned monitoring and evaluation activities should be anticipated, seriously considered and included in the M&E framework.

In general, the M&E framework has three main components:

  1. Narrative component—This describes how the partners will undertake monitoring and evaluation and the accountabilities assigned to different individuals and agencies. For example, at the UNDAF or national result level, it is necessary to engage with national monitoring committees or outcome-level groups (e.g. sector arrangements) as well as with UN inter-agency monitoring working groups. If these do not exist, such structures may need to be established for effective monitoring and evaluation. The narrative should also reflect:
    1. Plans that may be in place to strengthen national or sub-national monitoring and evaluation capacities
    2. Existing monitoring and evaluation capacities and an estimate of the human, financial and material resources required for implementation
  2. Results framework—This should be prepared in the planning stage as described in Chapter 2.
  3. Planning matrices for monitoring and evaluation—These are strategic and consolidate the information required for monitoring and evaluation for easy reference.

The planning matrix for monitoring in Table 14 is illustrative for UNDP and could be used at the country, regional and global programme levels to determine what needs to be monitored (a completed example of Table 14 is given in Table 15). The matrix should be adapted to local circumstances and conditions. In some cases, the columns could be modified to cover results elements such as outcomes, outputs, indicators, baselines, risks and assumptions separately.
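
Where a programme unit keeps the matrix electronically outside Atlas (for example, in a spreadsheet or a small script), each row of the matrix maps naturally onto a simple record. The following is a minimal sketch only, in Python; the class and field names mirror the column headings of Table 14 and are illustrative assumptions, not part of any UNDP system.

    # Illustrative sketch: one row of the Table 14 planning matrix as a record.
    # The class and field names are assumptions, not part of any UNDP tool.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class MonitoringMatrixRow:
        expected_result: str        # outcome or output, from the results framework
        indicators: List[str]       # with baselines and indicative targets
        me_events: List[str]        # M&E events with data collection methods
        schedule: str               # timing and frequency
        responsibilities: str       # who collects the data and verifies its quality
        means_of_verification: str  # data source and type
        resources: str              # resources required and committed
        risks: List[str] = field(default_factory=list)  # risks and assumptions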

The need for an M&E framework applies to both programmes and projects within a programme. Both should therefore develop M&E frameworks in their planning stages. The project-level M&E framework should cascade from the programme-level M&E framework and may contain more detailed information on the monitoring and evaluation tasks that apply specifically to each project. Conversely, the programme-level framework builds upon the project-level frameworks. Monitoring and evaluation activities should be seen as an integral component of programme and project management. They take place throughout the programme and project cycles and should be reviewed and updated regularly (at least annually, for example at the time of annual reviews).

Specific considerations for planning evaluations

It is mandatory for UNDP to present an evaluation plan to its Executive Board with each country, regional and global programme document considered for approval. The evaluation plan is a component of the M&E framework and should include those evaluations that can be foreseen at the end of the programme planning stage. The plan should be strategic, including a selection of evaluations that will generate the most critical and useful information for UNDP and its partners in decision-making.

The initial evaluation plan should, at a minimum, include all mandatory evaluations. For UNDP programme units, outcome evaluations and the project evaluations required by partnership protocols (such as those of the Global Environment Facility) are mandatory. The evaluation plan is not a static document: it should be reviewed as part of the M&E framework and refined as needed during programme implementation. For example, as new projects are designed and needs for evaluations are identified, these new evaluations should be added to the plan.
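
To illustrate how such a plan might be tracked as it is refined, the sketch below (in Python, as an assumed illustration rather than any UNDP tool) represents evaluation plan entries as simple records and selects the mandatory ones; the example entries are taken from Table 16.

    # Illustrative sketch: evaluation plan entries as records, with the
    # mandatory evaluations (which the initial plan must include) selected.
    # The class is an assumption; the sample entries come from Table 16.
    from dataclasses import dataclass

    @dataclass
    class PlannedEvaluation:
        title: str
        planned_completion: str  # e.g. "June 2010"
        mandatory: bool          # e.g. outcome evaluations, GEF project evaluations

    plan = [
        PlannedEvaluation("Midterm Outcome Evaluation of the Poverty Reduction Programme",
                          "June 2010", True),
        PlannedEvaluation("Microfinance Sector Pilot Project", "March 2010", False),
    ]

    # New evaluations identified during implementation are simply appended.
    mandatory_evaluations = [e for e in plan if e.mandatory]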

Table 14. Planning matrix for monitoring24

The matrix consists of the following columns; the note under each heading indicates how that column should be completed:

  • Expected results (outcomes and outputs)—Obtained from the development plan and results framework.
  • Indicators (with baselines and indicative targets) and other key areas to monitor—From the results framework. Indicators should also capture key priorities such as capacity development and gender. In addition, other key areas need to be monitored, such as the risks identified in the planning stage and other key management needs.
  • M&E event with data collection methods—How is the data to be obtained? For example, through a survey, a review or a stakeholder meeting.
  • Time or schedule and frequency—The level of detail that can be included will depend on practical needs. In UNDP, this information can also be captured in the Project Monitoring Schedule Plan from Atlas.
  • Responsibilities—Who is responsible for organizing the data collection and verifying data quality and source?
  • Means of verification: data source and type—The systematic source and location of the identified and necessary data, such as a national institute or DevInfo.
  • Resources—An estimate of the resources required and committed for carrying out the planned monitoring activities.
  • Risks—What are the risks and assumptions for carrying out the planned monitoring activities? How may they affect the planned monitoring events and the quality of the data?
Table 15. Illustrative example of planning matrix for monitoring: Enhanced capacity of electoral management authority

Outcome 1: Enhanced capacity of electoral management authority to administer free and fair elections

  Expected results (outputs):
    1.1. Advocacy campaign aimed at building consensus on the need for electoral law and system reform implemented
    1.2. Electoral management authority has adequate staff and systems to administer free and fair elections
    1.3. Training programme on use of new electoral management technology designed and implemented for staff of electoral management authority

  Indicators (with baselines and indicative targets):
    Public perception of the capacity of the electoral management authority to administer free and fair elections (disaggregated by gender, population group, etc.)
    Baseline: 40% of the public had confidence in the electoral management authority as of 2008 (50% men, 30% women, 20% indigenous populations)
    Target: 70% of the overall population have confidence in the electoral management authority by 2016 (75% men, 65% women, 60% indigenous populations)

  M&E events with data collection methods:
    1. Surveys
    2. Annual progress reviews
    3. Joint field visits to five regions
    4. …

  Time or schedule and frequency:
    1. All surveys will be completed six months prior to the completion of activities
    2. Progress reviews on the achievement of all connected outputs will be held jointly in the fourth quarter
    3. Two field visits will be held prior to the final survey and three after it
    4. …

  Responsibilities:
    1. The National Office of Statistics will commission the surveys; external partners, UNDP and the World Bank will provide technical resources as needed through their assistance for capacity development
    2. Progress reviews will be organized by the Elections Authority
    3. Field visits will be organized by the Elections Authority, which will ensure meetings with a representative cross-section of stakeholders; at least two external partners will participate in a given joint field visit
    4. …

  Means of verification (data source and type):
    1.1. Data and analysis of the surveys will be available in (a) a report for the public and (b) on the websites of the National Office of Statistics and the Elections Authority
    2.1. Annual progress reports
    2.2. Minutes of annual progress reviews
    3. Records of joint field visits will be available on the website of the Elections Authority
    4. …

  Resources:
    1. Resources estimated at USD 0.2 million for the surveys will be provided by the European Union
    2. Resources for M&E activities will be made available in the World Bank assistance project
    3. The cost of external partners' participation will be met by each respective partner; other logistical costs will be funded from the World Bank project
    4. …

  Risks:
    1. It is assumed that the capacity development activities within the National Office of Statistics required for carrying out the surveys will be completed one year in advance of the actual survey; if there are delays, a private company could be contracted to carry out the survey

Outcome 2: Increased participation by women and indigenous populations in national and local electoral processes in five regions by 2016

  Expected results (outputs):
    2.1. Revised draft legislation on the rights of women and indigenous populations to participate in elections prepared

  Indicators (with baselines and indicative targets):
    Percentage of eligible women registered to vote in the five regions
    Baseline: 30% of eligible women registered in the five regions as of 2008
    Target: 60% registration of eligible women in the five regions by 2016

Table 16. Evaluation plan

Each entry lists: partners (for joint evaluations); Strategic Plan results area; CPD/CPAP outcome; planned completion date; key evaluation stakeholders; source of funding for the evaluation; and whether the evaluation is mandatory (Y/N).

Outcome Evaluations

  • Midterm Outcome Evaluation of the Poverty Reduction Programme. Partners: N/A. Results area: Poverty Reduction & MDGs. Outcome: 1. Completion: June 2010. Stakeholders: Ministry of Planning, civil society groups, donors, UNDP, communities. Funding: M&E Project. Mandatory: Y.
  • Midterm Outcome Evaluation of the Governance Programme. Partners: DFID (donor). Results area: Democratic Governance. Outcome: 2. Completion: June 2010. Stakeholders: Election Authority, Parliament, Ministry of Law, DFID, UNDP. Funding: DFID, related UNDP projects. Mandatory: Y.
  • Outcome Evaluation: Energy and Environment Portfolio. Partners: Ministry of Environment. Results area: Environment and Sustainable Development. Outcome: 3. Completion: December 2011. Stakeholders: Ministry of Environment, NGOs, donors, UNDP, communities. Funding: Biodiversity Project, Sust. Energy Project. Mandatory: Y.

Project Evaluations

  • Microfinance Sector Pilot Project. Partners: UNCDF. Results area: Poverty Reduction: Promoting Inclusive Growth. Outcome: 1.3. Completion: March 2010. Stakeholders: Microfinance Apex Org., Ministry of Finance, UNCDF, UNICEF. Funding: Project budget. Mandatory: N.
  • Biodiversity Project (Global Environment Fund). Partners: N/A. Results area: Environment: Mobilizing Environ. Financing. Outcome: 3.2. Completion: May 2011. Stakeholders: Ministry of Environment, NGOs, donors, UNDP, communities. Funding: Project budget. Mandatory: Y.
  • Strengthening of the Electoral Process Project. Partners: N/A. Results area: Democratic Governance: incl. participation. Outcome: 2.4. Completion: September 2009. Stakeholders: Election Authority, donors, UNDP, public. Funding: Project budget. Mandatory: Y.
  • Mainstreaming Disaster Risk Reduction Project. Partners: N/A. Results area: Crisis Prevention and Recovery. Outcome: 4.2. Completion: June 2011. Stakeholders: Ministry of Disaster Management, implementing NGOs, European Community (donor), UNDP. Funding: Project budget. Mandatory: Y.

Other Evaluations

  • UNDAF Midterm Evaluation. Partners: all resident UN organizations. Results area: N/A. Outcome: all. Completion: December 2009. Stakeholders: Government, UN organizations. Funding: M&E Project. Mandatory: N.

Note: DFID stands for Department for International Development (UK); UNCDF, United Nations Capital Development Fund; UNICEF, United Nations Children’s Fund.

After a country, regional or global programme is approved, the respective programme unit enters the evaluation plan in the Evaluation Resource Centre (ERC) for tracking.25 As the units exercising oversight responsibility, the regional bureaux use the evaluation plans submitted by the programme units as the basis for assessing compliance. The Evaluation Office reports on evaluation compliance directly to the UNDP Executive Board in its Annual Report on Evaluation.

UNDP programme units are required to select and commission evaluations that provide substantive information for decision-making. In deciding what to evaluate, programme units should first determine the purpose of the evaluation and the other factors that may influence the relevance and use of the proposed evaluations. In general, for accountability purposes, at least 20 to 30 percent of the entire programme portfolio should be subject to evaluation.
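
As a simple arithmetic illustration of that guideline (all monetary figures below are assumptions, not drawn from any actual programme):

    # Hypothetical check of evaluation coverage against the 20-30 percent
    # guideline; the portfolio and evaluation values are assumed.
    portfolio_usd = 50_000_000  # total programme portfolio (assumption)
    evaluated_usd = 12_500_000  # portion covered by planned evaluations (assumption)

    coverage = evaluated_usd / portfolio_usd       # 0.25
    print(f"Evaluation coverage: {coverage:.0%}")  # Evaluation coverage: 25%
    print("Meets guideline" if coverage >= 0.20 else "Below guideline")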

Evaluations generally require significant resources and time. Therefore, every evaluation must be justified and used in an optimal way. Programme units together with key stakeholders should consider the following points in developing an evaluation plan:

  • Uses, purpose and timing of evaluation—Evaluations should be proposed only when the commissioning programme units and stakeholders are clear at the outset about why the evaluation is being conducted (the purpose), what the information needs are (the demand for information), who will use the information, and how it will be used. Such information can be derived from a shared vision of success, as expressed in the results or outcome model at the planning stage. The intended use determines the timing of an evaluation, its methodological framework, and the level and nature of stakeholder participation. Timing should be directly linked to purpose and use: to be relevant and used effectively, an evaluation must be made available in time to inform the decisions it is meant to support.26
  • Resources invested—An area (a thematic or programmatic area, outcome or project) in which UNDP has invested significant resources may be subject to evaluation, as such areas carry greater accountability requirements.
  • The likelihood of future initiatives in the same area—Evaluations are an important means of generating recommendations to guide future work. An evaluation enables the programme unit to take stock of whether the outputs have contributed to the outcome and whether UNDP has crafted an effective partnership strategy. When selecting an initiative to be evaluated, look for one in an area that UNDP will continue to support.
  • Anticipated problems—Evaluations can help prevent problems and provide an independent perspective on existing ones. When selecting an outcome for evaluation, look for outcomes that are experiencing problems or where complications are likely to arise, for example because the outcome falls within a sensitive area involving a number of partners.
  • Need for lessons learned—What kinds of lessons are needed to help guide activities in this country or in other countries in the region?
  • Alignment and harmonization—Planned evaluations should be aligned with national, regional and global development priorities and UNDP corporate priorities (for example, the UNDP Strategic Plan), and should be harmonized with the evaluations of UN system organizations and other international partners. This ensures that proposed evaluations generate important information to help UNDP and its partners better manage for results in a changing context. Opportunities for joint evaluations with governments and partners should be actively pursued, and evaluations commissioned by UNDP should be useful for national partners. In determining the timing of an evaluation, UNDP should consider the decision-making points in the partner government, such as budget decisions, the setting of development frameworks or strategies, and existing review processes for development programmes and projects. For instance, if the government is undertaking an evaluation of a national development strategy or framework to which UNDP projects contribute, UNDP-managed evaluations should complement that exercise and minimize duplication of effort.

Once the outcome evaluations are selected, the programme unit identifies the projects that are designed to contribute to the outcome and indicates them as relevant projects for the evaluation plan. This gives notice to the concerned projects and allows them to take account of the outcome evaluation in their monitoring and work planning. It also helps the UNDP programme officers and relevant national partners in outcome monitoring prepare for the outcome evaluation.

The same criteria used to select outcomes should be applied to selecting project evaluations. Some partnership protocols require their related projects to be evaluated. For accountability and learning purposes, it is strongly recommended that evaluations be completed for pilot projects before replication or scaling up, for projects that are going into a next phase, and for projects that have been ongoing for more than five years. As part of the regular updating of the evaluation plan, any newly identified project evaluations should be added to the plan.

In crisis settings, extra time should be allocated for evaluations, as flexibility is needed to respond to changing situations. This means being flexible when scheduling field visits and interviews, and anticipating delays in data collection and last-minute changes in data collection methods if relationships between different groups change. Further, more preparation is required when working with vulnerable groups and those affected by conflict, as greater care and attention to ethical considerations are needed.