3.5 Capacity for monitoring and evaluation

In UNDP-assisted programmes, national programme partners are jointly responsible with UNDP for carrying out certain planned monitoring and evaluation activities. In line with the principles of MfDR, national ownership and the use of country systems, monitoring and evaluation efforts in UNDP should capitalize on, be aligned with and build upon existing national monitoring and evaluation systems and capacities whenever feasible (see Box 21). When appropriate, UNDP monitoring and evaluation efforts should indicate where the organization’s programmatic support, including support to national systems, requires further strengthening. The analytical process and data used for planning provide initial opportunities and insights for discerning future monitoring and evaluation requirements against existing data sources and their quality. This analysis also identifies areas where the capacity of national partners to monitor and evaluate can be further developed, at their request and when relevant.

Box 21. Examples of alignment with national systems

  • National budgeting process
  • National medium-term or long-term development strategic plan or framework
  • Sector strategy, policy, programme or projects and national coordination bodies tasked to coordinate such activities
  • National M&E systems for national development strategy, plan or framework and a sector strategy, policy, programme or projects
  • Existing review mechanisms (poverty reduction strategy reviews, New Partnership for Africa’s Development "NEPAD", peer-review, etc.)

At the higher levels of results (national goals, sector goals and outcomes), key stakeholders should typically form sector-wide or inter-agency groups around each major outcome or sector. Wherever national structures such as sector-wide coordination mechanisms already exist, the United Nations and UNDP should engage and participate in them rather than setting up parallel systems. Sectoral or outcome-level coordinating mechanisms should not be a United Nations or UNDP management arrangement but an existing national structure that is already charged with coordinating the sector from a development perspective within the national context. These groups should have adequate capacity to be responsible for the following:

  • Agree on an M&E framework for the outcomes and oversee its implementation, ensuring continuous outcome assessment and enhancing progress towards results.
  • Promote partnerships and coordination around a single shared outcome. All projects generating outputs relevant to the outcome should be included in the outcome group to ensure inclusive discussions. This gives partners a common vision of the outcome to which the different projects or outputs are contributing.
  • Ensure synergy and coordination by reinforcing a common strategy among partners working towards common results.
  • Monitor and evaluate, where appropriate, the achievement of outcomes and their contribution to national development goals. Outcome-level mechanisms are expected to determine who is responsible for monitoring and data collection, how often it will be collected, who will receive it and in what form. The results frameworks and the M&E framework serve as the basis for joint monitoring and evaluation by these groups.
  • Carry out, participate in, and assure the overall quality of project, outcome, thematic and other types of reviews and evaluations and ensure that the processes and products meet international standards.
  • Ensure effective use and dissemination of monitoring and evaluation information in future planning and decision-making for improvements.

Capacities for monitoring and evaluation, as in most technical areas, exist at three levels: the enabling environment, the organizational level and the individual level. Capacities at these levels are interdependent and influence each other through complex co-dependent relationships. Change in capacity generally occurs across four domains: institutional arrangements, including adequate resources and incentives; leadership; knowledge; and accountability mechanisms. Addressing only one of these levels or domains in a programme or project is unlikely to produce sustainable monitoring and evaluation capacities. An outcome group therefore needs to take a holistic view in identifying and addressing the capacities needed to monitor and evaluate the results being pursued.

The relevant sector-wide or outcome-level coordinating mechanism may begin by undertaking a high-level or preliminary capacity assessment to understand the level of existing and required monitoring and evaluation capacities of a given entity.27 Benchmarks for the three levels and four domains mentioned above are limited. However, the sub-sections below offer possible lines of questioning for the preliminary assessment. The insights generated by these questions and others may help a programme team formulate a capacity development response.

Institutional arrangements
  • Is there a documented institutional or sector programme monitoring and evaluation policy that clarifies the mandates of monitoring and evaluation entities and programme or project teams, their responsibilities, and accountability measures for effective data collection and data management of public programmes or projects?
  • Does the institutional and sector policy mandate require establishing standard tools and templates, aligning organizational data with national data collection and management, defining standards for monitoring and evaluation skills, and ensuring proper training?
  • Are sufficient resources, including availability of skilled staff and financial resources, allocated for monitoring and evaluation activities in respective monitoring and evaluation entities? Do monitoring staff have proper statistical and analytical skills to compile and analyse sample and snapshot data?
  • Is there an independent evaluation entity? Is the institution responsible for evaluation truly ‘independent’ from management and from those subject to evaluation? What is the reporting line of those responsible for carrying out evaluations? What mechanisms are there to safeguard the independence of the evaluation function?
Leadership
  • Does high-level management support evidence-based decision-making throughout the organization?
Knowledge
  • Can high-quality information be disaggregated by relevant factors (such as gender, age and geography) to assess progress and analyse performance?
  • Do the respective monitoring and evaluation entities have access to all relevant programme or project information to be gathered? Do the stakeholders have access to data collected and analysed (for example through the Internet)?
  • Do the monitoring and evaluation entities have easy-to-understand formats for data collection and reporting? Is there a systematic and documented process of ensuring data quality control at all levels of collection, analysis and aggregation?
  • Is there sufficient evaluation technical expertise in the national system? Are there national professional evaluation associations?
Accountability
  • Can the information from the monitoring and evaluation entities be provided to decision makers and other relevant stakeholders in a timely manner to enable evidence-based decision-making?

Based on the above considerations and the insights generated from a high-level capacity assessment, one of four broad approaches would be selected to meet the monitoring and evaluation requirements of the results being pursued (see Figure 12). This high-level capacity assessment may also lead to more in-depth capacity assessments for particular areas.

It may be important for the sector-wide or outcome group to document the analysis from Figure 12 in a simple capacity development matrix (see Table 17). This matrix can help determine what monitoring and evaluation facilities exist in national partner institutions that can be used and identify gaps. The last column could be used to indicate how capacity development efforts—including detailed capacity assessments—may be addressed through other UNDP programmatic support, when relevant national demand and need arise.

Table 17. Monitoring and evaluation capacity matrix

For each key partner or stakeholder of the outcome group contributing to the result, the matrix records: the specific component of the result or outcome with which the partner is directly associated; the partner’s existing M&E mechanisms and capacities (institutional arrangements, leadership, knowledge, accountability); potential areas for developing the partner’s M&E capacities in line with its mandate; and the recommended action for developing those capacities.

Elections Authority
  • Specific component of result: organizing progress reviews and field visits; collection and analysis of data; reporting
  • Existing M&E mechanisms and capacities: limited to the headquarters level only
  • Potential areas for developing M&E capacities: field monitoring, especially skills at the regional level to assess inclusion of disadvantaged groups and those in remote locations
  • Recommended action: initial capacity development support should focus on developing monitoring skills pertaining to achieving the outcome. Funds available within the outcome may also be used to carry out a capacity assessment for the Elections Authority.

National Office of Statistics
  • Specific component of result: all surveys will be completed by the National Office of Statistics
  • Existing M&E mechanisms and capacities: the National Office of Statistics is a key national institute that is expected to provide high-quality national surveys, analyses and reporting of findings
  • Potential areas for developing M&E capacities: capacity development of the National Office of Statistics is a national priority
  • Recommended action: the outcome group should promote a national effort to develop the capacity of the National Office of Statistics for conducting, analysing and reporting on surveys.

Monitoring and Evaluation Division, Ministry of Planning
  • Specific component of result: government unit responsible for monitoring and evaluating major development projects, coordinating sector-level monitoring and evaluation (including the election project) at the national outcome level, and building national capacity in monitoring and evaluation*
  • Existing M&E mechanisms and capacities: the Monitoring and Evaluation Division is politically independent and is staffed with civil servants competent in monitoring and evaluation
  • Potential areas for developing M&E capacities: the Monitoring and Evaluation Division has never worked directly with staff of the Elections Authority or the National Office of Statistics on monitoring and evaluation in this particular area, which is at high risk of becoming politicized
  • Recommended action: support the efforts of the Monitoring and Evaluation Division to train Elections Authority and National Office of Statistics staff on the development of specific indicators, baselines, targets and data collection methods for the work of the Elections Authority, and to promote a culture of evaluation within the Elections Authority.

* Units responsible for monitoring and evaluation of independent institutional bodies, such as the Monitoring and Evaluation Division, vary from country to country.