4.3 Monitor: Collection of data, analysis and reporting

Scope of monitoring

Monitoring aims to identify progress towards results, to prompt decisions that would increase the likelihood of achieving results, and to enhance accountability and learning. All monitoring efforts should, at a minimum, address the following:

  • Progress towards outcomes—This entails periodically analysing the extent to which intended outcomes have actually been achieved or are being achieved.
  • Factors contributing to or impeding achievement of the outcomes—This necessitates monitoring the country context and the economic, sociological, political and other developments simultaneously taking place and is closely linked to risk management.
  • Individual partner contributions to the outcomes through outputs—These outputs may be generated by programmes, projects, policy advice, advocacy and other activities. Monitoring and evaluating them entails analysing whether outputs are being delivered as planned and whether they are contributing to the outcome.
  • Partnership strategy—This requires the review of current partnership strategies and their functioning as well as formation of new partnerships as needed. This helps to ensure that partners who are concerned with an outcome have a common appreciation of problems and needs, and that they share a synchronized strategy.
  • Lessons being learned and creation of knowledge products for wider sharing.

Partners may add elements where needed for management or analysis, while keeping the scope realistic in view of available capacities. Monitoring usually provides raw data that requires further analysis and synthesis prior to reporting for decision-making. Using information gained through monitoring, programme managers must analyse programme and project activities and take action to ensure that the intended results—those in the agreed results and resources frameworks—are being achieved. Managers of programmes also monitor and document the contributions of soft development initiatives and strategic partnerships.

Prioritizing monitoring

In practice, it is necessary to prioritize monitoring. Two factors can help assign monitoring priority: the criticality of a UNDP contribution to the attainment of the overall result, and the severity of the risks it faces. As criticality and risk severity change, the priority attached to monitoring an initiative changes accordingly.

The criticality of a UNDP project or initiative is considered high when: it is linked to a time-bound, high national priority; the achievement of planned results relies critically on relevant UNDP comparative strengths, expertise and competencies; or it involves a critical UNDP coordination role entrusted by the government and other partners.

Risks are initially identified in the results frameworks with their potential impacts. However, during programme and project implementation, additional risks may arise from a changing operational environment (such as a crisis) that may have to be factored in when prioritizing monitoring.

Based on the two criteria of criticality and risk, as indicated in Figure 15, it is possible to define four broad categories for assigning monitoring priority. It is also possible to identify which of the two aspects should be followed more closely.
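
The two-by-two logic described above can be sketched in code. This is an illustrative sketch only: the category descriptions below are assumptions for demonstration, not quoted from Figure 15.

```python
# Hypothetical sketch of the criticality/risk monitoring-priority grid.
# The labels below are illustrative assumptions, not official UNDP categories.

def monitoring_priority(criticality: str, risk: str) -> str:
    """Map criticality and risk severity ('high'/'low') to a monitoring focus."""
    grid = {
        ("high", "high"): "highest priority: follow both results and risks closely",
        ("high", "low"):  "follow progress towards results closely",
        ("low", "high"):  "follow risks closely",
        ("low", "low"):   "routine monitoring",
    }
    return grid[(criticality.lower(), risk.lower())]

print(monitoring_priority("high", "low"))  # → follow progress towards results closely
```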

Monitoring in crisis settings

Standard processes of planning, monitoring and evaluation that apply in ‘normal’ developmental contexts need to be modified in order to be sensitive to crisis situations. In crisis contexts, monitoring approaches and processes should include:

  • Reference in the M&E framework to conflict-sensitive measures that need to be considered in implementing monitoring actions. These actions should flow from the situation analysis that applies to a given programme or project.
  • Monitoring should continually feed back into the conflict analysis—and the big picture—to ensure that understanding of the crisis remains up to date. Monitoring should also inform any changes that may be required to results maps.
  • Crisis situations are normally very fluid. Therefore, monitoring actions should be sensitive to changing circumstances. For example, monitoring schedules and data gathering methods may require frequent review and changes.
  • Take additional measures to make monitoring processes inclusive of the most vulnerable groups. Interviews, field visits, documents consulted, and all information gathered should be triangulated as much as possible to prevent bias. Furthermore, officials should be consulted regularly to ensure their ownership of results as well as to maintain credibility and balance in monitoring.
  • Monitoring can help address intra-group disparities—particularly gender-related disparities—that can result from development initiatives. This applies particularly to vulnerable groups, such as internally displaced people, minorities and indigenous groups. Particular attention should be paid to disaggregating monitoring data by sex, age, location and so forth in order to ensure that programming initiatives serve the well-being of marginalized people, especially women, youth and the elderly.
  • Capacity development for monitoring should be pursued even in crisis situations. However, it is necessary to execute monitoring, even if desired capacity development efforts fall behind the planned targets.
  • If direct monitoring of projects in crisis situations is difficult or impossible, capacity development of local partners and civil society organizations for monitoring should be given serious consideration. Where project staff cannot conduct regular field visits, monitoring should still be done using secondary information from credible informants. However, use of such methods should be clearly stated in reporting data, without necessarily disclosing informants’ identities as that may place them at risk.
  • Monitoring should also factor in security risks and build adequate safeguards and resources to manage such risks.
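
As a simple illustration of the disaggregation point above, monitoring records can be tallied by sex, age group and location with standard-library tools. The records here are invented for demonstration:

```python
# Illustrative only: invented monitoring records, disaggregated with Counter.
from collections import Counter

records = [
    {"sex": "F", "age_group": "youth",   "location": "district A"},
    {"sex": "M", "age_group": "adult",   "location": "district A"},
    {"sex": "F", "age_group": "elderly", "location": "district B"},
    {"sex": "F", "age_group": "youth",   "location": "district B"},
]

# Tally the people reached along each dimension of interest.
by_sex = Counter(r["sex"] for r in records)
by_age = Counter(r["age_group"] for r in records)
by_location = Counter(r["location"] for r in records)

print(by_sex)  # → Counter({'F': 3, 'M': 1})
```
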

Selecting the monitoring approach and tools

There is a range of approaches and tools that may be applied to monitoring projects, programmes, outcomes and any other programmatic activity. Those who manage programmes and projects must determine the correct mix of monitoring tools and approaches for each project, programme or outcome, ensuring that the monitoring contains an appropriate balance between:

  • Data and analysis—This entails obtaining and analysing documentation from projects that provides information on progress.
  • Validation—This entails checking or verifying whether or not the reported progress is accurate.
  • Participation—This entails obtaining feedback from partners and beneficiaries on progress and proposed actions.

Table 19 lists a variety of common monitoring tools and mechanisms, divided into three categories according to their predominant characteristic.

Table 19. Selecting the right mix of monitoring mechanisms


Data and Analysis
  • M&E framework
  • AWPs
  • Progress and quarterly reports on achievement of outputs
  • Annual Project Report
  • Project delivery reports and combined delivery reports
  • Substantive or technical documents: MDG Reports, National Human Development Reports, Human Development Reports
  • Progress towards achieving outcomes and Standard Progress Reports on outcomes

Validation
  • Field visits
  • Spot-checks
  • Reviews and assessments by other partners
  • Client surveys
  • Evaluations
  • Reviews and studies

Participation
  • Sectoral and outcome groups and mechanisms
  • Steering committees and mechanisms
  • Stakeholder meetings
  • Focus group meetings
  • Annual review

Learning takes place through all monitoring tools and mechanisms.

It is not realistic to expect that any one monitoring tool or mechanism will satisfy all needs. Different stakeholders may use different tools, or may use the same tools differently. For partners who are actively involved in managing for results, monitoring and data gathering begin at the project level. The most common tools and events used by partners for systematic project-level monitoring, data gathering and reporting are AWPs, field visits and Annual Project Reports (APRs). Monitoring of outcomes typically requires a different mix of tools than those traditionally used at the project level. Instruments such as project visits or bilateral meetings may be insufficient because the scope of a given project is too narrow or the range of partners involved is too limited. Instead, more useful tools may include reviews by outcome groups, analyses and surveys. (Further information on such tools is available in Chapters 5 through 8.)

Annual work plans (AWPs)

AWPs detail the activities to be carried out by a programme or project—including who is responsible for what, time-frames, planned inputs and funding sources—in order to generate outputs in relation to the outcome. AWPs also serve as good references for monitoring progress later in the year. AWPs and their accompanying monitoring tools are therefore among the most important monitoring instruments, especially for programmes and projects, which are normally multi-year and multi-partner efforts. In order to plan, manage and monitor a programme for a given period (typically a calendar year), most partners—including UNDP—use AWPs. There are numerous formats and ways to prepare AWPs. Usually AWPs are produced at the beginning of the year as a planning tool, and separate monitoring versions are prepared later in the year. One possible AWP format, which has the advantage of combining both annual planning and reporting elements, is given in Table 20. All information except the last two columns should be provided at the beginning of the year; the last two columns should be completed at the end of the year.

Table 20. Example of an Annual Work Plan format with monitoring component


Expected Outputs | Planned Activities | Responsible Party | Funding Source | Budget Description | Monitoring Framework: Expenditures | Monitoring Framework: Progress Towards Outputs
Output 1 | | | | | |
Output 2 | | | | | |
Output 3 | | | | | |
Status of progress to target contribution to country programme outcome

(The template's body cells are left blank for completion during annual planning and review.)

  1. The above is only illustrative. It may be adapted for practical use as appropriate.
  2. The format is based on the UNDG AWP format and its related monitoring tool (currently used as two separate formats).
  3. Outputs in column 1 should also give baselines, associated indicators and annual targets, as applicable.
  4. All activities, including monitoring and evaluation activities, to be undertaken during the year towards the stated outputs must be included in the Activities column.
  5. Actual expenditures against completed activities should be given in the Expenditures column.
  6. The last column should be completed using data on annual indicator targets to state progress towards achieving the outputs. Where relevant, comment on factors that facilitated or constrained achievement of results, including: whether risks and assumptions identified in the country programme M&E framework materialized or new risks emerged; and internal factors such as timing of inputs and activities, quality of products and services, coordination and other management issues.
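
To make the mechanics of Table 20 concrete, an AWP row can be represented as a simple record whose two monitoring fields are filled in at year's end. The field names below mirror the table's columns but are an illustrative assumption, not an official schema, and all figures are invented:

```python
# Hypothetical sketch of one AWP row; names and amounts are invented.
from dataclasses import dataclass
from typing import List

@dataclass
class AWPRow:
    expected_output: str               # should include baseline, indicators, annual targets
    planned_activities: List[str]
    responsible_party: str
    funding_source: str
    budget_description: str
    planned_budget: float
    expenditures: float = 0.0          # completed at the end of the year
    progress_towards_output: str = ""  # completed at the end of the year

    def budget_utilization(self) -> float:
        """Share of the planned budget actually spent."""
        return self.expenditures / self.planned_budget if self.planned_budget else 0.0

row = AWPRow("Output 1", ["Baseline survey", "Training of trainers"],
             "Implementing partner X", "Donor A", "Workshops and travel",
             planned_budget=100_000.0)
row.expenditures = 85_000.0
row.progress_towards_output = "Annual indicator targets largely met"
print(f"{row.budget_utilization():.0%}")  # → 85%
```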

The project manager who is responsible for delivering the outputs should prepare the AWP. Depending on the complexity and nature of the results being pursued, the AWP preparation could be a collective effort. The institution managing the project ensures the interface between the desired results and the expectations of the target beneficiaries, thus promoting a sense of ownership among all partners. Project management should also contribute to developing the required partnerships among the partners through the AWP preparation process.

AWPs have multiple uses in monitoring:

  • To understand the contributions and targets set and agreed by the partners for the year to achieve a planned result in a transparent way
  • To review ongoing progress against the plan and identify bottlenecks
  • To use as a basis for reporting at the end of the year (annual report) and planning future work

Field visits

Field visits are essential for any field-based project. Field visits should be planned well in order to be of maximum use. The following considerations may help plan an effective field visit.

  • What is the purpose of the visit in terms of monitoring?—Field visits serve the purpose of validation: they verify the results reported by programmes and projects. They are of particular importance for large, key programmes and projects that are essential to outcomes. They involve an assessment of progress, results and problems and may also include visits to the project management or directorate.
  • Timing—A field visit may take place at any time of the year. If undertaken in the first half of the year, just after the annual review, it may be oriented towards the validation of results. If undertaken in the latter part of the year, the field visit should provide the latest information on progress to feed into annual and outcome review processes. Field visit reports should be action-oriented and brief, and submitted within a week of return to the office to the members of the respective Project Board, Programme Board and Outcome Group for consideration and appropriate action if required.
  • Who should participate and be involved?—Visits are increasingly joint monitoring efforts of several partners working on a cluster of programmes and projects targeting an outcome or result. Joint visits also support ownership of the results. A team of staff from one or more partners may visit projects that contribute to one particular outcome, or projects in a specific geographical area addressing a specific development condition, for example displaced persons, post-natural-disaster recovery or a vulnerable community. Such joint efforts are often an efficient way to obtain a comprehensive overview of progress. In planning such visits, it is important to focus on the specific issues to be addressed and to ensure that relevant national partners and beneficiaries are available and participate as required.
  • Dialogue and consultations—The emphasis should be on observing and ascertaining credible information on progress being made towards the attainment of results—outputs and outcomes—as well as their quality and sustainability. Those undertaking the field visit should discern other initiatives, for example soft assistance or gaps in strategy, that may need to be addressed. Field visits should not be used for lengthy discussions on detailed implementation issues. Such issues, if raised during field visits, may be noted for discussion with relevant partners who can resolve them.
  • Findings of field visits—These should be forwarded to appropriate partners and stakeholders for effective action. A format for field visit reports is given in Annex 2.

Box 22. UNDP policy on field visits and good implementation practice

A representative from the UNDP country office must visit each programme and project contributing to results in the CPD and CPAP at least once a year. Field visits may be undertaken by the Programme Manager, Policy Adviser or a team from the country office (particularly when dealing with a complex outcome). The Resident Representative and other country office management staff are also encouraged to undertake field visits.

Annual Project Report (APR)

The APR is a self-assessment by the project management that serves as the basis for assessing the performance of programmes and projects in terms of their contributions to intended outcomes through outputs. The APR should provide an accurate update on project results, identify major constraints and propose future directions. As a self-assessment report by project management to the country office, it can be used to spur dialogue with partners.

Content, format and preparation of the APR

The basic APR should reflect the assessment of the AWP, discussed earlier. The APR is a report from the project to other stakeholders through the board or steering committee. APRs should be objective and may reflect views not agreed to by all stakeholders. The APR should be brief and contain the basic minimum elements required for the assessment of results, major problems and proposed actions. These elements include:

  • An analysis of project performance over the reporting period, including outputs produced and, where possible, information on the status of the outcome
  • Constraints in progress towards results, that is, issues, risks and reasons behind the constraints
  • Lessons learned and indications of how these will be incorporated
  • Clear recommendations for the future approach to addressing the main challenges

Beyond the minimum content, additional elements may be added as required by the project management or other partners. In the spirit of the principles of harmonization and simplification, the partners should agree on harmonized reporting formats (to the extent possible) to eliminate multiple reports and minimize work. From a monitoring perspective, it is critical for the APR to flow from the AWP and for it to serve the objectives of the overall M&E framework and hence the achievement of the planned results.

The project management is responsible for preparing and circulating the APR. The APR is prepared by project staff with specific attention to outputs and is considered by donors, other partners and stakeholders. Since project staff members are often experts in their fields, monitoring at the project level may also entail some expert assessment of the status of progress towards the achievement of the outcome.

The person responsible for project assurance (see Box 23) should review and make observations on the validity, reliability and quality of monitoring data collected and compiled by the project.

Box 23. Assurance role
UNDP has introduced the concept of programme and project assurance, which, inter alia, enhances the quality of monitoring. Managers of projects and programmes have the primary responsibility for ensuring that monitoring data are accurate and of high quality. The assurance role is additional and is part of the responsibility of the programme and project board, as referred to in Box 20 in Chapter 3. It is normally delegated to a UNDP staff member who is not directly involved in the management of the project or programme. Typically, the programme assurance role is assigned to the M&E Focal Point in the office, and the project assurance role to a Programme Officer. The assurance function operates during all stages of formulation, implementation and closure of projects and programmes. With regard to monitoring, the assurance role performs the following functions:

  • Ensure adherence to monitoring and reporting requirements and standards
  • Ensure that project results elements are clear and captured in management information systems to facilitate monitoring and reporting
  • Ensure that high-quality periodic progress reports are prepared and submitted
  • Perform oversight activities, such as periodic monitoring visits and ‘spot checks’
  • Ensure that decisions of the project and programme board and steering committee are followed and changes are managed in line with the required procedures

Use of the APR

The APR is part of oversight and monitoring of projects and a key building block of the annual review. Normally, it also feeds into the annual reporting by donor partners on the results that they support. Once the APR has been prepared and distributed, the next step is to hold consultations, which may take place at the project board or steering committee, or through written observations from partners. Depending on its content and approach, the APR can be used for the following:
  • Performance assessment—When using mechanisms such as outcome boards, groups or steering committees to review project performance, the APR may provide a basis for consensus-building and joint decision-making on recommendations for future courses of action. Key elements of the APR are fed into higher levels of reviews, for example the UNDAF annual review, sectoral reviews and reviews of national development results and plans. The APR should be used as a basis for feedback on project performance.
  • Learning—The APR should provide information on what went right or what went wrong, and the factors contributing to success or failure. This should feed into the annual review, learning and practitioners networks, repositories of knowledge and evaluations. It is recommended that the APR of the final year of the project include specific sections on lessons learned and planning for sustainability (exit strategy). APRs may address the main lessons learned in terms of best and worst practices, the likelihood of success, and recommendations for follow-up actions where necessary. APRs may also be used to share results and problems with beneficiaries, partners and stakeholders and to solicit their feedback.
  • Decision-making—The partners may use the APR for planning future actions and implementation strategies, tracking progress in achieving outputs, approaching ‘soft assistance’, and developing partnerships and alliances. The APR allows the project board, steering committee and partners to seek solutions to the major constraints to achievement of the planned results. As a result of this consultative process, necessary modifications could be made to the overall project design and to the corresponding overall results frameworks in the planning documents.

Joint monitoring

Monitoring of development results cannot be carried out in isolation or on an ad hoc basis. Whenever possible, monitoring should be carried out as a joint or collaborative effort among key stakeholders. Primary stakeholders—including multiple UN organizations working towards a given result, as well as representatives of identified beneficiary groups and key national partners—should be involved to the extent possible. Such joint monitoring should also manifest itself in joint field visits. Ideally, joint monitoring should be organized and coordinated through the national outcome groups or sector-wide mechanisms. Joint monitoring should lead to joint analysis and decisions, for example agreements reached formally at annual reviews.

Where national institution-led joint monitoring is constrained, the UNCT could form inter-agency groups around each UNDAF outcome. These groups would use the results matrix and M&E framework as the basis for joint monitoring with relevant programme partners. Results of such monitoring should be used to report to the UNCT about progress and for joint analysis. These UNDAF outcome groups should augment any monitoring information that could be generated by UN organizations and partners separately.

In practical terms, joint monitoring would involve the following:

  • Meeting regularly with partners to assess progress towards results already stated in the M&E framework and sharing information gathered by one or more partners
  • Planning and conducting joint field monitoring missions to gauge achievements and constraints
  • Identifying lessons or good practices, sharing them, promoting their use by partners and developing knowledge products
  • Identifying capacity development needs among partners, particularly related to data collection, analysis, monitoring and reporting
  • Reporting regularly to the respective stakeholders and steering committee or board
  • Bringing lessons and good practices to the attention of policy makers
  • Contributing to common annual progress reports for consideration at outcome level reviews and annual reviews

Tip: Start thinking about monitoring data and the capacities needed for monitoring early in the programme planning process. It may be too late to think about them during implementation stages.

Obtaining reliable data and information for monitoring

Monitoring is part of a comprehensive programming continuum that starts with an in-depth analysis of the development situation. Normally, this analytical phase that precedes planning provides early insights into monitoring considerations. For example, the availability and quality of the data needed to analyse and develop a new programme or project indicate the scope and possibilities for using existing capacities and resources for monitoring. They also indicate critical gaps that may need to be addressed to ensure effective monitoring in the future. Recognizing this opportunity during the analytical phase can therefore help ensure effective monitoring later in the programme cycle.

Ideally, monitoring data should originate from or be collected through national sources. However, this depends on the availability and quality of data from those sources. In an increasing number of countries, analytical data come from national development information systems, which are also repositories of important monitoring data and information. External partners should identify and build on the data and systems that already exist in the country. Specific attention should be given to establishing baselines, identifying trends and data gaps, and highlighting constraints in country statistical and monitoring systems. Many UNDP country offices have assisted in setting up data collection systems. Some examples are given in Box 24.

Box 24. Good practices of data collection supported by UNDP

  1. UNDP Pakistan has successfully supported a data collection system called the Participatory Information System under one of its institutional and capacity development projects in Balochistan Province. The system has two prominent features: the community collects household and services information through Community Information Committees, which are composed of community members; and the system provides communities with a graphical view of their social and economic status, helps planners and service providers fill service gaps, and improves existing services. The type of information collected facilitates monitoring progress towards the achievement of the MDGs.
  2. The first ‘Atlas of Human Development in Brazil’, launched in 1998, pioneered calculation of the human development index at the municipal level. For the first time, the human development index and its components were calculated for all the municipalities of a country (Brazil had 4,491 municipalities at the time). In 2003, a new edition of the Atlas (available only in Portuguese) was released, using data from the 2000 Demographic Census. This can be downloaded from http://www.pnud.org.br/atlas/ by clicking on the link “Clique aqui para instalar o Atlas de Desenvolvimento Humano no Brasil em seu computador.” (Translation: “Click here to install the Atlas of Human Development in Brazil on your computer.”)
  3. The Atlas allows a multi-dimensional approach to human development measurement, since it provides a host of indicators on access to basic services, educational attainment, social vulnerability, and other themes. Special geo-referenced software was developed to allow for easy manipulation of the database, which in the current version comprises 200+ indicators for the 5,500+ Brazilian municipalities. The software has features to perform elaborate queries, create thematic maps, and generate fact sheets, and some simple statistical functions (such as creation of histograms, correlation plots and descriptive statistics). The software played a key role in the Atlas' success, allowing non-statistically trained people to make their own analyses.

In addition, UNDG can provide support related to DevInfo, a database system for monitoring human development. It is a tool for organizing, storing and presenting data in a uniform way to facilitate data sharing at the country level across government departments, UN organizations and development partners. In 2004, the UNDG endorsed the use of DevInfo to assist countries in monitoring achievement of the MDGs. At present, more than 100 countries use DevInfo as a platform to develop a national socio-economic database. More than 80 national statistics organizations and other agencies have officially launched and adapted the DevInfo database to their user-specified requirements. The software is available royalty-free, and a DevInfo Support Group provides technical assistance to countries and supports national capacity development efforts.

Arrangements and formats for reporting results should be agreed upon in advance in order to meet the needs of partners. Where possible, a common monitoring format should be adopted by all partners in order to minimize the workload, especially for national partners, and to meet the commitments of simplification and harmonization agreed upon in international forums.

UN organizations have developed several harmonized reporting formats. They include:

  1. A format for AWPs with a monitoring framework, which could be used to report at project level (discussed in Table 20)
  2. The Standard Progress Report format, used by several UN organizations for progress and donor reporting, which shows how resources were used and the results that were achieved. This could be used at the outcome level. It is linked to other standard formats used by UN organizations, such as the AWP, CPAP, CPD and UNDAF results matrix.

The above form a good basis for adopting common reporting formats. They can also be adapted by partners to meet specific requirements.