Annex 3. Evaluation terms of reference template and quality standards

The Terms of Reference (ToR) template is intended to help UNDP programme units create ToRs based on quality standards for evaluations consistent with the concepts and terms presented in this Handbook and the UNEG ‘Standards for Evaluation in the UN System’.62

The ToR should also explicitly state a requirement for the evaluation to assess the extent of UNDP commitment to the human development approach and how effectively gender equality and gender mainstreaming have been incorporated in the design and execution of the project or programme to be evaluated.

In terms of evaluation methodology, the ToR should retain enough flexibility for the evaluation team to determine the best methods and tools for collecting and analysing data. For example, the ToR might suggest using questionnaires, field visits and interviews, but the evaluation team should be able to revise the approach in consultation with key stakeholders, particularly the intended users and those affected by evaluation results (see Chapter 7 for more information on design issues).

The ToR should, at a minimum, cover the elements described below:

1. Background and context

The background section makes clear what is being evaluated and identifies the critical social, economic, political, geographic and demographic factors within which it operates that have a direct bearing on the evaluation. This description should be focused and concise (a maximum of one page), highlighting only the issues most pertinent to the evaluation. The key background and context descriptors that should be included are listed below:

  • Description of the intervention (outcome, programme, project, group of projects, themes, soft assistance) that is being evaluated.
  • The name of the intervention (e.g., project name), purpose and objectives, including when and how it was initiated, who it is intended to benefit and what outcomes or outputs it is intended to achieve, and the duration of the intervention and its implementation status within that time-frame.
  • The scale and complexity of the intervention, including, for example, the number of components, if more than one, and the size and description of the population each component is intended to serve, both directly and indirectly.
  • The geographic context and boundaries, such as the region, country, landscape and challenges where relevant.
  • Total resources required for the intervention from all sources, including human resources and budgets comprising UNDP, donor and other contributions.
  • Key partners involved in the intervention, including the implementing agencies and partners, other key stakeholders, and their interests, concerns and relevance for the evaluation.
  • Observed changes since the beginning of implementation and contributing factors.
  • How the subject fits into the partner government’s strategies and priorities; international, regional or country development goals; strategies and frameworks; UNDP corporate goals and priorities; and UNDP global, regional or country programmes, as appropriate.
  • Key features of the international, regional and national economy and economic policy that have relevance for the evaluation.
  • Description of how this evaluation fits within the context of other on-going and previous evaluations and the evaluation cycle.

More detailed background and context information (e.g., initial funding proposal, strategic plans, logic framework or theory of change, monitoring plans and indicators) should be included or referenced in annexes, via web links or other means.

2. Evaluation purpose

The purpose section of the ToR explains clearly why the evaluation is being conducted, who will use or act on the evaluation results, and how they will use or act on the results. The purpose should include some background and justification for why the evaluation is needed at this time and how the evaluation fits within the programme unit’s evaluation plan (see Chapter 3). A clear statement of purpose provides the foundation for a well-designed evaluation.

3. Evaluation scope and objectives

This section defines the parameters and focus of the evaluation. The section answers the following questions:

  • What aspects of the intervention are to be covered by the evaluation? This can include the time-frame, implementation phase, geographic area, and target groups to be considered, and as applicable, which projects (outputs) are to be included.
  • What are the primary issues of concern to users that the evaluation needs to address or objectives the evaluation must achieve?  

Issues relate directly to the questions the evaluation must answer so that users will have the information they need for pending decisions or action. An issue may concern the relevance, efficiency, effectiveness, or sustainability of the intervention. In addition, UNDP evaluations must address how the intervention sought to strengthen the application of the rights-based approach and mainstream gender in development efforts.

4. Evaluation questions

Evaluation questions define the information that the evaluation will generate. This section proposes the questions that, when answered, will give intended users of the evaluation the information they seek in order to make decisions, take action or add to knowledge. For example, outcome evaluation questions might include:

  • Were stated outcomes or outputs achieved?
  • What progress toward the outcomes has been made?
  • What factors have contributed to achieving or not achieving intended outcomes?
  • To what extent have UNDP outputs and assistance contributed to outcomes?
  • Has the UNDP partnership strategy been appropriate and effective?
  • What factors contributed to effectiveness or ineffectiveness?

Evaluation questions must be agreed upon among users and other stakeholders and accepted or refined in consultation with the evaluation team.

5. Methodology

The ToR may suggest an overall approach and method for conducting the evaluation, as well as data sources and tools that will likely yield the most reliable and valid answers to the evaluation questions within the limits of resources. However, final decisions about the specific design and methods for the evaluation should emerge from consultations among the programme unit, the evaluators, and key stakeholders about what is appropriate and feasible to meet the evaluation purpose and objectives and answer the evaluation questions, given limitations of budget, time and extant data.

For example, the ToR might describe in an annex:

  • Whether and how the evaluation was considered in the intervention design.
  • Details of the results framework and M&E framework, including outcome and output indicators and targets to measure performance and status of implementation, strengths and weaknesses of original M&E design, and the quality of data generated.
  • Availability of relevant global, regional and national data.
  • Lists and descriptions of key stakeholders (evaluation users, partner donors, staff of executing or other relevant agencies, subject beneficiaries, etc.) and their accessibility.

6. Evaluation products (deliverables)

This section describes the key evaluation products the evaluation team will be accountable for producing. At a minimum, these products should include:

  • Evaluation inception report—An inception report should be prepared by the evaluators before going into the full-fledged evaluation exercise. It should detail the evaluators’ understanding of what is being evaluated and why, showing how each evaluation question will be answered by way of: proposed methods; proposed sources of data; and data collection procedures. The inception report should include a proposed schedule of tasks, activities and deliverables, designating a team member with the lead responsibility for each task or product. The inception report provides the programme unit and the evaluators with an opportunity to verify that they share the same understanding about the evaluation and to clarify any misunderstandings at the outset.
  • Draft evaluation report—The programme unit and key stakeholders in the evaluation should review the draft evaluation report to ensure that the evaluation meets the required quality criteria (see Annex 7).
  • Final evaluation report.
  • Evaluation brief and other knowledge products or participation in knowledge sharing events, if relevant (see Chapter 8).

7. Evaluation team composition and required competencies

This section details the specific skills, competencies and characteristics needed in the evaluator or evaluation team specific to the evaluation and the expected structure and composition of the evaluation team, including roles and responsibilities of team members (see Table 23 on page 142 of the Handbook for more information).

The section should also specify the type of evidence (resumes, work samples, references) that will be expected to support claims of knowledge, skills and experience. The ToR should explicitly require evaluators’ independence from any organizations that have been involved in designing, executing or advising on any aspect of the intervention that is the subject of the evaluation.63

8. Evaluation ethics

The ToR should include an explicit statement that evaluations in UNDP will be conducted in accordance with the principles outlined in the UNEG ‘Ethical Guidelines for Evaluation’64 and should describe the critical issues evaluators must address in the design and implementation of the evaluation, including evaluation ethics and procedures to safeguard the rights and confidentiality of information providers. Examples include:

  • Measures to ensure compliance with legal codes governing the collection and reporting of data, particularly permissions needed to interview or obtain information about children and young people
  • Provisions for storing and maintaining the security of collected information
  • Protocols to ensure anonymity and confidentiality

9. Implementation arrangements

This section describes the organization and management structure for the evaluation and defines the roles, key responsibilities and lines of authority of all parties involved in the evaluation process. Implementation arrangements are intended to clarify expectations, eliminate ambiguities, and facilitate an efficient and effective evaluation process.

The section should describe the specific roles and responsibilities of the evaluators, including those of the members of the team, the Task Manager, the management of the commissioning programme unit and key stakeholders. The composition and expected roles and responsibilities of the Advisory Panel members or other quality assurance entities and their working arrangements should also be made explicit. In the case of a joint evaluation, the roles and responsibilities of participating agencies should be clarified. Issues to consider include: lines of authority; processes and responsibility for approving deliverables; and logistical considerations, such as how office space, supplies, equipment and materials will be provided.

10. Time-frame for the evaluation process

This section lists and describes all tasks and deliverables for which the evaluators or evaluation team will be responsible and accountable, as well as those involving the commissioning office (e.g., work plan, agreements, briefings, draft report, final report), indicating for each the due date or time-frame and who is responsible for its completion. At a minimum, the time breakdown for the following activities should be included:

  • Desk review
  • Briefings of evaluators
  • Finalizing the evaluation design and methods and preparing the detailed inception report
  • In-country evaluation mission (visits to the field, interviews, questionnaires)
  • Preparing the draft report
  • Stakeholder meeting and review of the draft report (for quality assurance)
  • Incorporating comments and finalizing the evaluation report

In addition, the evaluators may be expected to support UNDP efforts in knowledge sharing and dissemination (see Chapter 8). Required formats for the inception reports, evaluation reports and other deliverables should be included in the annexes of the ToR for the evaluation being commissioned. This section should also state the number of working days to be given to each member of the evaluation team and the period during which they will be engaged in the evaluation process (e.g., 30 working days over a period of three months).

11. Cost

This section should indicate the total dollar amount and other resources available for the evaluation (consultant fees, travel, subsistence allowance, etc.). This is not a detailed budget, but it should provide information sufficient for evaluators to propose an evaluation design that is feasible within the limits of available time and resources. If the available amount is not sufficient to ensure high-quality evaluation products, discussions can take place between the evaluators and the commissioning unit early in the process.

12. ToR annexes

Annexes can be used to provide additional detail about evaluation background and requirements to facilitate the work of evaluators. Some examples include:

  • Intervention Results Framework and Theory of Change—Provides more detailed information on the intervention being evaluated.
  • Key stakeholders and partners—A list of key stakeholders and other individuals who should be consulted, together with an indication of their affiliation and relevance for the evaluation and their contact information. This annex can also suggest sites to be visited. 
  • Documents to be consulted—A list of important documents and webpages that the evaluators should read at the outset of the evaluation and before finalizing the evaluation design and the inception report. This should be limited to the critical information that the evaluation team needs. Data sources and documents may include:
    • Relevant national strategy documents
    • Strategic and other planning documents (e.g., programme and project documents)
    • Monitoring plans and indicators
    • Partnership arrangements (e.g., agreements of cooperation with governments or partners)
    • Previous evaluations and assessments
    • UNDP evaluation policy, UNEG norms and standards, and other policy documents
    • Required format for the inception report
  • Evaluation matrix (suggested as a deliverable to be included in the inception report)—The evaluation matrix is a tool that evaluators create as a map and reference in planning and conducting an evaluation. It also serves as a useful tool for summarizing and visually presenting the evaluation design and methodology for discussions with stakeholders. It details the evaluation questions that the evaluation will answer, data sources, data collection and analysis tools or methods appropriate for each data source, and the standard or measure by which each question will be evaluated. (See Table A.)
Table A. Sample evaluation matrix

Relevant evaluation criteria | Key questions | Specific sub-questions | Data sources | Data collection methods/tools | Indicators/success standard | Methods for data analysis

(The body of the matrix is left blank for the evaluation team to complete.)

  • Schedule of tasks, milestones and deliverables—Based on the time-frame presented in the ToR, the evaluators present the detailed schedule.
  • Required format for the evaluation report—The final report must include, but not necessarily be limited to, the elements outlined in the quality criteria for evaluation reports (see Annex 7).
  • Code of conduct—UNDP programme units should request each member of the evaluation team to read carefully, understand and sign the ‘Code of Conduct for Evaluators in the UN System’, which may be made available as an attachment to the evaluation report.