6.2 Key steps in decentralized evaluations

The process for decentralized evaluations, commissioned by programme units, includes the following key steps (see Figure 16 and the checklist on page 150).

Box 33. Steps in decentralized evaluations

Step 1: Pre-evaluation: Initiating the evaluation process

  • Checking the ‘evaluability,’ or readiness, for evaluation

Tools: Evaluation plan template (Chapter 3)

Step 2: Preparation 

  • Agreeing on the management structure of an evaluation and roles and responsibilities
  • Drafting the ToR
  • Organizing the relevant documentation
  • Selecting the evaluation team

Tools: Template and quality criteria for ToR (Annex 3), selection criteria for evaluators (Annex 5)

Step 3: Managing the conduct of evaluation (while external evaluators conduct evaluation)

  • Briefing and supporting the evaluation team
  • Reviewing the inception report prepared by the evaluation team
  • Reviewing the draft evaluation report

Tools: Template and quality criteria for evaluation reports (Annex 3)

Step 4: Using the evaluation: management response, knowledge sharing and dissemination

  • Preparing the management response and implementing follow-up actions
  • Preparing and disseminating evaluation products and organizing knowledge sharing events
  • Reviewing evaluations prior to new planning processes
Tools: Management response template (Annex 6), practical steps for developing knowledge products and dissemination

Step 1: Pre-evaluation: Initiating the evaluation process

Checking the evaluability, or readiness, for evaluation

Before formally initiating an evaluation process, UNDP programme units and stakeholders who were involved in the development of an evaluation plan (see Chapter 3) should assess whether the subject of evaluation is ready for evaluation. This entails determining whether the proposed evaluation is: still relevant and feasible as planned, designed to be complementary to the previous analysis, and likely to add value to existing information and other planned and future evaluations by government and other partners.
Further, UNDP programme units and stakeholders should review the results matrix, which forms the basis of evaluations. Since the model was completed at the planning stage (see Chapter 2), there may have been changes in the development context or partnership strategy during implementation. Therefore, before the evaluation is formally commissioned, programme units and key partners and stakeholders may revise and update the model, adding emerging information that reflects the changes that have occurred over the life of the initiative. The results map should be updated throughout the life of the programme, as it helps evaluators and others understand the outcome, the changes that have occurred, and the factors understood to contribute to outcomes.

The checklist below is intended to help UNDP programme units and stakeholders determine the degree of readiness for evaluation.




  • Does the subject of evaluation have a clearly defined results map? Is there a common understanding as to what initiatives will be subject to evaluation?
  • Is there a well-defined results framework for the initiative(s) that are subject to evaluation? Are goals, outcome statements, outputs, inputs and activities clearly defined? Are indicators SMART?
  • Is there sufficient capacity for the initiative(s) to provide required data for the evaluation? For example, is there baseline data? Is there sufficient data collected from monitoring against a set of targets? Are there well-documented progress reports, field visit reports, reviews and previous evaluations?
  • Is the planned evaluation still relevant, given the evolving context? In other words, is there still a demand for the evaluation? Is the purpose of the evaluation clearly defined and commonly shared amongst stakeholders?
  • Will political, social and economic factors allow for an effective conduct and use of the evaluation as envisaged?
  • Are there sufficient resources (human and financial) allocated to the evaluation?



If the political and socio-economic situation does not allow the team to carry out an evaluation in a meaningful manner, UNDP management, together with national stakeholders, may decide to wait until an environment conducive to evaluation is secured. In conflict settings, such a decision should be based on good and current analyses of the setting, so that the evaluation remains relevant to fast-changing crisis situations. Factors such as the security situation (the safety of evaluators, the UNDP staff involved and interviewees) and the potential impact of the evaluation on existing tensions should be carefully assessed.

If the results map or the results framework needs improvement, UNDP may organize a session with relevant stakeholders to review and clearly articulate the intended outcomes, outputs and indicators, and may initiate a quick exercise to gather primary data through surveys and a desk review. This also presents an opportunity to establish baselines that may not have been available at the time of planning.
If a decision to carry out an evaluation is taken, all parties concerned should be informed of the decision to ensure buy-in, credibility and transparency of the evaluation. In conflict settings, getting the correct officials involved, visited and acknowledged at the outset of the evaluation process is critical to ensure ownership of the future process.

Step 2: Preparation
Agreeing on the management structure of an evaluation and roles and responsibilities

There should be a clearly defined organization and management structure for an evaluation and established roles and responsibilities for key players. Table 26 outlines key roles and responsibilities of the commissioner of the evaluation (UNDP), partners, evaluators and stakeholders in the evaluation process and Figure 17 shows the management structure.

Table 26. Key roles and responsibilities in the evaluation process

Person or Organization

Roles and Responsibilities

Commissioner of the Evaluation (UNDP)

  • Determine which outcomes will be evaluated, when, and to what level
  • Provide clear advice to the evaluation manager at the onset on how the findings will be used
  • Respond to the evaluation by preparing a management response and use the findings as appropriate
  • Take responsibility for learning across evaluations on various content areas and about evaluation
  • Safeguard the independence of the exercise
  • Allocate adequate funding and human resources

Co-Commissioner of the Evaluation (In the case of joint evaluations, governments, other UN organizations, development partners, etc.)

Same as Commissioner

Evaluation Manager appointed by the commissioner and partners; often a UNDP Programme Officer or an M&E specialist, when available

  • Lead the development of the evaluation ToR
  • Manage the selection and recruitment of the external evaluators
  • Manage the contractual arrangements, the budget and the personnel involved in the evaluation
  • Provide executive and coordination support to the reference group
  • Provide the evaluators with administrative support and required data
  • Liaise with and respond to the Commissioners and Co-commissioners
  • Connect the evaluation team with the wider programme unit, senior management and key evaluation stakeholders, and ensure a fully inclusive and transparent approach to the evaluation
  • Review the inception report and the draft evaluation report(s); ensure the final draft meets quality standards

Representatives of the Stakeholders, including beneficiaries who make up the Reference Group

  • Define or confirm the profile, competencies, and roles and responsibilities of the evaluation manager (and, particularly in a joint evaluation, of the co-task manager, if applicable), and review and clear candidates submitted for this role
  • Participate in the drafting and review of the draft ToR
  • Assist in collecting required data
  • Oversee progress and conduct of the evaluation
  • Review the draft evaluation report and ensure final draft meets quality standards

Evaluation Team (Consultants)

  • Fulfill the contractual arrangements in line with the UNEG norms and standards and ethical guidelines; this includes developing an evaluation matrix as part of the inception report, drafting reports and briefing the Commissioner and stakeholders on the progress and key findings and recommendations, as needed

Quality Assurance Panel Members, external to the evaluation exercise and can be M&E advisers in the regional centres, bureaux or national evaluation experts (see Annex 4 for the list of national evaluation associations)

  • Review documents as required and provide advice on the quality of the evaluation and options for improvement, even if these can only be applied to future evaluations
  • Be a critical friend

UNDP and evaluation stakeholders should appoint an evaluation manager, who will assume day-to-day responsibility for managing the evaluation and serve as a central person connecting the other key players. Whenever available, an evaluation or M&E specialist in the programme unit should assume this role, to enhance the independence of the exercise from those directly responsible for the subject of the evaluation. To ensure a substantive linkage between the programme or project being evaluated and the evaluation exercise, the designated manager should work closely with relevant programme or project staff. In the absence of a specialist, a UNDP Programme Officer may assume this role.

National ownership means that key partners and stakeholders must play an integral part in the evaluation process from the outset. For every evaluation, there should be a reference group comprised of key stakeholders that works closely with the evaluation manager to guide the process. In most UNDP-managed programmes and projects, there is already an existing mechanism and structure to ensure an adequate level of engagement and ownership by national stakeholders and partners. If such an entity—for example, a steering group; a programme, outcome or project board; or a thematic group—already exists, members of such boards and additional key stakeholders for a particular evaluation can constitute the group of evaluation stakeholders, that is, the reference group. As long as an existing structure allows for an adequate level of stakeholder participation throughout the evaluation process, there is no need to create a new one. If such a structure does not exist, a mapping exercise should be carried out to identify key stakeholders for the evaluation. In crisis settings, a formal functional structure is unlikely to exist. When creating one in such circumstances, it is important to ensure that representation is balanced, so that no one group dominates the structure, which could heighten existing tensions amongst different groups of people or individuals.

For each evaluation, there should also be a mechanism for assuring the quality of the process and outputs of evaluation, such as ToRs and evaluation reports. Senior managers of UNDP programme units are ultimately responsible and accountable for the quality of the evaluation process and products. Relevant expertise may be drawn from evaluation advisers in UNDP regional centres and within the UN system in the country or neighboring countries, and in regional and national evaluation associations and research institutions (see Annex 4 for a list of these).

Drafting the Terms of Reference (ToR)

The ToR defines the scope, requirements and expectations of the evaluation and serves as a guide and point of reference throughout the evaluation. While the initial draft of the ToR is usually the responsibility of the commissioning office, an evaluation ToR should be developed in consultation with key stakeholders and evaluation partners to ensure that their key concerns are addressed and that the essential audience for the evaluation will view the evaluation results as valid and useful. Regional evaluation advisers and others with necessary expertise may comment on the draft ToR to ensure it meets the corporate quality standards.
A quality ToR should be explicit and focused, providing a clear mandate for the evaluation team about what is being evaluated and why, who should be involved in the evaluation process, and the expected outputs. Each ToR should be unique to the particular circumstances and purposes of the evaluation. Since the ToR plays a critical role in establishing the quality criteria and use of the evaluation report, adequate time should be allocated to this exercise. Further guidance is available in Chapter 7 and a template is provided in Annex 3.

The outcome, project, thematic area or other initiative selected for evaluation, along with the timing, purpose, duration and scope of the evaluation, will dictate much of the substance of the ToR. However, because an evaluation cannot address all issues, developing the ToR involves strategic choices about the specific focus, parameters and outputs for the evaluation within available resources.

Organizing the relevant documentation

Once the scope of an evaluation has been defined, the evaluation manager, with help from the key stakeholders, starts to gather basic documentation that will be provided to the evaluation team. Preliminary deskwork may be carried out to gather information on activities and outputs of partners, previous UNDP-related assistance, and the current situation of the project, programme or outcome. Table 27 presents different sources of information that may be useful for an evaluation team.

Table 27. Sources of information for an evaluation team

Sources of Information

Description of Information

Country, regional and global programme results frameworks

These address the key outcomes that UNDP plans to achieve in a three- to five-year period. CPDs also provide background information and the UNDP perspective on development in a given country.  

Monitoring (regular reporting, reviews) and evaluation reports

These include evaluation reports on related subjects commissioned by the UNDP Evaluation Office, programme units, government, or other development partners and stakeholders, quarterly progress reports, CPAP annual reports, field visit reports, and other outcome and key programme or project documentation. The ERC can be used to search for relevant evaluations carried out by other UNDP units on similar topics.

Reports on progress of partners’ initiatives

Progress made by partners towards achieving the same outcome and information about how they have strategized their partnership with UNDP may be found in these reports.

Data from official sources

Information on progress towards outcome achievement may be obtained from sources in the government, private sector, academia and national research and regional and international institutes, including those in the UN system. In many cases, nationally adopted DevInfo and the websites of national statistical authorities are good sources for national statistics.

Research papers

Topics related to the outcome being evaluated may have been addressed in research papers from the government, NGOs, international financial institutions and academia.

National, regional and global reports

Useful data can be found in various reports such as the National Human Development Report, national MDG report, and other reports published by national, regional, and sub-regional organizations, international financial institutions and UN organizations.

Financial and management information (Atlas, balanced scorecard, audit, ERBM platform, etc.)

A number of corporate tools provide financial and other management information that is relevant to evaluation. They include delivery, resource mobilization, and human resource management.

Additional sources at the country level

Reports of related regional and sub-regional projects and programmes

These reports indicate the extent to which these projects and programmes have complemented contributions by UNDP and its partners to progress towards the outcome.

Country office CPAP and Results Oriented Annual Report  

The CPAP and Results Oriented Annual Report should, ideally, identify all of the projects, programmes, sub-programmes and soft assistance that contribute to each outcome. Also included is information on key outputs, the strategic partners, partnership strategy, how much progress has been reported in previous years, the quality of outcome indicators, the need for further work and baseline information.

UNDAF assessment reviews and CPAP annual reviews 

These documents include baseline information on the country development situation, partnerships and joint activities of UNDP and other UN organizations.

Selecting the evaluators
The choice of evaluators is important to the quality of evaluations. As discussed in Section 6.1, UNDP and evaluation stakeholders should, to the extent possible, engage independent evaluation institutions within the existing national monitoring and evaluation system, including national non-governmental institutions or evaluators, to carry out the evaluation. A mapping of key players in the national evaluation system and an assessment of their capacity should be done prior to commissioning the work. This way, necessary arrangements, such as working with experienced international evaluators or institutions and incorporating capacity development training as part of the exercise, can be made to address capacity gaps while ensuring that the end product will meet the agreed quality criteria.

UNDP selects evaluators through a competitive and transparent process in accordance with the organization’s rules and regulations for procurement. Areas of expertise to be considered in the team composition include the following:

  • Proven expertise and experience in conducting evaluations
  • Technical knowledge and experience in UNDP thematic areas, with specifics depending on the focus of the evaluation, and cross-cutting issues such as gender, rights-based approach, and capacity development
  • Knowledge of the national situation and context
  • RBM expertise
  • Familiarity with policy-making processes (design, adoption and implementation) if the evaluation is to touch upon policy advice and policy dialogue issues

External evaluation institutions, firms or individual evaluators may be national or international, or a combination of both. Annex 5 provides a comparison of advantages and disadvantages of hiring firms versus individuals as evaluators. It is advisable to have a team comprised of at least two evaluators. This will allow for the team members to compare notes, verify the accuracy of information collected and recorded, divide efforts to interview more people, and bounce ideas off of each other. In addition, evaluation teams should be balanced, to the extent possible, in their gender and geographical composition.

Tip: The Evaluation Office offers a roster of vetted evaluation experts on its intranet site (intra.undp.org/eo).

In addition to the competency of the evaluators and the geographical and gender balance of the team, consideration should be given to safeguarding the independence of the evaluation exercise. Independence comprises impartiality and freedom from conflict of interest. Potential conflict of interest can be addressed at the time of selecting the evaluation team members, and impartiality can be ensured throughout the design, analysis and implementation of the evaluation. Conflict of interest in the selection of evaluators could be defined as a situation whereby, because of a person’s work history or possibilities for future contracts, the consultant will not be able to provide an objective and impartial analysis of the evaluation subject (see Box 34).

Box 34.  Avoiding and mitigating conflict of interest in evaluation (examples)

Case A: Conflict of interest due to past engagement

As a general rule, UNDP commissioning units will not assign consultants to the evaluation of projects, country programmes, sectors and themes, strategies, corporate processes or policies in which they have had prior involvement in design, implementation, decision-making or financing. Following this principle, UNDP staff members—including advisers based in regional centres and Headquarters units—civil servants or employees of non-governmental organizations that are or have been directly or indirectly related to the programme or project should not take part in the evaluation team. If a former staff member is being considered, his or her past involvement with the project(s) to be evaluated should be specially screened.

Case B: Conflict of interest due to potential future involvement

The programme units must ensure that the evaluators will not be rendering any service (related or unrelated to the subject of the evaluation) to the implementation agency of the project or programme to be evaluated in the immediate future. Preferably, there should be a ‘cooling off’ period of at least one year before the evaluator is engaged in the implementation of a programme or project that was the subject of the evaluation. For example, an evaluator of the UNDP electoral support project should refrain from working for the national electoral commission as a technical adviser for at least one year.

Case C: Conflict of interest due to involvement in multiple assignments

If a consultant applies for two related assignments, ask the consultant to rank her or his choice. UNDP programme units should consider whether conducting both assignments could create a conflict of interest and take the necessary action to mitigate it.
Evaluators, for their part, must inform UNDP and stakeholders of any potential or actual conflict of interest. The evaluation report should address any potential or actual conflict of interest and indicate the measures put in place to mitigate its negative consequences. If a conflict of interest is uncovered or arises during the evaluation, the organization should determine whether the evaluator should be dismissed and/or the evaluation terminated.
Drawn from various sources including: UNEG, ‘Norms for Evaluation in the UN System’, 2005, available at: http://www.unevaluation.org/unegnorms; UNEG, ‘Standards for Evaluation in the UN System’, 2005, available at: http://www.unevaluation.org/unegstandards; International Fund for Agricultural Development (IFAD), ‘Conflict of Interest of Consultants and Widening the Pool of Evaluation Specialists’; Asian Development Bank, ‘Operations Evaluation Department (OED) Guidelines to Avoid Conflict of Interest in Independent Evaluations’, April 2005, available at: http://www.adb.org/documents/guidelines/evaluation/independent-evaluation.pdf; and the World Bank, ‘Consulting Service Manual 2006: A Comprehensive Guide to the Selection of Consultants’, Washington DC, 2006, available at: http://siteresources.worldbank.org/INTPROCUREMENT/Resources/2006ConsultantManual.pdf.

It is good practice to share the curriculum vitae of the potential candidates with wider stakeholders and partners before engagement. This will help ensure that there is no potential conflict of interest or objection to the selection. Check references by talking to colleagues and partners who have worked with the candidates before to verify their competency as evaluators.

Step 3: Managing the conduct of evaluation

Briefing and supporting the evaluation team

Safeguarding the independence of an evaluation is often misunderstood to mean that the commissioning unit should not interact with the evaluation team. On the contrary, the success of the evaluation depends on the level of cooperation and support rendered by the commissioning unit to the evaluation team. Key roles of the commissioning unit and the task manager include the following:

  • Brief the evaluators on the purpose and scope of the evaluation and explain expectations from UNDP and its stakeholders in terms of the required standards for the quality of the process and the evaluation products. Provide them with relevant evaluation policy guidelines including the quality standards for evaluation reports, UNDP evaluation policy, and UNEG norms and standards for evaluation in the UN system.44 In particular, evaluators must understand the requirement to follow ethical principles as expressed in the UNEG ethical guidelines for evaluators by signing the Code of Conduct for Evaluators in the UN system.45 
  • Ensure that all information is made available to the evaluators. If they encounter any difficulty in obtaining information that is critical for the conduct of evaluation, provide necessary support to the extent possible.
  • If asked by the evaluators, provide a preliminary list and contact information of stakeholders whom the consultants should meet. However, the evaluation consultants are ultimately responsible for identifying whom to meet and UNDP cannot interfere with their decision.
  • Organize a forum to introduce the evaluation team to the partners and stakeholders to facilitate the initial contact. The evaluation team can also take this opportunity to receive inputs from the stakeholders in the formulation of the evaluation questions, seek clarifications in the ToR, and exchange ideas about the ways in which the evaluation will be carried out.
  • Arrange interviews, meetings and field visits.
  • Provide comments on and quality assure the work plan and the inception report (where one exists), with the elaborated evaluation methodology prepared by the evaluation team.
  • Ensure security of consultants, stakeholders and accompanying UNDP staff, particularly in crisis situations. The evaluation team members should have passed relevant UN security exams and be aware of and compliant with related security protocols.
Tip: There is a delicate balance between providing adequate support for the evaluation and maintaining the independence of the exercise. While UNDP is expected to organize meetings and visits, UNDP or government staff working for the organization responsible for the project or programme should not participate in them, as interviewees and participants might not feel comfortable speaking freely in their presence.

Reviewing the inception report prepared by the evaluation team

Evaluators will commence a desk review and preliminary analysis of the available information. Based on the ToR, initial meetings with the UNDP programme unit or evaluation manager, and the desk review, evaluators should develop an inception report. Its description of what is being evaluated illustrates the evaluators’ understanding of the logic or theory of how the initiative is supposed to work, including strategies, activities, outputs and expected outcomes and their interrelationships. The inception report should include, inter alia:

  • Evaluation purpose and scope—A clear statement of the objectives of the evaluation and the main aspects or elements of the initiative to be examined.
  • Evaluation criteria and questions—The criteria the evaluation will use to assess performance, and the rationale for their selection.
  • Evaluation methodology—A description of data collection methods and data sources to be employed, including the rationale for their selection (how they will inform the evaluation) and their limitations; data collection tools, instruments and protocols and discussion of reliability and validity for the evaluation (see Annex 5); and the sampling plan, including the rationale and limitations.
  • Evaluation matrix—This identifies the key evaluation questions and how they will be answered by the methods selected (see Annex 3).
  • A revised schedule of key milestones, deliverables and responsibilities.
  • Detailed resource requirements tied to evaluation activities and deliverables detailed in the work plan.
Note: Good practice—The commissioning unit and key stakeholders should review and assure the quality of the inception report. The inception report provides an opportunity to clarify matters—such as resource requirements and deliverable schedules—at an early stage of the evaluation exercise and ensure that the commissioning party, stakeholders and the evaluators have a common understanding on how the evaluation will be conducted.

Reviewing the draft evaluation report

Once the first draft of the evaluation report is submitted, the evaluation task manager, with key evaluation stakeholders, should assure the quality of the report and provide comments. UNDP programme units may call on evaluation experts or the advisory panel to assess the technical rigor of the evaluation. The evaluation report should be logically structured; contain evidence-based findings, conclusions, lessons and recommendations; and be presented in a way that makes the information accessible and comprehensible. It should meet the criteria outlined in Box 35.

Box 35.  Criteria for evaluation reports
A quality evaluation report should:
  • Be well-structured and complete
  • Describe what is being evaluated and why
  • Identify the questions of concern to users
  • Explain the steps and the procedures used to answer those questions
  • Present findings supported by credible evidence in response to the questions
  • Acknowledge limitations
  • Draw conclusions about findings based on the evidence
  • Propose concrete and usable recommendations derived from conclusions
  • Be written with the report users, and how they will use the evaluation, in mind
Source: UNEG, ‘Standards for Evaluation in the UN System’, 2005. Available at: http://www.unevaluation.org/unegstandard

The evaluation report quality standards provided in Annex 3 can be used as a basis for assessing the quality of the report. If shortcomings exist and there are questions about the methodological rigor, UNDP programme units should ask the evaluators to improve the report.
Depending upon the complexity of the evaluation findings, the programme unit should consider organizing a stakeholder meeting at which the evaluators make a presentation to the partners and stakeholders. This helps ensure that there is a common understanding of the findings, facilitates feedback on the report draft, and fosters ownership and future use of the evaluation. When soliciting comments from stakeholders, care must be taken to safeguard the independence of judgements made in the evaluation. Evaluation is an independent exercise. Comments should be limited to issues regarding the applied methodology (see Chapter 7 on evaluation design for more guidance) and factual errors and omissions.

At this point, the programme unit should also start discussing with key stakeholders the preparation of the management response, for example, who will be involved in the preparation; when, how and to what degree; and what issues should be highlighted.

Step 4: Using the evaluation—management response, knowledge sharing and dissemination

Preparing the management response for decentralized evaluations

As one way of ensuring an effective use of evaluation, UNDP has institutionalized a management response system (a template is provided in Annex 6). Programme units are responsible for preparing a management response to key issues and recommendations raised in evaluations, and identifying key follow-up actions, responsible units for implementation, and estimated completion dates for these actions.
To foster learning and sharing of knowledge, the process of developing a management response should engage all key evaluation stakeholders to reflect on the key issues, findings and recommendations. In this process, follow-up actions and their associated responsible institutions and time-frames are collectively identified and agreed upon. In preparing the response, UNDP, partners and other stakeholders should not only look at internal management issues such as the delivery and timing of inputs and outputs but also respond to issues raised with regard to UNDP contributions towards development results and focus on strategic issues.

Note: Good practice—Once the management response is finalized and endorsed by stakeholders, it is posted for public viewing in ERC for transparency and accountability reasons. The programme units are responsible for regularly updating the implementation status. Units exercising the oversight responsibility (for example, regional bureaux for country office evaluations) monitor the implementation of follow-up actions in ERC.

The preparation of a management response should not be seen as a one-time activity. Learning emanating from the management response process should be documented and reflected upon when designing a new project or programme or defining an outcome. There is often little incentive to prepare a management response to a terminal evaluation when the project is operationally closed. However, the process of developing a management response to terminal project evaluations allows key stakeholders to reflect on project results and generate lessons that are applicable beyond the particular project. It also supports UNDP accountability, by ensuring that UNDP is responsive to evaluation findings and responsible for follow-up actions. For these reasons, the evaluation policy requires management responses to all evaluations, regardless of the status of the initiative evaluated.

Knowledge sharing and dissemination

The evaluation process does not end when the evaluation report is complete. In fact, learning and the active use of knowledge generated from the evaluation are the most important elements of the evaluation exercise. Time and resources required for effective follow-up and learning should be allocated at the outset of programme and project design.

Reviewing evaluations prior to the new planning process

Lessons learnt and knowledge generated from evaluations should be reviewed together with national stakeholders and partners to ensure they are incorporated in the design of new programmes and projects. This systematic application of knowledge from evaluations is a key element of MfDR. For more guidance on knowledge sharing and learning from evaluation, see Chapter 8.