8.3 Knowledge products and dissemination
Knowledge products can take many different forms depending on the audience and their information needs. For meaningful learning and knowledge sharing, knowledge products should be of high quality with a clearly identified audience and purpose. The characteristics of a good knowledge product, including a good publication, are listed in Box 43.
Box 43. Characteristics of a good knowledge product
Source: UNDP, ‘Ensuring Quality Control and Policy Coherence: BDP Quality Assurance and Clearance Process’, Bureau for Development Policy, May 2007. Available at: http://intra.undp.org/bdp/clearance_process.htm.
Keeping these characteristics in mind before starting the analysis or preparing a knowledge product will help organize the evidence in an orderly fashion.
Dissemination is as important as the development of knowledge products. Only an efficient system of dissemination will ensure that the target recipients receive the monitoring and evaluation feedback that is relevant to their specific needs. Some of the most commonly applied dissemination methods for monitoring and evaluation products include: printed reports; HTML or PDF copies of the products shared on internal and external websites and through e-mail messages and list-serves; and CD-ROMs. The media can be a powerful partner in disseminating findings, recommendations and lessons from evaluation. In many countries, the media have played a critical role in advocating for accountability and addressing sensitive issues.
The following are practical steps for developing knowledge products from monitoring and evaluation and disseminating them.
Step 1: Identify target audiences and their information needs
Some of the commonly identified key target audiences for evaluation reports and knowledge products are the following:
Those responsible for knowledge sharing and dissemination should assess the information needs of the various groups, including when the information is most needed and most likely to serve as an ‘agent of change’. For example, government counterparts may find certain information from an evaluation particularly useful in making critical policy decisions. When planning for a monitoring and evaluation exercise, the commissioning unit should be aware of when the ‘window of opportunity’ for decision making arises and make the information available in a manner that is appropriate for the technical and functional needs of the target audience.
Step 2: Collect stakeholder contact information
The success of every dissemination effort depends heavily on the recipient contact information gathered during the monitoring and evaluation processes. For example, the evaluation team members meet with key stakeholders and national counterparts who, regardless of their degree of involvement in the evaluation topic, constitute a critical audience and should be informed about the knowledge generated from the evaluation. The contact information of these individuals should be gathered by the evaluation team and shared with those responsible for disseminating and sharing the knowledge.
Step 3: Determine types of products that meet the audience’s information needs
In addition to publishing information from regular monitoring reports and evaluation reports, a mix of knowledge products can be developed to meet the information demand of different groups. A systematic assessment of the needs and demand for specific products among targeted audiences can be undertaken to ensure the relevance and value of the products. The following are some examples of communication means and products for evaluation:
It is the responsibility of UNDP to ensure that relevant and high-quality knowledge products are produced in a timely manner. In order to safeguard the integrity and accuracy of the evaluation information, the commissioning units may consider including the task of producing these knowledge products in the ToRs of the evaluation team.
Step 4: Identify language requirements per product and audience
In order to optimize the impact of knowledge sharing and dissemination efforts, knowledge products should be translated into local languages whenever possible. If resources are limited, the commissioning unit may determine language requirements per knowledge product or per audience group. At a minimum, the evaluation brief should be translated into the most widely used local language. Additionally, the language used in the product should be appropriate for the technical levels of targeted audience. It is best to avoid technical jargon and be wary of heavy acronym usage.
Step 5: Determine efficient forms and dissemination methods per evaluation knowledge product
Most evaluation reports and knowledge products can be shared as electronic copies. In order to enhance efficiency in terms of time and cost, the organization’s public webpage and e-mail lists should be used strategically as means of dissemination (see Box 44). For example, evaluation reports should be uploaded to the organization’s internal and external webpages with a blurb that summarizes the key information in the report.
Box 44. Tools and networks to support evaluation knowledge sharing
Evaluation Resource Centre: The ERC, available at erc.undp.org, is a repository of evaluation reports and serves as the organization’s primary tool for knowledge management in evaluation. To date, it contains more than 1,000 evaluation reports and 400 evaluation ToRs. Reports can be searched by region, country, evaluation type, year and other key words. It also provides a list of evaluation focal points across UNDP to foster information exchange and learning on evaluation.
Knowledge products by policy and practice bureaux in Headquarters (BDP, BCPR and Partnership Bureau): Policy and practice bureaux in UNDP Headquarters produce a number of knowledge-based products in UNDP core results areas and their respective focus areas. Lessons from evaluations provide useful inputs to their ongoing work on knowledge consolidation and sharing.
Knowledge networks and communities of practice: In UNDP, there are networks and communities of practice that are linked to the UNDP worldwide system of sub-regional resource facilities and regional centres. Evaluation managers or UNDP communications officers can share evaluation reports or other related knowledge products with colleagues throughout the organization by submitting them to a practice-area knowledge network, such as the Governance Network (dgp-net) or the Poverty Network (pr-net).
The Evaluation Network or ‘EvalNet’: This functions more directly than the corporate knowledge management system to support the design and development of information and knowledge products from monitoring and evaluation activities, and it remains largely driven by stakeholder participation. EvalNet is a group of UNDP staff, mainly from country offices, who participate in UNDP evaluations, develop RBM tools and methodologies, and organize evaluation capacity development activities. The objectives of the network are to enhance UNDP as a learning organization and to promote results-oriented monitoring and evaluation as part of the UNDP organizational culture.
Additionally, knowledge from monitoring and evaluation can be shared widely by incorporating it into existing reports and publications, such as the country office’s annual report or other key reports, brochures and news bulletins.
Step 6: Monitor feedback and measure results of dissemination efforts
There should be a feedback and learning mechanism to assess the effectiveness of the dissemination strategy and the quality of each knowledge product. For example, UNDP may conduct a quick survey among the recipients of the knowledge products or develop a feature on its website where users can provide their feedback directly online.
In analysing the feedback, the following should be asked: To what extent has the monitoring and evaluation information been used in programming and policy making within and beyond UNDP? Has such information been made available in a timely manner to effectively influence decision-making processes? Have the products reached both direct and indirect audiences in an efficient manner, and were they easily accessible? Did the audience find the knowledge products useful? If not, why not? What could be done better next time?
Lessons from this experience should be reflected in future evaluation knowledge-sharing and dissemination efforts so that evaluations in UNDP continue to be relevant and contribute to organizational learning and the enhancement of a global knowledge base in development.