Indicator Search

Select Logic Model Components and Categories to filter content. To select multiple filters per box, hold the 'control' key while selecting. To export indicators, select the item(s) using the check boxes and then click the export button.

Number Returned: 77
Fields per indicator: ID • Indicator • Category • Sub Category • Logic Model Component • Short Definition • Long Definition • Data Requirements • Data Sources • Disaggregation • Frequency of Data Collection • Purposes • Issues and Challenges • Related Indicators • Sample Topics and Questions for Data Collection Instruments • Resources • Pages in the Guide • Published Year • Adaptation of Indicator for Specific Purpose (Illustrative Examples) • Data Type(s) • Intended Use • Other Relevant Information • Last Updated Date • Indicator Snapshots
1 Organizational knowledge audit conducted in the last five years Tracks whether an organizational knowledge audit has been conducted to determine an organization's knowledge assets, gaps, and challenges This indicator refers to an audit conducted within an organization to determine organizational knowledge assets, gaps, and challenges, and to develop recommendations for addressing them through training, enhanced communication, or other improvements (Asian Development Bank, 2008). Self-report of KM audit within the last five years; evidence of knowledge assessment; KM audit score; documentation of knowledge assets, gaps, challenges, and recommendations Administrative/programmatic records, such as a knowledge assessment report Periodically (e.g., every 3 to 5 years) A KM audit allows an organization to assess its existing tacit and explicit knowledge so that it may better tailor and design KM initiatives to meet the needs of the organization and its intended users. The defining feature of a knowledge audit is that it places people at the center of concerns: it aims to find out what people know and what they do with the knowledge they have. It can be described as an investigation of the knowledge needs of an organization and the interconnectivity among leadership, organization, technology, and learning in meeting these needs (Asian Development Bank, 2008). It may be difficult to know where to begin implementing KM activities. An internal KM audit can help identify the key knowledge needs, sources, strengths, opportunities, and challenges within the organization. The results should enable the staff to create a "knowledge inventory," which is a directory of the locations of knowledge products and services available to the staff—including details about their purpose, accessibility, and intended audiences—as well as information about which working units or groups of people have specific knowledge that might be useful to others. The inventory will also list knowledge gaps and help staff members to clearly understand their own roles and expectations, and those of the organization, and to determine what improvements should be made to the KM system (Asian Development Bank, 2008). Staff members can then work as a team to strengthen KM capacity and help to shape an organizational environment that supports KM. A knowledge audit can be performed by organization staff (i.e., a self-assessment) or by a third party.
In either case, information obtained by a KM audit will provide insight and evidence about a number of topics, including: • The organization's definition of knowledge management • Tacit and explicit knowledge assets of the organization and where they are located • Where the organization places KM activities in the organizational structure • Whether (and how) staff members bring external knowledge back to the organization and use it internally • Whether staff members think that technology is used appropriately to record, organize, and exchange knowledge • How much support for KM—financially and in word/deed—exists among senior management • How knowledge is created, identified, organized, and/or used • How knowledge flows within the organization • What barriers obstruct the flow of knowledge • Where there might be missed opportunities for better knowledge capture, organization, sharing, and use • What difficulties or challenges project staff face with regard to knowledge creation, access, and use, and, conversely, what support the organization provides • What knowledge gaps exist within the organization • How (and how well) the organization's knowledge (including that of staff members) is transferred to audiences • Where Are You Now? A Knowledge Management Program Self-Assessment (APQC 2011), http://www.k4health.org/toolkits/km/where-are-you-now-knowledge-management-program-self-assessment • Learning to Fly (Collison and Parcell 2004) • KM Capacity Assessment Tool (Appendix 2 on p. 79 of the GHKC KM M&E Guide PDF version) 18-19 2013 Binary (yes/no), qualitative Wednesday, September 6, 2017
2 Number of instances where health knowledge needs assessments among intended users are conducted Captures the instances where assessments are conducted to identify gaps between current and desired conditions This indicator refers to a needs assessment, which is a systematic process for identifying gaps between current and desired conditions and determining how to close them. It involves taking inventory of needs, prioritizing them, and developing solutions to address them (Altschuld & Kumar, 2009; Gupta, 2007). In the context of KM for global health, there are two main levels of users: a) in-country partner organizations and b) their clients, health-care consumers. Thus, conducting knowledge needs assessments among in-country partner organizations helps the in-country organization become aware of its knowledge assets/needs and helps the partner organization see where support to KM would be most beneficial for the partner and the clients it serves. Self-report of number and type of needs assessments conducted Administrative/programmatic records A number of methodologies can help technical assistance projects understand the KM needs of their in-country partners/clients. These include environmental scans, literature reviews, key informant interviews, focus group discussions, surveys, and network mapping. Network mapping, or Net-Map, is a social mapping tool in which respondents work with interviewers to address a key question and create a network map of actors related to the question or topic of inquiry. Annually A health knowledge needs assessment among intended users is an important first step in planning KM activities and/or technical assistance. It helps organizations and projects determine their knowledge resources, knowledge flow, and knowledge needs and captures the current capacity of KM systems in a certain country, region, community, or topic area, such as among HIV/AIDS policy-makers. An assessment of current capacity informs the design of activities to strengthen and improve the systems of the in-country partner (K4Health, 2011). Once organizational or partner needs and problems are clearly defined, resources can then be dedicated to closing knowledge gaps and developing practical solutions. The information generated by a knowledge needs assessment is context-specific. Therefore, a new needs assessment should be conducted in each new setting—country, region, etc.—and with each group of intended users, such as program managers and policy makers. Furthermore, when conducting an assessment of KM in the health-care system, it is important to examine its various administrative levels—national, regional, district, and community, for example—to understand the differing needs at each level, current information flows within and between levels, and barriers to and opportunities for knowledge exchange between levels of the health system. Project staff can collect data about knowledge gaps, health information networks, preferred methods of communication, existing tools and technology, flow of information, barriers to knowledge exchange, and current infrastructure (K4Health, 2011). Considering the quickly changing nature of technology and potentially limited access to it in low- and middle-income countries, knowledge needs should be continuously monitored to ensure that KM programs are taking advantage of new and improved technology, as appropriate.
For detailed guidance on each of the methodologies mentioned above (under Issues and Challenges), see the K4Health Guide to Conducting Health Information Needs Assessments: http://www.k4health.org/resources/k4health-guide-conducting-needs-assessments. Further instructions on Net-Map can be found at http://netmap.wordpress.com/. 20-21 2013 Count, qualitative Wednesday, September 6, 2017
3 Number and type of user feedback mechanism(s) on knowledge needs used Captures the mechanisms used to collect feedback on knowledge needs and preferences from users of KM outputs This indicator captures the number and types of mechanisms used to collect feedback from users of KM outputs. These mechanisms might include surveys, questionnaires, interviews, rating forms, opinion polls, focus group discussions, and usability assessment/testing. In this context, the feedback process involves the application of user comments and opinions about the usefulness and usability of KM outputs to improve outputs and/or to develop new activities. Self-report of number of user feedback mechanisms used, by type Administrative records Semiannually This indicator measures the various ways in which feedback is collected from intended users. Using multiple methods to collect this feedback ultimately leads to higher quality data. Using a wide range of methods to collect data can help address different user preferences for providing feedback. For example, a user may not want to respond to an online survey. The survey could offer alternatives, such as emailing feedback from the website or printing a feedback form and mailing it in, to reach users who might otherwise not respond. Since these data are disaggregated by type, this indicator can also help an organization identify which vehicles are most useful for collecting user information and adjust its approaches accordingly. 21 2013 Count, qualitative Wednesday, September 6, 2017
4 Whether user knowledge needs/feedback was used to inform design and implementation of products and services Captures the application of data on user needs/feedback to develop, improve, or implement KM products and services This indicator refers to the application of data on current or intended user needs or feedback to develop, improve, or implement KM products and services. This indicator can apply to both new and existing products and services. Its purpose is to assess whether evidence on user needs and preferences is influencing the direction of activities. Self-report of types of updates and changes made to KM products and services as a result of information from current or intended users about their knowledge needs or views of these products and services Feedback forms or surveys among current or intended users Semiannually A continual feedback loop is intended to increase access to and use of knowledge outputs by making them more responsive to the needs of the intended users. For example, a website may contain a feedback form for users to comment on the navigation, design elements, number of clicks to reach a resource, usefulness of content, or the way in which knowledge is synthesized. This information can then be used to inform the design and function of the site. For example, users may comment that certain important resources in a website are hidden and require too many clicks to find. The website manager can consider highlighting these resources on the home page and/or create an easier navigation path. Feedback can be provided about an entire program or its parts, such as the delivery of eLearning, the ability to access online resources in remote locations, or the relevance of materials. This indicator reflects whether the needs and wishes expressed by stakeholders are guiding a program's KM activities. User demand should drive KM and knowledge exchange activities (World Bank, 2011). However, individuals do not always know what gaps they have in their knowledge. In other words, they do not always know what they do not know. To circumvent this problem, it can be helpful to start with questions about implementation challenges. Answers to questions such as "What would you like to do that you are unable to do?" or "What would you like this product to do that it does not do?" will provide insight into knowledge gaps and challenges that users face. By then asking users what knowledge would help them solve the problems they have identified, organizations can see what gaps exist and work to develop knowledge exchange solutions to address users' specific needs. 21-22 2013 Binary (y/n), qualitative Wednesday, September 6, 2017
5 Number of key actionable findings, experiences, and lessons learned captured, evaluated, synthesized, and packaged Captures the instances where findings and lessons learned are converted into usable formats to meet user needs This indicator captures the number of instances where findings and lessons learned are converted into usable formats to meet user needs. It is the documentation of knowledge that can be applied to improve practice. This is usually an internal indicator, although it might occasionally apply to assessing the progress of a KM activity with a partner in the field. This indicator was a USAID Population and Reproductive Health sub-result. In the context of global health, findings are made "actionable" when they are interpreted and packaged in a way that helps users understand and appreciate their implications for program activities. "Experiences" are defined as "active participation in events or activities, leading to the accumulation of knowledge or skills" (Houghton Mifflin Company 2000). "Lessons learned" are "generalizations based on evaluation experiences with projects, programs, or policies that abstract from the specific circumstances to broader situations." Self-report of the number of findings, experiences, and lessons learned Administrative records Semiannually Understanding and responding to field needs is central to the practice of KM for global health. In order to do this, though, it is necessary to first document results, experiences, and lessons learned. Knowledge in the field can manifest itself in a variety of forms (see the list of KM outputs under indicator 6). To determine the most appropriate form for documentation, the type of knowledge (tacit/explicit) must be considered as well as the purpose of the knowledge transfer (socialization, externalization, combination, and/or internalization) (Nonaka & Takeuchi, 1995). Generally, the best forms are those that make knowledge readily accessible and applicable to intended users so that it can be disseminated and validated in new contexts (USAID, 2012). For example, high-impact practices (HIPs) in family planning are practices identified by technical experts as promising or best practices that, when scaled up and institutionalized, will maximize the return on investments in a comprehensive family planning strategy. This information has been packaged as a series of briefs that can be easily distributed to and understood by service providers, program managers, policy makers, and other implementers who can put this knowledge into practice. This is an example of evaluating and packaging findings to inform decision making and improve global health practice. (For more on HIPs, please see http://www.fphighimpactpractices.org/.) 23-24 2013 Count, qualitative Wednesday, September 6, 2017
6 Number of new KM outputs created and available, by type Captures the instances where new KM outputs, including products, services, and other knowledge resources, are created and made available to users This indicator captures the number of instances where new KM outputs, including products, services, and other knowledge resources, are created and made available to intended users. In KM, the term "output" refers to a tool for sharing knowledge within the organization and/or with the clients. Outputs can take many forms, including products and services, publications and other knowledge resources, training and other knowledge-sharing events, processes and procedures, and techniques. Self-report of number of new outputs, by type Administrative records Semiannually In any field—and global health is no exception—the creation of new knowledge is imperative. The process of knowledge creation promotes communication across the field and leads to the implementation of innovative activities. In highlighting the number of new outputs, this indicator reflects the generation and synthesis of knowledge. This guide identifies a wide range of outputs and categorizes them into four main areas: • Products and services – such as websites, mobile applications, applied technologies, and resource centers • Publications and resources – such as policy briefs, journal articles, and project reports • Training and events – such as workshops, seminars, and mentoring sessions • Approaches and techniques – such as reviews, reporting, and communities of practice To track KM outputs, it is important to cover and address all of these areas in a holistic manner. Illustrative examples of more specific indicators are as follows: • Number of new mobile applications developed • Number of new research briefs written • Number of new eLearning courses completed • Number of new knowledge sharing techniques developed 24 2013 Count, qualitative Wednesday, September 6, 2017
7 Number of KM outputs updated or modified, by type Captures the instances where changes/improvements are made to existing KM outputs, including products, services, and other knowledge resources This indicator captures the number of instances where changes are made to existing KM outputs, such as products, services, and other knowledge resources, to meet changing needs of users. This indicator is a complement to indicator 6 (number of new KM outputs created and available, by type). Self-report of updated or modified resources (either number of updates or, for continuously updated materials, descriptive information), by type Administrative records Semiannually In addition to measuring new outputs, it is also important to ensure that existing outputs are kept up-to-date to include the latest research findings and lessons learned from the global health field. Both written publications and online resources can be updated. Some online resources, such as publication databases, are continuously updated. In the case of websites, including the date the content was last revised can show users how current the information is. There are also organizations that evaluate health information, such as The Health on the Net (HON) Foundation, which applies the HONcode (The Health on the Net Foundation Code of Conduct) to medical and health websites. (For more information, see the HON website: http://www.hon.ch/). In addition to adding research findings and lessons learned, one might need to respond to changing content needs in the field, such as a new disease outbreak in a region or the introduction of a new information technology or a new use of an existing one, such as using SMS to return HIV test results to clinics. Knowledge generation is a continuous process, and KM resources/outputs should be designed as living tools that can be modified and supplemented as needed. These updates and modifications keep KM outputs valuable to users and help ensure that they continue to have an impact on programs. 24-25 2013 Count, qualitative Wednesday, September 6, 2017
8 Number of KM coordinating/collaborating activities, by type Captures the coordinating/collaborating activities used to share knowledge, both within and among organizations This indicator captures the activities of coordinating/collaborative group structures used to share knowledge, both within and among organizations. This indicator counts a variety of knowledge sharing activities and can cover both virtual communication, such as online communities of practice (COPs), and face-to-face communication. Self-report of number of activities, by type Administrative records Semiannually The purpose of this indicator is to capture the number of activities conducted that allow colleagues—either within organizations or from different organizations working on similar topics—to connect, share experiences and lessons learned, develop common guidelines, or exchange ideas and research findings. A possible benefit of such activities is the opportunity to come to consensus on issues, chart the course of a particular effort, and provide a forum for prioritizing activities. The number of activities can sometimes be difficult to define. For example, an online forum might be one activity or a series of activities. However the organization or COP chooses to define these events, it is important to consistently count across the organization and across different activities. Professional contacts, such as those measured by this indicator, can help transfer tacit knowledge, which otherwise can be difficult to record and share with others. The sharing of tacit knowledge, which is based on direct experience, usually occurs person-to-person and, therefore, depends greatly on the interaction of individual human beings (Alajmi, 2008). Through storytelling and similar methods, professional groups and COPs are often the forums for sharing tacit knowledge within and across organizations (Schubach, 2008). The social nature and shared context of some of these groups promotes common understanding and encourages active engagement—that is, people’s openness and willingness to share their own experiences and to respond to those of others—and continual learning (Athanassiou, Maznevski, & Athanassiou, 2007; Schubach, 2008). 26 2013 Count, qualitative Wednesday, September 6, 2017
9 Number of training sessions, workshops, or conferences conducted, by type Captures the number of activities conducted for the purpose of sharing knowledge and/or improving KM skills This indicator refers to the number of activities, led by an organization, among either internal or external users, for the purpose of sharing knowledge and/or improving KM skills. "Training" is defined as knowledge transfer conducted in order for individuals to gain KM competence or improve skills (Nadler, 1984). Self-report of the number of training sessions, workshops, and conferences conducted, by type. Qualitative information should also be reported wherever possible. Administrative records Semiannually This indicator focuses on what type and how many KM training sessions, workshops, and conferences, for example, are conducted to share knowledge or improve KM skills. These activities can be conducted either online or face-to-face and with either internal or external users who are KM practitioners or who make decisions about an organization's KM activities. Such events seek to share information, tools, and resources that can improve the KM skills of individuals and/or organizations. These sessions can help strengthen KM capacity within and among organizations. These events can be useful for sharing KM approaches widely, even if only certain project staff members participate directly. Participants can then hold internal trainings or debriefings to share the information they obtained with the rest of their organization. Internal training or debriefing can help ensure that knowledge, tools, and skills are spread across project staff and not concentrated in the hands of a few. It is also important to evaluate the quality of these activities, including how much was learned and the ways in which processes or behaviors changed as a result of participation in these events. 27 2013 Count, qualitative Wednesday, September 6, 2017
10 Number/percentage of KM outputs guided by relevant theory Captures the instances where theory—whether a KM theory or another relevant theory—is used to guide the development of KM outputs This indicator captures the number of instances in which theory—whether KM theory or another relevant theory—is used to guide the development of KM outputs. Theory is "a set of interrelated concepts, definitions, and propositions that present a systematic view of events or situations by specifying relations among variables, in order to explain and predict the events or situations" (Glanz et al., 2008). Self-report of number of KM outputs guided by theories; name/type of theory used. Programmatic records, including planning/design records Annually Theories and models are essential for planning a number of public health activities, including KM. Since they are conceptual tools that have been developed, improved, and refined, they can help guide the systematic planning and implementation of programs, strategies, and activities (Ohio University/C-Change, 2011). Theory can provide a structure on which to build a KM project or activity, particularly if you choose a theory based on the outcomes you hope to achieve. The application of relevant theory can help organizations plan more effective activities, which ultimately help meet overall health or development goals (Salem et al., 2008). Theories can either be more general and broadly applied across a number of activities, or they can be specific to a content or topic area (Van Ryn & Heaney, 1992). A number of theories can guide KM work. Choosing an appropriate theory to guide a KM initiative may be crucial to its success. Since the fields of KM and communication share some goals, and often share project staff, some theories used in KM work stem from the field of behavior change communication. For example, project staff may choose to tailor KM outputs based on the Stages of Change theory (Prochaska & DiClemente, 1984), which helps identify the user's cognitive stage. The five phases of the theory are pre-contemplation, contemplation, preparation, action, and maintenance. Another theory useful to KM is the Diffusion of Innovation Theory. This theory proposes that people adopt a new idea/innovation via a five-stage process—knowledge, persuasion, decision, implementation, and confirmation (Rogers, 2003). Understanding at what point an intended user group is along this progression can help KM practitioners design strategies for knowledge sharing, learning, and promotion of new ideas and knowledge. 28-29 2013 Count, proportion, qualitative Wednesday, September 6, 2017
11 Number/percentage of KM trainings achieving training objectives Measures the extent to which KM trainings among staff, and in some instances COP members or partners, achieve training objectives This is an internal indicator that measures whether KM trainings among staff—and in some instances COP members or partners—achieve training objectives. Those who design or conduct the training set the training objectives in terms of improved skills, competence, and/or performance of the trainees. Responses to training evaluations, specifically answers to questions about whether or not the training met its objectives; observer comments; trainee test scores, if available. Training records, training evaluation forms, notes of independent course observer, trainee test results Semiannually This indicator records whether the training has provided the KM skills and knowledge outlined in the course objectives. Ideally, these objectives would be designed to address gaps identified by the KM knowledge audit. In other words, this indicator can provide one way of gauging the degree to which an organization has acted on its knowledge audit (see indicator 1). For example, the KM audit may have found that many staff members do not use the organization's information and knowledge resources. Training staff about internal KM tools, technologies, and strategies may help solve this problem. In this case, this indicator would measure whether the training led the staff members to increase their use of information/knowledge resources. Courtesy bias can often affect training participants' responses to evaluation questions. Assuring participants that evaluation responses will be kept confidential or made anonymous, by leaving names off evaluation forms, may encourage participants to respond more frankly. In addition, since evaluation forms are not always the best way of evaluating a training (due to a number of factors including courtesy bias, low response rates, and the difficulty of self-reporting on the effects of training that was only just received), other methods may be used to gauge learning and improvements in performance. For example, after training people to use an information technology, trainers could observe the trainees conducting a search on their own or use an online knowledge resource to track usage patterns. This observation could be conducted several weeks after training, if possible, as a measure of whether new knowledge was retained. 29-30 2013 Count, proportion, qualitative Wednesday, September 6, 2017
12 Number of instances of staff reporting their KM capacities improved, by type Captures the number of instances where project staff members report an improvement in their KM knowledge, skills, or abilities This indicator captures the number of instances in which project staff members report an improvement in their KM knowledge, skills, or abilities. At the organizational level, trends in the results of KM audits can be studied. Number of instances of staff reporting KM capacities improved, type of improvement. Qualitative information should also be reported wherever possible. KM audits; performance reviews; pre/post tests; training evaluations; observations by other staff, that is, asking staff members if they think their colleagues' KM capacities have improved and asking for examples; notes from after-action reviews; interviews with staff members. Semiannually Building on the results of the KM audit, this indicator (along with indicator 11) gauges the effects of efforts to strengthen internal KM capacity. Indicators 11 and 12 are direct follow-up indicators to indicator 1 (organizational knowledge audit conducted in the last five years); this indicator gives staff members the opportunity to assess the growth of their own KM capacities. Once a KM audit has been performed and the organization understands its KM gaps and challenges, leaders can ensure that management puts financial resources and high-level support into improving KM systems overall; that management leads by example, investing their time in doing KM well; and that appropriate KM training is offered when needed. After the changes have taken place and staff members continue KM activities, they can report whether they feel their knowledge, skills, and performance have improved. The accuracy of this indicator depends on trust and clear and open lines of communication between management and the rest of the staff, to ensure that self-reports are honest. These conversations could even be made part of annual performance reviews between supervisors and staff. There are other ways of gauging improvements that may be less subject to bias, for example, changes in how often an internal knowledge sharing system is used or the formation of new internal COPs that meet regularly. 30 2013 Count, qualitative Wednesday, September 6, 2017
13 Number of KM approaches/methods/tools used, by type Captures the number of KM approaches, methods, and tools used that can facilitate knowledge sharing and use This indicator refers to the number of KM approaches, methods, and tools used that can facilitate and support learning, knowledge exchange, decision making, and action within an organization. Self-report of number of KM approaches/methods/tools used, by type Survey of staff, in-depth interviews with staff members, notes from after-action reviews, administrative records Semiannually In KM initiatives, it is important to use proven techniques to promote learning, facilitate knowledge transfer, and encourage collaboration. These processes sometimes require facilitation and/or specific tools. The choice of such tools will depend on the goals; intended users; available technology; facilitator availability/skills, if relevant; and the timeline of the KM project or activity. For example, if KM practitioners use an organizational approach to implementing KM, they may focus on how an organization can be structured or designed in order to maximize knowledge creation and exchange. KM practitioners may use research methods to capture data on a specific project or purpose. Some KM tools may be related to information technology, such as intranets or content management systems, while others may be less technology-based, such as collaborative tools like knowledge cafés or Open Space, which provide informal, creative spaces for groups of colleagues to interact and share ideas. (For more on Open Space, see http://www.openspaceworld.org). There is a wide range of KM techniques and tools that organizations and projects can use, including after-action reviews, world cafés, Open Space sessions, graphic facilitation, podcasts, twinning, role plays, simulation, storytelling, peer assists, mentoring, knowledge fairs, "fail fairs," blogging, and online discussions (Lambe & Tan, 2008; World Bank, 2011; Ramalingam, 2006). Some KM methods—such as after-action reviews and mentoring—can be institutionalized and made part of the organizational culture. 31-32 2013 Count, qualitative Wednesday, September 6, 2017
14 Number of individuals served by a KM output, by type Captures the number of people that a KM output directly influences, disaggregated by type of output This indicator captures the number of people that a KM output directly influences. The type of KM output should be specified so data can be aggregated or disaggregated as needed. For instance, the number can represent people attending a meeting, seminar, or conference as well as those joining in a virtual learning or networking activity. This number could also represent the subscribers or recipients of a product, service, or publication. The indicator can be used to measure various kinds of outputs. Quantitative data from evidence that intended users—such as recipients, subscribers, participants, or learners—have received, registered, or participated in a particular KM output, whether in person or virtually. Mail (postal or email), contact, or subscriber lists; registration or attendance records; and other administrative records and databases Quarterly These data chart the initial reach of the KM output and identify which users were addressed. This is one of the most basic indicators for measuring reach. It is a simple way to gauge initial communication of and access to the KM output. The various ways data can be stratified can help profile the user. Supplementary information collected could include demographic and job-related characteristics of these individuals, such as the country/region where they work and their organizational affiliation, job function or type, gender, and level of education. Additional information may also include the type of dissemination or promotion and the communication channels used, such as print, in-person, electronic (either online or offline), or other media. How many times have you accessed the [Web product] in the past 3 months? (Select one.) o 0 times o 1-5 times o 6-10 times o 11-15 times o 16-20 times o 20+ times o Never heard of it 34-35 2013 • Number of registered learners in an eLearning service • Number of recipients who received a copy of a handbook via initial distribution • Number of registered participants in a training seminar • Number of fans and followers on social media accounts Count Wednesday, September 6, 2017 The Global Newborn Health Conference—held April 14-18, 2013, in Johannesburg, South Africa, and sponsored by the Maternal and Child Health Integrated Program—counted among its participants 70 senior government health officials from 50 countries. Since January 2012, MEASURE Evaluation has hosted 29 webinars that covered seven topics related to the monitoring and evaluation of population, health, and nutrition programs. The webinars attracted 1,228 participants.
15 Number of copies or instances of a KM output distributed to existing lists, by type of output Captures the number and type of a KM output that has been distributed This indicator captures the number and type (such as document copies or email announcements) of a KM output that has been distributed. Use of existing lists indicates that this is an initial and direct distribution or dissemination from the original developer of the output, such as an organization or project. Distribution of the output can be by mail, in person, online, or via any other medium. Electronic distribution of copies includes various file formats, such as PDF, TXT, PPT, or HTML. Quantitative data on the number of hard/paper or electronic copies distributed by language, types/formats of the product, how/where the copies were distributed, and dates distributed. Administrative records Creating a database designed specifically to track distribution/dissemination numbers is helpful (see the sketch below). Quarterly This is a direct and simple measurement of the quantity of an output (such as an email announcement or handbook) distributed. This indicator contributes to measuring reach. Due to the rapid advancement and growing availability of information and communication technologies in recent years, many organizations and projects have been shifting the scope and focus of their dissemination efforts from printing and distributing paper copies to using electronic channels. Electronic copies can be distributed to intended or potential users by email as attachments or as web links. While electronic distribution can potentially reach more people for a lower cost, poor internet access and low storage capacity on a mobile device or computer may limit the reach of distribution efforts. Measuring the types of outputs and channels used can help determine the efficiency and effectiveness of current distribution channels. 35-36 2013 • Number of copies of an implementation guide distributed • Number of notifications emailed announcing a new issue of an online journal Count Wednesday, September 6, 2017 During the four-day Global Newborn Health Conference, the Twitter hashtag #Newborn2013 had an estimated reach of 2,979,300. It generated approximately 10,423,631 impressions and was tweeted over 1,686 times by more than 650 contributors. Since 2007, 500,000 copies of Family Planning: A Global Handbook for Providers have been distributed. The handbook is available in nine languages: English, French, Portuguese, Russian, Spanish, Swahili, Romanian, Hindi, and Farsi. Recently the handbook was made into an online resource, with digital downloads available for mobile devices. As a result, the number of requests for paper copies has steadily decreased. Having the data and the timeline of dissemination helped show that the decrease was due to interest in the new distribution channel, not a lack of interest in the handbook.
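The database suggested above for tracking distribution numbers can be built with very little infrastructure. Below is a minimal sketch using SQLite in Python; the table layout, column names, and example values are illustrative assumptions, not prescribed by the guide.

```python
# A minimal sketch of a distribution-tracking database (SQLite).
# Schema and sample data are illustrative assumptions.
import sqlite3

conn = sqlite3.connect("distribution.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS distribution (
        id          INTEGER PRIMARY KEY,
        output      TEXT NOT NULL,     -- e.g., a handbook or email announcement
        format      TEXT NOT NULL,     -- e.g., 'print', 'PDF', 'ePub'
        language    TEXT,
        channel     TEXT,              -- e.g., 'mail', 'email', 'in person'
        copies      INTEGER NOT NULL,
        distributed DATE NOT NULL      -- stored as 'YYYY-MM-DD'
    )
""")
conn.execute(
    "INSERT INTO distribution (output, format, language, channel, copies, distributed) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("Example Handbook", "print", "English", "mail", 500, "2013-04-01"),
)
conn.commit()

# Totals by year, format, and channel, ready for periodic reporting
for row in conn.execute("""
    SELECT strftime('%Y', distributed) AS year, format, channel, SUM(copies)
    FROM distribution GROUP BY year, format, channel
"""):
    print(row)
```

A structure like this makes the disaggregations the indicator calls for (language, type/format, channel, date) a single query rather than a manual tally.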
16 Number of delivery methods used to disseminate content, by type Captures the number and type of delivery methods used to disseminate or promote content and messages This indicator captures the number and type of delivery methods used to disseminate or promote content and messages across a KM project or specific activity. It can apply to a wide range of methods, such as online sources, web tools, print copies, and electronic offline devices. Examples of electronic offline delivery devices include flash drives, CD-ROMs, netbooks, tablets, eReaders, mobile phone apps, and portable audio devices. Quantitative data on the number of media types used and number of copies of product distributed (see Indicator 15) through each method and different formats for each method, such as ePub and Kindle for eReaders and Android and iPhone for phone apps. Administrative records Creating a spreadsheet or list designed specifically to track distribution/dissemination numbers is helpful. Quarterly Organizations and projects implementing KM activities need to assess the effectiveness of the method mix by disaggregating monitoring data by delivery method; over time they may decide to add/reduce the types of media according to these findings. The strategy to select certain delivery methods over others and/or to offer information through multiple methods should be based on a thorough understanding of users. How did you first learn about the [Web product]? (Select one.) o Announcement (e.g., email, paper) o Colleague's referral o Internet search o Conference/meeting o Promotional materials (e.g., fact sheet, flyer) o Link from another website o Social media (e.g., Facebook, Twitter) o Other, please specify __________ 36 2013 Count Wednesday, September 6, 2017 MEASURE Evaluation uses 14 communication methods to share news, publications, presentations, events, and conversations, including website, print and electronically produced publications, Monitor e-newsletter, Evaluate blog, SlideShare, YouTube, Twitter, Facebook, Flickr, LinkedIn, webinars, Knowledge Gateway, Google+, and Podcasts. Content from the Global Newborn Health Conference was distributed by at least 9 delivery methods, including live presentations, printed materials, Twitter, Facebook, websites, webcasts, email, Scribd digital document library, and blogs/blog posts.
17 Number of media mentions resulting from promotion Captures how many times a KM output has been mentioned in various forms of news media coverage This indicator captures how many times a KM output has been mentioned in various forms of news media coverage, such as print and online news sources, LISTSERVs, blogs/blog posts, television, or radio. A media mention usually indicates, to some degree, that the original output is recognized, credible, and considered authoritative. Quantitative data on the number of mentions in or on print, online, social media, television, or radio, and the numbers of people reached by each news media outlet, if available. Administrative records, media outlets, reports from clipping services, media monitoring services, and internet monitoring tools, such as Google Alerts and Yahoo Pipes Quarterly This indicator measures the media coverage of a KM output, or a group of KM outputs, and tracks the coverage to gauge the effect of reach, promotion, and outreach efforts. The media coverage can be about the KM output itself or about the issue or content featured in the KM output. News media coverage measures whether intermediaries thought their audiences would be interested and consider the issue important. Since the news media often help set political and policy agendas, an indicator of news media coverage can suggest whether policy makers might be influenced to give an issue greater priority. A news media strategy is a road map for reaching and influencing policy makers indirectly. An advantage of a media mention can be the potentially large population reached through this secondary/indirect method of dissemination. However, the total impact may not be great if the mention is brief and most of the people listening/watching are not interested. For web-based products, services, publications, and content, a web monitoring tool, such as Google Alerts or Yahoo Pipes, provides a quick and easy way to set up specific queries and monitor mentions in online media. A number of media monitoring services and software also cover print, television, social media, and other types of broadcasting. Several challenges may impede using a media coverage service. First, these services charge a fee, which may be beyond your project budget. Second, it can be difficult to capture all instances of media coverage, especially in broadcasts. A solution may be to organize staff when a news-making publication comes out, so they can monitor various news media outlets for coverage of the story. However, this means you need to have enough human resources to put toward this task. 36-37 2013 Count Wednesday, September 6, 2017 From July 2012 to June 2013, the K4Health project had 52 media mentions from promotion, meeting the annual project target of 50. Many of the media mentions were by various blogs managed by other global health organizations, such as the USAID Impact Blog; by news or announcements websites, such as News Medical; and by digital health news services, such as the Kaiser Daily Global Health Policy Report.
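Monitoring tools such as Google Alerts can deliver query results as a web feed, which makes tallying mentions over time straightforward. The sketch below is a minimal, hedged example in Python: it assumes mentions have been saved as an Atom XML file (the file name and this workflow are illustrative, not a prescribed method), and it counts feed entries by month.

```python
# A minimal sketch: tally media mentions by month from a saved Atom feed
# (e.g., a monitoring-alert feed). File name and workflow are assumptions.
import xml.etree.ElementTree as ET
from collections import Counter

ATOM = "{http://www.w3.org/2005/Atom}"

def mentions_by_month(feed_path):
    counts = Counter()
    root = ET.parse(feed_path).getroot()
    for entry in root.iter(f"{ATOM}entry"):
        published = entry.findtext(f"{ATOM}published", default="")
        if published:
            counts[published[:7]] += 1   # key on "YYYY-MM"
    return counts

# Example usage (assumes a saved feed file):
# print(mentions_by_month("alerts.xml"))
```

A monthly tally like this supports quarterly reporting against a target (such as the K4Health target of 50 mentions per year) without paying for a commercial clipping service, though it will only capture what the monitoring tool itself finds.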
18 Number of times a KM output is reprinted/reproduced/replicated by recipients Collects the number of specific cases an organization or independent body—other than the one that originally authored, funded, produced, or sponsored a KM output—decides to use its own resources to copy or excerpt all or part of the KM output This indicator collects the number of specific cases in which an organization or independent body—other than the one that originally authored, funded, produced, or sponsored a KM output—decides to use its own resources to copy or excerpt all or part of the KM output. "Reprint" is a term specific to publications and other print resources, while "reproduction" can apply to products and services, and "replication" can refer to approaches and techniques. Thus, the number refers not only to print copies, but also to online copies in any online medium or even any other KM events or approaches. Quantitative data from requests for approval or permission to reprint, reproduce, or replicate, which indicate the number of items produced and, if applicable, which parts of those documents; and/or copies or other evidence of reprinting, reproduction, or replication. Administrative records, letters, emails, communication of request and acknowledgment, or receipts and online pages that track use and downloads of web-based products, such as open source content management systems Quarterly Reprints, reproductions, and replicated approaches demonstrate demand for a particular KM output and extend the reach of the output beyond what was originally feasible. An added value of this indicator is that a desire to reprint, reproduce, or replicate suggests an independent judgment that the KM output is useful and of high quality. A limitation of this indicator is that the original publishers or developers have to rely on what is reported or sent to them or what they happen to come across after reprinting and reproduction. It is not possible to know with certainty the extent of reprinting and reproduction, as some re-publishers think they would not receive permission to reprint, so they do not tell the original publisher their materials are being used. Also, it may be difficult to find out the extent of dissemination, the identity of the recipients, or the use of the reprint. These limitations apply to both online and print media. 37-38 2013 Count Wednesday, September 6, 2017 OpenAid is a website platform designed and built by the USAID-funded Knowledge for Health Project to help small non-governmental organizations and international development projects quickly create cost-effective, program-focused websites (http://drupal.org/project/openaid). OpenAid was released in July 2012. As of June 2013, 60 different sites were using the OpenAid platform.
19 Number of file downloads Captures the number of times a file is downloaded from a website to a user's own electronic storage medium This indicator captures the number of times a file is downloaded or content is transferred from a website to a user's own electronic storage medium. Quantitative data from web server log files, web analytics, and/or content management system records Web server log files; web analytics software, such as WebTrends, Google Analytics, Piwik; content management system, such as Drupal and Joomla Quarterly Tracking the number of file downloads provides insight into which information products and topics website visitors most frequently save to their own electronic storage medium. In addition to tracking general trends, file download data can also help indicate how well promotional efforts and campaigns have reached online users. There are two ways to count downloads: by examining server logs or by using web analytics. Server logs are produced automatically on a typical web server and can help staff distinguish between partial and completed downloads. However, content and communications staff may need assistance from the hosting company or internal IT staff to access and understand server logs. A web analytics interface such as Google Analytics or the WordPress analytics plug-in uses tags and cookies to track web traffic and can be configured to track downloads. Once set up, this method requires less specialized IT knowledge than accessing or analyzing server log files. Analytics programs also often allow users to filter download data—for example, to show the geographic location of users who download a specific file. While analytics programs are easier to use, they still require a certain level of expertise, and a learning curve should be expected. For more information about Web analytics, see Appendix 3 on p.83. 38-39 2013 Count Wednesday, September 6, 2017 In the first quarter after launching social media channels, document downloads on the ICT for Ag community website (ictforag.org) increased nearly fivefold. The film In It to Save Lives: Scaling Up Voluntary Medical Male Circumcision (VMMC) for HIV Prevention for Maximum Public Health Impact (http://www.aidstar-one.com/focus_areas/prevention/resources/vmmc)—produced by AIDSTAR-One, funded by USAID, and managed by John Snow, Inc.—received a total of 3,789 plays between June 1, 2011, and June 30, 2012. Over 690 downloads were associated with the AIDSTAR-One VMMC materials. The film was downloaded from the AIDSTAR-One website 224 times, the discussion guide was downloaded 121 times, and the transcript was downloaded 123 times. The film was downloaded from 36 countries; the top five countries were United States, Kenya, Uganda, South Africa, and United Kingdom.
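To make the server-log approach described above concrete, here is a minimal Python sketch that counts downloads of one file type from an access log in the common "combined" format. The log path, the .pdf extension, and the treatment of HTTP 200 (complete) versus 206 (partial/resumed byte-range) responses are illustrative assumptions; real logs and hosting setups vary.

```python
# A minimal sketch: count completed vs. partial file downloads
# from a web server access log ("combined" format). Assumptions:
# log file name, .pdf extension, 200 = complete, 206 = partial.
import re
from collections import Counter

LOG_LINE = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+'
)

def count_downloads(log_path, extension=".pdf"):
    completed = Counter()   # HTTP 200: full file served
    partial = Counter()     # HTTP 206: byte-range (partial or resumed)
    with open(log_path) as log:
        for line in log:
            m = LOG_LINE.match(line)
            if not m or m.group("method") != "GET":
                continue
            path = m.group("path").split("?")[0]   # drop query strings
            if not path.lower().endswith(extension):
                continue
            if m.group("status") == "200":
                completed[path] += 1
            elif m.group("status") == "206":
                partial[path] += 1
    return completed, partial

if __name__ == "__main__":
    done, maybe = count_downloads("access.log")
    for path, n in done.most_common(10):
        print(f"{n:6d} completed downloads  {path}")
```

Distinguishing 200 from 206 responses is one simple way to approximate the partial-versus-completed distinction the guide attributes to server logs; an analytics tool configured for download events would give a different (click-based) count.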
20 Total number of pageviews Captures the total number of times a page is viewed by a visitor This indicator captures the total number of times a page is viewed by a visitor. Pageviews are measured when a page's tracking code is executed on a website. Quantitative data from web analytics Web analytics software, such as Google Analytics, Piwik, or WebTrends Quarterly Pageviews are a typical general measure of how much a website is used. In the early days of the Internet, use was measured in "hits." However, "hits" are no longer a meaningful measurement. A "hit" is a call to a web server for a specific action or element, but a modern web page is much more complex and can involve anywhere from one to hundreds of individual "hits." With pageviews, the trend is important, not the raw numbers. Watching a specific page's performance can be useful. For example, a spike in views of a specific page can indicate the success of a promotion. Web traffic varies greatly, depending on the size and scope of a website. If your site serves a small community of practice, do not compare your pageview count to a site that serves a broader audience. For more information about Web analytics, see Appendix 3 on p.83. 40 2013 Count Wednesday, September 6, 2017 The GNH Conference website (http://www.newborn2013.com/) was first launched in January 2013. It generated 29,562 pageviews through May 5, 2013. Between June 1, 2011, and June 30, 2012, the materials page of the film In It to Save Lives: Scaling Up Voluntary Medical Male Circumcision for HIV Prevention for Maximum Public Health Impact (http://www.aidstar-one.com/focus_areas/prevention/resources/vmmc/resource_packet) generated a total of 5,666 pageviews. The VMMC landing page, with the embedded video, generated 1,204 pageviews from 89 countries. About 20 percent of all pageviews were visits from Africa. Since MEASURE Evaluation started using SlideShare in June 2008, the project's 229 slides have received a total of 174,162 pageviews. Most of the pageviews came from the United States (35,731), Bangladesh (4,975), Ethiopia (4,460), Nigeria (2,930), Kenya (2,859), and India (2,831).
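Since the guidance above emphasizes trends over raw pageview counts, one simple way to operationalize it is to aggregate exported analytics data by month and flag unusual jumps (for example, after a promotion). The Python sketch below assumes a hypothetical CSV export with date, page, and pageviews columns; the column names and the spike threshold are illustrative.

```python
# A minimal sketch: monthly pageview trends and spike flagging from a
# hypothetical analytics CSV export (columns: date, page, pageviews).
import csv
from collections import defaultdict

def monthly_pageviews(csv_path, page=None):
    """Sum pageviews per calendar month, optionally for one page."""
    totals = defaultdict(int)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if page and row["page"] != page:
                continue
            totals[row["date"][:7]] += int(row["pageviews"])  # "YYYY-MM"
    return dict(sorted(totals.items()))

def flag_spikes(totals, factor=2.0):
    """Flag months exceeding the running mean of prior months by `factor`."""
    flagged, running = [], []
    for month, views in totals.items():
        if running and views > factor * (sum(running) / len(running)):
            flagged.append(month)
        running.append(views)
    return flagged

# Example: spikes = flag_spikes(monthly_pageviews("pageviews.csv"))
```

A flagged month is a prompt for interpretation, not a conclusion: cross-checking it against the promotion calendar is what links the spike to a cause.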
21 Total number of page visits Captures the total number of "visits" or individual interactions with a website This indicator captures the total number of "visits" or individual interactions with a website. According to the Web Analytics Association's web analytics definitions, a "visit" is an individual interaction with a website. If the individual leaves the website or does not take another action—typically requesting additional pageviews—on the site within a specified time interval, web analytics software considers the visit ended. Quantitative data from web analytics Web analytics software such as Google Analytics, Piwik, or WebTrends Quarterly Visits represent the number of times users have gone to and then left a website. A visit can range from a few seconds to several hours, and a single visitor can log multiple visits to a page, even in the course of a single day. Different web analytics programs define a visit differently. In general, a visit begins when a visitor views the first page of a website and ends when a criterion set by the analytics program is met, such as if the visitor does not click anything for 30 minutes. In addition to representing the volume of traffic to a website, visit numbers are used to compute other common indicators, such as average visit duration and average page depth—the average number of pages viewed during a visit to a website. Some organizations may find it useful to further qualify this indicator as it relates to key intended users, such as visitors from specific countries or regions. As with pageviews, the trend of total visits is more important than the raw numbers. And, while the total number of visits can provide insight into the total number of times people consulted a site, it cannot distinguish between repeat and one-time visitors. For more information about Web analytics, see Appendix 3 on p.83. 40-41 2013 Count Wednesday, September 6, 2017 Since launching in February 2011, visits to the ICTforAg community website (ictforag.org) have grown steadily from 200 visits per month up to 1,000 visits per month, peaking at over 2,000 visits in January 2013. During the month of April 2012, the K4Health website (www.k4health.org) received 60,371 visits, an average of 2,012 per day. In the 2012 calendar year, 22% (40,250) of visits to K4Health toolkits came from USAID family planning priority countries.
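The visit definition above (a session that ends after a fixed period of inactivity, commonly 30 minutes) can be made concrete in a few lines of code. This Python sketch groups timestamped pageview events into visits and computes average page depth; the event format and the 30-minute timeout are illustrative assumptions mirroring the common analytics default.

```python
# A minimal sketch of sessionization: a visit ends after 30 minutes
# of inactivity. Input format (visitor_id, timestamp) is an assumption.
from datetime import datetime, timedelta
from collections import defaultdict

TIMEOUT = timedelta(minutes=30)

def count_visits(events):
    """events: iterable of (visitor_id, datetime) pageview records."""
    by_visitor = defaultdict(list)
    for visitor, ts in events:
        by_visitor[visitor].append(ts)
    visits, pageviews = 0, 0
    for timestamps in by_visitor.values():
        timestamps.sort()
        last = None
        for ts in timestamps:
            if last is None or ts - last > TIMEOUT:
                visits += 1            # inactivity gap exceeded: new visit
            pageviews += 1
            last = ts
    avg_depth = pageviews / visits if visits else 0
    return visits, avg_depth

events = [("a", datetime(2013, 4, 1, 9, 0)),
          ("a", datetime(2013, 4, 1, 9, 10)),   # same visit (10-minute gap)
          ("a", datetime(2013, 4, 1, 11, 0))]   # new visit (gap > 30 min)
print(count_visits(events))                     # -> (2, 1.5)
```

Writing the rule out this way also shows why different analytics packages report different visit totals: change the timeout or the new-visit criterion and the counts change.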
22 Number of links to web products from other websites Captures the number of links, or URLs, located on another website that direct users to the publisher's website This indicator captures the number of links, or URLs, located on another website that direct users to the publisher's website. The referring website creates and maintains these links. Quantitative data from web analytics, webmaster tools, search engine optimization (SEO) tools Web analytics software, such as Google Analytics, Piwik, or WebTrends; webmaster reports, such as those from Google Webmaster Tools, Bing Webmaster Tools, or Alexa.com; SEO tools such as Majestic SEO or Open Site Explorer Quarterly The number of links and variety of referring sources directing traffic to an organization's online information products indicate both reach and authority. If reputable websites link to an organization's website or its online resources, one can reasonably argue that the destination resource has recognized the publisher's authority on a given topic. Some search engines can provide information on what other websites link to a specific site. For example, searching in Google for "www.mysite.com" returns a list of URLs that provide links to www.mysite.com. However, data from search engines are far from comprehensive, as most search engines make only partial data available in order to maintain the confidentiality of their ranking algorithms and to deter spammers. A more comprehensive view may be available through webmaster tools provided by services like Google or Bing. Like webmaster tools, SEO tools directed at online marketing professionals can provide similar link data. However, most SEO tools cost in the range of $75 to $150 per month, which is out of reach for many programs and small organizations. For more information about Web analytics, see Appendix 3 on p.83. 41-42 2013 Count Wednesday, September 6, 2017 As of January 2013, 5,917 sources provided referral links to web pages on www.k4health.org. As of August 2013, 940 websites link to www.measureevaluation.org.
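When link data are exported from a webmaster or SEO tool, a quick summary of reach is the number of unique referring domains versus total links. The Python sketch below assumes a hypothetical export with one referring URL per line; the file name and the www-folding rule are illustrative.

```python
# A minimal sketch: summarize a backlink export (one referring URL per
# line) into unique referring domains. File layout is an assumption.
from urllib.parse import urlparse
from collections import Counter

def referring_domains(path):
    domains = Counter()
    with open(path) as f:
        for line in f:
            url = line.strip()
            if not url:
                continue
            host = urlparse(url).netloc.lower()
            if host.startswith("www."):
                host = host[4:]        # fold www.example.org into example.org
            domains[host] += 1
    return domains

# Example: total links vs. unique referring sources
# links = referring_domains("backlinks.txt")
# print(sum(links.values()), "links from", len(links), "referring domains")
```

Reporting both numbers matters for this indicator: 5,000 links from 50 domains and 5,000 links from 5,000 domains suggest very different breadth of recognition.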
23 Number of people who made a comment or contribution Captures active sharing of programmatic experience and knowledge among people participating in KM outputs This indicator captures active sharing of programmatic experience and knowledge among people participating in KM outputs—usually those hosted online, such as professional network groups, communities of practice, forums, webinars, or social media blogs or sites like Facebook or LinkedIn. The online format makes data collection easy by digitally storing comments and contributions such as postings or materials uploaded into a platform. The number of contributors indicates how many have interacted with the other users and have shared their personal experiences, knowledge resources, or opinions with others. This count helps the organizer to assess the depth of user engagement. Quantitative data from sources that provide the number of participants, electronic records of postings from participants, identification of product or issue under discussion, characteristics of participants such as country/region where they work, organizational affiliation, job function or type, gender, and level of education. Qualitative data from content analyses of comments and contributions that provide more detailed information about user characteristics and the types and themes of contributions Administrative records of comments posted via LISTSERVs, discussion groups, communities of practice, or social media tools Quarterly Counting attendance is a valid measure, but it does not indicate the degree of engagement. The total number of people in attendance includes people who contribute significantly, those who share a little, and those who listen without contributing, otherwise known as lurkers. Lurkers are usually the majority, especially in virtual settings. Direct user interactions indicate interest in the subject matter, which in turn speaks to the relevance of the KM output. In addition, contributions suggest that the users feel encouraged and comfortable contributing; thus, they have developed a sense of community and belonging in a particular group, which may stimulate further knowledge sharing. However, the indicator does not usually suggest how the user will use the information/product/output in the future or whether the information will continue to spread through the professional networks of the attendees and contributors. For more information about Web analytics, see Appendix 5 on p.92. 42-43 2013 Count Wednesday, September 6, 2017 During the LeaderNet webinar on blended learning, 275 participants logged on from 56 countries, sharing 308 posts in English, Spanish, and French. As of June 2013, there were 7,924 subscriptions to 11 communities of practice managed by MEASURE Evaluation. During the project's fifth year (July 2012 – June 2013), 273 subscribers posted new insights and knowledge to the community LISTSERVs. In August 2013, MEASURE Evaluation shared a post on LinkedIn about the availability of its M&E materials for trainers. The post received 15 shares, 33 comments, and 16 likes in the Monitoring and Evaluation Professionals LinkedIn group. A blog post containing the same information received 21 Twitter shares and 16 Facebook shares.
24 Number/percentage of the intended users receiving a KM output who read or browsed it Measures the extent to which intended users have shown their interest in hearing messages or knowing more about content offered through a KM output This indicator measures the extent to which intended users have shown their interest in hearing messages or knowing more about content offered through a KM output. Quantitative data from self-reported information from intended users Bounce-back feedback forms; user surveys (print, online, email, or telephone) distributed after dissemination or promotion of a KM output Annually This indicator distinguishes the intended users who received a KM output but did not look at it from those who took the initiative to read or browse through it. Often, a survey begins with a filtering question asking whether the respondent has read or browsed a KM output. The answer to this question determines whether the respondent is qualified to answer subsequent questions about the usefulness and relevance of the KM output. It also provides a basis for gauging interest in the output or its topic among intended users. Think of the last time you visited the [Web product]. What types of information resources were you looking for? (Select all that apply.) o Research/journal articles o Reviews/syntheses o Fact sheets/policy briefs o Implementation guides/handbooks o Job aids (e.g., wall charts, flipcharts, checklists, memory cue cards) o Communication materials o Visual media (e.g., illustrations, photos, graphics, charts) o Training curricula o Other, please specify _____________ 46 2013 Count, proportion Wednesday, September 6, 2017 For the Global Newborn Health Conference’s Scribd digital document library, there were 9,042 reads of conference-related material from April 1 to May 3, 2013.
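The filtering logic described above translates directly into a count and a percentage. The sketch below uses hypothetical survey records; the field names received and read_or_browsed are assumptions for illustration, not a prescribed survey format.

```python
# Illustrative sketch: apply the filtering question before computing the
# indicator. The survey records and field names are hypothetical.
responses = [
    {"received": True, "read_or_browsed": True},
    {"received": True, "read_or_browsed": False},
    {"received": True, "read_or_browsed": True},
    {"received": False, "read_or_browsed": False},
]

recipients = [r for r in responses if r["received"]]
readers = [r for r in recipients if r["read_or_browsed"]]

share = len(readers) / len(recipients)
print(f"{len(readers)} of {len(recipients)} recipients ({share:.0%}) "
      "read or browsed the KM output")
```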
25 Number/percentage of the intended users who are satisfied with a KM output Measures an intended user’s overall satisfaction with a KM output This indicator measures an intended user’s overall satisfaction with a KM output. The classification of “satisfied” indicates that the output met the intended user’s needs and expectations. It is related to the user’s perception of the relevance and value of the content as well as to the manner in which that content is delivered and presented. Quantitative data from self-reported information from intended users. Satisfaction can be gauged on a scale, such as a Likert scale, that asks users to rate various attributes of the KM output. Qualitative data from interviews and focus group discussions. Feedback forms (digital or print) and user surveys (print, online, email, or telephone). Interviews and focus group discussions can capture further qualitative information. Annually Satisfaction is an overall psychological state that includes emotional, cognitive, affective (like/dislike), and behavioral responses to certain characteristics or to the output as a whole (Smith, 2012; Sullivan et al., 2007). Satisfaction with a KM output is an important predictor of user behavior. If users find the KM output satisfactory, it is likely that they will use the content and possibly change or adopt a new behavior, or make a different decision as a result of that content. In data collection instruments, the question about general satisfaction can be asked before more specific questions regarding aspects of usability and relevance. Please rate the following statements about the [Web product] layout and design: (1-Strongly disagree, 2- Disagree, 3-Not sure, 4-Agree, 5-Strongly agree) o The home page makes me want to explore it further. o The layout and design are clear and visually appealing. o It is easy to navigate through the different sections. o I am able to find the information I am looking for. o Screens/pages have too much information. o Screens/pages have too little information. o It is as easy or easier to find the information I am looking for, compared to finding the same information in other online resources (e.g., database, website, etc.). o It is as easy or easier to find the information I am looking for, compared to finding the same information in print resources (e.g., books, journals, etc.). 46 2013 Count, proportion, qualitative Wednesday, September 6, 2017 A 2017 paper evaluating MSH's internal Technical Exchange Networks (TENs) stated that satisfaction with the communities of practice increased between 2015 and 2017. Specifically, satisfaction with the frequency of posts and discussions increased from a rating of 3.38 to 3.85 after implementing targeted changes to improve the user experience. Other indicators of satisfaction also improved, including credibility and trustworthiness of content (+0.29) and quality of content shared through the TENs (+0.21).
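A common convention, though not one prescribed by this indicator, is to treat ratings of 4 (“agree”) or 5 (“strongly agree”) on a 5-point item as “satisfied.” The sketch below applies that assumed cut-off to hypothetical responses.

```python
# Illustrative sketch: summarize a 5-point Likert satisfaction item.
# Treating ratings of 4 and 5 as "satisfied" is an assumed convention,
# not a rule set by this indicator; responses are hypothetical.
ratings = [5, 4, 4, 3, 5, 2, 4, 4, 5, 3]

mean_rating = sum(ratings) / len(ratings)
satisfied = sum(1 for r in ratings if r >= 4)

print(f"Mean rating: {mean_rating:.2f} on a 5-point scale")
print(f"Satisfied: {satisfied}/{len(ratings)} ({satisfied / len(ratings):.0%})")
```

Reporting both the mean rating and the proportion satisfied is useful because a few strongly dissatisfied users can pull the mean down even when most respondents are satisfied.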
26 User rating of usability of KM output Measures user attitudes toward and satisfaction with the format, presentation, navigation, searchability, and delivery of a KM output This indicator measures user attitudes toward and satisfaction with the usability of a KM output. “Usability” covers a wide range of characteristics, such as the format, presentation, navigation, searchability, and delivery of a KM output. The terms “format” and “presentation” refer to the way design elements, content, and messages are laid out and organized. The term “format” refers more to technical and structural elements, while “presentation” refers more to the aesthetics. The user’s assessment of format and presentation influences an overall perception of usability. With web-based products, usability also includes navigation and the user interface. Quantitative data such as ratings can be collected using a scale, such as a Likert scale, to gauge reactions to statements related to writing style, design features, organization of the information, ease of finding information, appearance, and other aspects. Qualitative data can provide greater insight into user experience, attitudes, and preferences. Feedback forms or user surveys distributed with the KM output or after a KM output has been disseminated; interviews; focus group discussions; usability assessments Annually This indicator provides important information about whether intended users find a KM output to be usable, practical, logical, and appealing. The indicator also encompasses whether the organization or search functions of a KM output enable users to quickly find the information they want. To assess usability, it is helpful to conduct user surveys several months after a product or service has been disseminated, so that users have had time to use the product. For web-based products, accessibility and connectivity are important aspects of usability. To serve the broadest range of technological capacity, those designing internet-delivered products should consider building the digital space (website or web page) for users with low bandwidth by, for example, limiting the use of large graphical elements. Data collection instruments should address the loading times of web pages and downloads. Please rate the following statements about the [Web product] layout and design: (1-Strongly disagree, 2- Disagree, 3-Not sure, 4-Agree, 5-Strongly agree) o The home page makes me want to explore it further. o The layout and design are clear and visually appealing. o It is easy to navigate through the different sections. o I am able to find the information I am looking for. o Screens/pages have too much information. o Screens/pages have too little information. o It is as easy or easier to find the information I am looking for, compared to finding the same information in other online resources (e.g., database, website, etc.). o It is as easy or easier to find the information I am looking for, compared to finding the same information in print resources (e.g., books, journals, etc.). 47 2013 Categorical scale, qualitative Wednesday, September 6, 2017 K4Health conducted an interactive usability assessment of its website with 23 participants in order to examine how K4Health users would interact with the website and improve the user interface in the new design. Each participant was given a number of tasks and was observed by an interviewer/facilitator. Participants who browsed the site completed the task of locating a specified resource at a higher rate than those who used the search box; improving the search function and the relevance of search results therefore became a priority for the team designing the new website.
27 User rating of content and relevance of KM output Measures the perceived quality of content in a KM output and its relevance to user needs This indicator measures the perceived quality of content in a KM output and its relevance to user needs. “Content” means the information or knowledge conveyed in a KM output, as distinguished from format and presentation. “Relevance” indicates that intended users find the information or knowledge applicable and important to their professional work. Quantitative data from responses to questionnaires regarding content quality, importance, usefulness, and relevance, etc. User ratings can be collected using scales, such as a Likert scale, to gauge reactions to statements. Qualitative data can provide greater insight into user experience, attitudes, and preferences. Feedback forms or user surveys distributed with the product or after a KM output has been disseminated and promoted; interviews; focus group discussions Annually It is crucial for organizations and projects to obtain feedback from intended users and gauge the overall usefulness and relevance of content in the KM output. Such information can guide further enhancement, refinement, and development of the output. Each user has a unique professional role, set of needs, or action focus, and, therefore, assessments of the quality and relevance of content may vary. Stratifying the data by user group will help to understand the various users and their needs. In people’s perceptions, quality and relevance are likely to be intertwined. Users are unlikely to find content to be high-quality unless it is relevant to their needs. Thus, it is important to know user perceptions of relevance in order to interpret their judgment on quality. Please rate the following statements about the [Web product] content: (1-Strongly disagree, 2- Disagree, 3-Not sure, 4-Agree, 5-Strongly agree) o The content is complete, offering comprehensive coverage of [global health topic]. o The content is credible and trustworthy. o The topics covered are relevant to my work. o The information is of equal or higher quality than information on this topic I can find in other online resources (e.g., database, website, etc.) o The information is of equal or higher quality than information on this topic I can find in print resources (e.g., books, journals, etc.). 47-48 2013 Categorical scale, qualitative Wednesday, September 6, 2017 The survey results of the LeaderNet webinar on blended learning revealed that 97% of respondents found the discussions useful or very useful for their work, and 99% rated the seminar resources (the Blended Learning Guide) as useful or very useful for their work.
28 Number/percentage of the intended users who recommend a KM output to a colleague Measures how many intended users recommend a KM output to a colleague This indicator measures how many intended users recommend a KM output to a colleague. A “recommendation” is an endorsement of the output, indicating the recommender’s judgment that the output is a suitable resource for a particular purpose. The term “colleague” indicates a professional relationship. Quantitative data from self-reported information on recommendations received Feedback forms, user surveys (print, online, email, telephone), evaluations of extended professional networks, if feasible Annually The decision to recommend a KM output reflects a user’s assessment of its quality, relevance, and value (which can be captured by indicators 26 and 27). Recommendations also provide evidence that user-driven sharing is exposing a wider professional network to the KM output. Frequent recommendations may speak to the overall success of the KM output. It may be useful to distinguish a recommendation from a referral. A referral may reflect a judgment of relevance, but it can be quite casual; the referrer may know little about the KM output beyond its topic. A recommendation implies a judgment of quality. Both recommendations and referrals are worth tracking, as they can indicate secondary distribution. In data collection instruments, “recommending” needs to be clearly defined and distinguished from simple “referral” or “sharing.” To approximately how many colleagues or co-workers have you recommended the [Web product] or its resources? (Fill in the blank.) _________ colleagues Please give a specific example of how and what you have shared with your colleagues. (Open-ended.) 48 2013 Count, proportion Wednesday, September 6, 2017
29 Average pageviews per website visit Captures the number of times a web page is viewed, divided by the number of site visits This indicator measures the number of times a web page is viewed, divided by the number of site visits. (See indicators 20 and 21 for definitions of pageviews and visits, respectively.) Quantitative data from web analytics Web analytics software, such as Google Analytics, Piwik, and WebTrends Quarterly The average pageviews per visit gauges the visitor’s engagement with a website. A high pageview average suggests that visitors interact more deeply with the site. There is no specific “good” or “poor” average; rather, the site’s or page’s context determines what is a satisfactory average. “Pageviews per visit” is becoming a less useful measure on its own. In the early days of the internet, users tended to start at a homepage and browse through a site, so higher pageviews per visit were desirable. In the 2010s, however, the trend shifted significantly. Most users now find a specific page in a search result or posted on social media, go to the site for that one page, and then leave. This is valid user behavior and does not indicate that a website is not meeting user needs. Different types of sites will naturally have different averages. For example, an online course that leads people through content a page at a time will have higher average pageviews per visit than a blog. Again, the most important thing to monitor is trends over time. For more information about Web analytics, see Appendix 3 on p.83. 48-49 2013 Count Wednesday, September 6, 2017 From January 1, 2013 to July 31, 2013, 2,606 visits to the ICT for Ag website (ictforag.org) came from Africa, with an average of 3.15 pageviews per visit. During the month of December 2012, returning visitors to the Photoshare website (www.photoshare.org) viewed an average of 6.18 pageviews per visit, while new visitors averaged 2.04. Visitors to the DHS toolkit on www.k4health.org between November 1, 2012 and January 31, 2013 viewed an average of 2.72 pageviews per visit.
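The underlying arithmetic is simply total pageviews divided by total visits, which analytics packages report automatically. The sketch below illustrates it with assumed figures; the pageview total is a hypothetical back-calculation consistent with the ICT for Ag example above.

```python
# Illustrative sketch of the arithmetic: total pageviews divided by total
# visits. The pageview total is an assumed figure, not reported data.
pageviews = 8209
visits = 2606

print(f"Average pageviews per visit: {pageviews / visits:.2f}")  # -> 3.15
```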
30 Average visit duration of website visit Measures the mean length of time for visits to a website, calculated as the difference between the times of a visitor’s first and last activity during the visit and averaged for all visitors This indicator measures the mean length of time for visits to a website, calculated as the difference between the times of a visitor’s first and last activity during the visit and averaged for all visitors. Quantitative data from web analytics Web analytics software, such as Google Analytics, Piwik, and WebTrends Quarterly The average amount of time that visitors spend on a site can be an overall indicator of quality. Longer visits might suggest that visitors interact more extensively with the website, which may mean they find it a rich source of relevant information and knowledge. It can also mean that they are having trouble finding what they are looking for or are experiencing long page-load times. As with most web analytics indicators, average visit duration is a relative measure. The trends over time and the context of a product or service can help with interpreting the data. When setting your goals for visit duration, consider what most people come to your site to do. Are they reading a single page? Are they searching through a database? Are they taking an online course? How long should it take them to complete that task? If the average visit is much shorter than a task estimate, seek further insight: Why do users leave before completing a task? For more information about Web analytics, see Appendix 3 on p.83. 49-50 2013 Count Wednesday, September 6, 2017 From January 01, 2012 to December 31, 2012, the average visit duration on www.popline.org was 2 minutes, 50 seconds. Visitors in Nigeria, however, spent an average of 6 minutes, 11 seconds on the site. The POPLINE-wide pages per visit figure for this time period was 13.68, compared to 23.72 pages per visit for Nigerian users. This may indicate that Nigerian users engage more deeply than the average POPLINE user, or that navigating the website takes longer because of poor internet connectivity in Nigeria. From October 01, 2012 to December 31, 2012, the average visit duration on www.k4health.org for visitors from North America was 2 minutes, 49 seconds; from Africa, 4 minutes, 39 seconds; and from Asia, 2 minutes, 16 seconds. Slower internet connections can indeed inflate visit durations, but in this example a lower bounce rate (55% vs. 67%) and higher average pages per visit (2.94 vs. 2.52) for Africa indicate that African users are genuinely more engaged than the average K4Health site visitor. Google Analytics also provides a number of site speed indicators, including average page load time. In this example, Asian visitors experienced the slowest average page load times yet the shortest visits, further supporting the assertion that the longer African visits reflect engagement rather than slow connections.
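To make the definition concrete, the sketch below computes each visit’s duration as the gap between its first and last recorded activity and then averages across visits. The timestamps are illustrative; note that, as in most analytics tools, a single-action visit yields a duration of zero because there is no second timestamp to measure against.

```python
# Illustrative sketch: visit duration is the gap between a visit's first
# and last recorded activity, averaged across visits. Timestamps are
# hypothetical.
from datetime import datetime

visits = {
    "v1": ["2013-01-05 10:00:12", "2013-01-05 10:02:40", "2013-01-05 10:04:01"],
    "v2": ["2013-01-05 11:30:00", "2013-01-05 11:31:10"],
    "v3": ["2013-01-05 12:15:09"],  # one action: no second timestamp
}

fmt = "%Y-%m-%d %H:%M:%S"
durations = []
for stamps in visits.values():
    times = sorted(datetime.strptime(s, fmt) for s in stamps)
    durations.append((times[-1] - times[0]).total_seconds())

avg = sum(durations) / len(durations)
print(f"Average visit duration: {int(avg // 60)} min {int(avg % 60)} s")
```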
31 Number of citations of a journal article or other KM publication Measures the number of times a journal article or other KM publication is referenced in other information products This indicator measures the number of times a journal article or other KM publication, such as a book, guide, or white paper, is referenced in other information products. The number of citations represents the instances when the article or KM publication was used as evidence, as back-up information, or as supplementary knowledge in the development of another publication. Quantitative data from citation studies; Journal Citation Reports (Science Edition) or Journal Citation Reports (Social Sciences Edition) (Thomson Reuters, http://thomsonreuters.com/journal-citation-reports/) Citation studies; web search engines; citation indexes Semiannually This indicator is a collective measure of the perceived authority, quality, and importance of a scientific publication in the research community. The number of citations reflects the popularity of the topic and the importance of the findings. A limitation of indicators based on citation counts is that they do not apply to all types of KM outputs, only to published scientific literature, where influence in the scientific community is a goal and a sign of success. For many other KM outputs, such as a database or course curriculum, influence in the scientific community is not a primary goal. In some instances, KM practitioners and authors in low- and middle-income countries may not find this indicator useful. Even when influence in the scientific community is a goal, authors in developing countries often face biases and other limitations that make it difficult for them to make their work known to others in the scientific community. A related limitation is that many relevant journals published in developing countries are not included/indexed in some widely used databases such as MEDLINE. Internet search engines, such as Google Scholar, can only provide partial information on the number of times a publication is cited online. Citation reports are costly but easy to obtain from specialized services. 50-51 2013 Count Wednesday, September 6, 2017
32 Number/percentage of the intended users adapting a KM output Measures how many intended users adapt or alter a KM output to suit the user’s context This indicator measures how many intended users adapt or alter a KM output to suit the user’s context. “Adaptation” means the original KM output has been altered to suit the context of a specific set of users. Adaptation might entail translation (see indicator 33), simply changing terminology to locally used phrasing, or modifying artwork to depict a specific people or culture. It could also involve changing the KM output to take into account local policy, resource availability, and cultural norms. Adaptations also can include transfer to another medium, modules for training, abridgments, and new, expanded, or updated editions, when these actions are taken by organizations or people other than the original producer of the KM output. Quantitative data from user self-reporting regarding adaptation, including identification of the KM output adapted; the purpose, extent, and nature of the adaptation; and the end results or outputs from adaptation, if known. User surveys (print, online, email, telephone), requests for permission to adapt the output, requests for technical assistance with adaptation, requests for funding to make changes and disseminate the revised product Semiannually This indicator gauges the extended life and increased relevance that an information resource may gain when adapted to meet local needs. In fact, research shows that guidelines, for example, are more effective when they are adapted to account for local circumstances (NHS Centre for Reviews and Dissemination, 1999). When adaptations are undertaken independent of the original producer, they become evidence of the adaptors’ judgment that the output will be useful enough in their setting to merit the effort and cost involved in adaptation and production. While documenting adaptations is useful, it is not possible to know if the number of adaptations is accurate, as a user may adapt a publication without notifying the original authors, publisher, or developers. Please indicate if you have adapted information from the [Web product] as follows. (Check all that apply.) o I have translated information from English into a local language. o I have adapted information to better fit the context I work in. o I have adapted complex information to make it simpler to use. o I have used content that I have adapted, or that has been adapted by others. Please give an example of how you have translated or adapted specific information from the [Web product] and used it in your work. (Open-ended.) 51 2013 Count, proportion Wednesday, September 6, 2017 A 2017 paper evaluating MSH's internal Technical Exchange Networks (TENs) documented a 14 percentage point increase in intended users adapting or translating technical content sent through the communities of practice. The indicator was adapted to combine adaptation and translation into a single measure.
33 Number/percentage of the intended users translating a KM output Measures how many intended users translate a KM output to suit the user’s context This indicator measures how many intended users translate a KM output to suit the user’s context. “Translation” is a type of adaptation that refers to rendering written texts from one language into another. The demand for translations reflects the requesters’ assessment that the KM output would be useful and relevant to their local setting. Quantitative data from user self-reporting regarding translation, including identification of the KM output translated, purpose and extent of translation, and end results or outputs from translation, if known. Self-reported user surveys (print, online, email, and telephone), requests to translate the product, requests for technical assistance with translation or funding to translate Semiannually Translation can expand the reach and usability of a KM output by making it accessible to those who do not read/speak the language in which the output was originally created. It may be most common to translate outputs into widely used languages; still, other language versions can be important, particularly if needs for certain information/knowledge are especially great among specific populations or in specific regions. Please indicate if you have adapted information from the [Web product] as follows. (Check all that apply.) o I have translated information from English into a local language. o I have adapted information to better fit the context I work in. o I have adapted complex information to make it simpler to use. o I have used content that I have adapted, or that has been adapted by others. Please give an example of how you have translated or adapted specific information from the [Web product] and used it in your work. (Open-ended.) 51-52 2013 Count, proportion Wednesday, September 6, 2017 A 2017 paper evaluating MSH's internal Technical Exchange Networks (TENs) documented a 14 percentage point increase in intended users adapting or translating technical content sent through the communities of practice. The indicator was adapted to combine adaptation and translation into a single measure.
34 Number/percentage of intended users who report that a KM output provided new knowledge Measures the extent to which intended users report that they have learned from information and guidance presented in a KM output, and obtained new knowledge This indicator measures the extent to which intended users report that they have learned from information and guidance presented in a KM output, and as a result have obtained new knowledge. This is the stage in the diffusion of innovation process when a person first becomes aware of the existence of information and guidance and gains understanding of how it functions (Rogers, 2003). The acquisition of knowledge may take place consciously or unconsciously when a person encounters new information, but it results in the ability of the user to make decisions and take action (Milton, 2005). Quantitative data from survey self-reporting; qualitative data from anecdotal user reports Feedback forms or audience surveys distributed with the KM output or after its dissemination or promotion; in-depth interviews (telephone or in-person) Annually Understanding the extent to which a KM output provides new knowledge to users can help publishers gauge what level of knowledge—basic, intermediate, advanced—should be used in key materials. Survey and interview questions can be designed to gauge whether members of intended audiences have learned something new that is relevant to their work. Yes/no questions usually do not yield sufficient information on their own, but they can be followed up with an open-ended request for the most important point learned, which can then be assessed. Please rate the following statements about whether your knowledge has been affected by the [Web product]. (1-Strongly disagree, 2- Disagree, 3-Not sure, 4-Agree, 5-Strongly agree) o It reinforced and validated what I already knew. o It provided me with information that was new to me and useful for my work. o I have already seen the information in a different resource. Please give a specific example of knowledge validated or gained. (Open-ended.) 55 2013 Count, proportion, qualitative Wednesday, September 6, 2017 Approximately 80% of family planning service providers (n=82) who filled out a bounce-back survey enclosed in the Family Planning: A Global Handbook for Providers indicated that the handbook provided them with new information on who can and cannot use specific family planning methods safely. The survey about the LeaderNet webinar on blended learning revealed that 96% of the 98 participants who responded to the final seminar evaluation (36% response rate) indicated that they acquired skills or knowledge from the seminar that they could apply to their work.
35 Number/percentage of intended users who report that a KM output reinforced or validated existing knowledge Measures the extent to which intended users feel that the information presented in KM outputs supports their previously acquired knowledge This indicator measures the extent to which users feel that the information and experiential knowledge presented in KM outputs supports their previously acquired knowledge by reinforcing or validating it. Quantitative data from survey self-reporting; qualitative data from anecdotal user reports Feedback forms or user surveys distributed with the KM output or after its dissemination or promotion; in-depth interviews (telephone or in-person) Annually Reinforcement and validation can help to further transform health information and guidance into knowledge that is relevant and actionable for the user. It can also confirm the importance of the knowledge, reduce uncertainty, and increase the person’s confidence in continuing to use the knowledge. Validation is an important step in adopting and applying knowledge/innovation (Rekers, 2012). As with measurement of new knowledge acquisition (Indicator 34), in a cohort approach questions can be designed to gauge whether intended users have encountered any information or guidance that confirmed what they already knew. To obtain sufficient information, yes/no questions should be followed up with an open-ended request for respondents to provide specifics. Please rate the following statements about whether your knowledge has been affected by the [Web product]. (1-Strongly disagree, 2- Disagree, 3-Not sure, 4-Agree, 5-Strongly agree) o It reinforced and validated what I already knew. o It provided me with information that was new to me and useful for my work. o I have already seen the information in a different resource. Please give a specific example of knowledge validated or gained. (Open-ended.) 55-56 2013 Count, proportion, qualitative Wednesday, September 6, 2017
36 Number/percentage of intended users who can recall correct information about knowledge Measures the extent to which intended users can accurately recall the health information, lessons, and guidance offered by a KM output This indicator measures the extent to which members of intended audiences can accurately recall the health information, lessons, and guidance offered by a KM output. Pre- and post-assessment data on knowledge about a particular subject matter; self-report surveys, which are most useful when conducted after the knowledge/information has been available for some time; anecdotal reports from intended users Pre- and post-assessment instruments on selected subject matter, such as multiple-choice or true/false knowledge quizzes or tests; feedback forms or audience surveys distributed with the KM output or after its dissemination or promotion; in-depth interviews (telephone or in-person) Annually, or baseline/endline Correctly recalling information suggests that a person paid enough attention to it to remember it accurately later and/or that it was presented in a way conducive to learning and retention. Correct recall of information can be associated with effective knowledge development. It indicates an understanding of the knowledge or innovation, which may lead to better or more innovative application (Carneiro, 2000). As with Indicator 35, to obtain sufficient information, yes/no questions should be followed up with an open-ended request for respondents to provide specifics. 56 2013 Count, proportion, qualitative Wednesday, September 6, 2017
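When a baseline/endline design is used, this indicator reduces to comparing the share of correct answers before and after exposure to the KM output. The sketch below uses hypothetical quiz tallies to illustrate the comparison.

```python
# Illustrative sketch: compare correct recall at baseline and endline
# using pre-/post-assessment quiz tallies. All figures are hypothetical.
pre_correct, pre_total = 41, 100
post_correct, post_total = 68, 100

pre_rate = pre_correct / pre_total
post_rate = post_correct / post_total
change = (post_rate - pre_rate) * 100

print(f"Correct recall: {pre_rate:.0%} at baseline, {post_rate:.0%} at "
      f"endline ({change:+.0f} percentage points)")
```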
37 Number/percentage of intended users who are confident in using knowledge Measures the extent to which intended users think they have the necessary skills and are capable of using knowledge This indicator measures the extent to which members of the intended audiences think they have the necessary skills, authority, and opportunity to act and feel capable of applying knowledge. Quantitative data from survey self-reporting; qualitative data from anecdotal user reports Feedback forms or user surveys distributed with the KM output or after its dissemination or promotion; in-depth interviews (telephone or in-person) Annually To address behavior change, researchers need to measure several of its key components, especially an individual’s self-efficacy. Self-efficacy is a person’s confidence in their own ability to organize and execute actions to achieve desired goals (Bandura, 1986, 2006b). The availability of information, research findings, and lessons from others’ experiences can help build a person’s confidence to act. In addition to a simple statement about one’s confidence that can be answered with a yes/no question, KM researchers can develop and use specific confidence/self-efficacy scales tailored to the particular domain of functioning that is the object of interest (Bandura, 2006a). Do you feel confident using knowledge validated or gained in your work? o Yes o No Comments: 56-57 2013 Count, proportion, qualitative Wednesday, September 6, 2017
38 Number/percentage of intended users who report that information/knowledge from a KM output changed/reinforced their views, opinions, or beliefs Measures the extent to which intended users report that their views, opinions, or beliefs were changed or strengthened by information in the KM output This indicator gauges the extent to which audiences’ views, attitudes, opinions, or beliefs changed or were strengthened as a result of information and knowledge presented in the KM output. Views and opinions are a favorable or unfavorable state of mind or feeling toward something. Beliefs are contentions that people accept as true or real. Quantitative data from survey self-reporting; qualitative data from anecdotal user reports User surveys distributed with the KM output or after its dissemination; in-depth interviews (telephone or in-person) Annually Questions about whether audiences changed their views or opinions due to a KM output can help reveal whether the content was internalized. The persuasion stage in behavior change occurs based on assessment of attributes such as relative advantage, compatibility, complexity, observability, and trialability (Rogers, 2003; Sullivan, 2010). People often, although not always, act in ways that are compatible with their views. Consequently, those who feel favorably toward a new concept or innovation are more likely to act on it and adopt new behaviors in the future. Like questions about knowledge gained, questions about views need to determine both what views or opinions changed or were reinforced and in what direction. Please rate the following statements about whether your views and ideas have been affected by the [Web product]. (1-Strongly disagree, 2- Disagree, 3-Not sure, 4-Agree, 5-Strongly agree) o It provided me with information that changed my views, opinions, or beliefs. o It provided me with a new idea or way of thinking. Please give a specific example of how the [Web product] changed your views or gave you new ideas (e.g., favorable or unfavorable). (Open-ended.) 57 2013 Count, proportion, qualitative Wednesday, September 6, 2017
39 Number/percentage of intended users who intend to use information and knowledge gained from a KM output Measures the extent to which intended users plan to use knowledge and information gained from KM outputs This indicator measures the extent to which intended audiences plan to use knowledge/information, such as guidance or concepts, gained from KM outputs. This is the stage in the diffusion of innovation process that includes a person's intention to seek additional information about new knowledge or an innovation (Rogers, 2003). Self-reported information from users on their intention to change behavior or practice based on information from a KM output, including identification of the output and the purpose, scope, and nature of the intended application User surveys distributed with the KM output or after its dissemination (online, mail), informal (unsolicited) feedback, in-depth interviews (telephone or in-person) Annually This indicator reflects a person's acceptance of new knowledge and intention to act on it—intention to use precedes use. Measuring intention to use is important because it gives an indication of potential future use. Once users are exposed to new knowledge, they may expect to use it in the future even if they have not done so yet. In addition to capturing intention at the initial data collection phase, it is a good practice in ongoing monitoring to check back with respondents later, if possible, to find out if their plans have been carried out. In addition to the more commonly used “quantity of use” indicator and the quality-of-use and type-of-use indicators described in the Action sub-category, the “intent to use” indicator can provide evidence of the potential success of a KM intervention. Success in KM can be defined as capturing the right knowledge and getting that knowledge to the right audience to improve organizational or professional performance. Intention to use a KM output suggests that it will be used when needed. Please indicate whether or not you plan on using information from the [Web product] for the following purposes, using the scale. (1-Definitely not, 2-Unlikely, 3-Not sure, 4- Probably, 5-Definitely) o To inform decision making o To improve practice guidelines, programs, and strategies o To improve training, education, or research o To inform public health policies and/or advocacy o To write reports/articles o To develop proposals 57-58 2013 Count, proportion, qualitative Wednesday, September 6, 2017
40 Number/percentage of intended users applying knowledge gained from a KM output to make decisions (organizational or personal) Measures the extent to which intended users apply information and knowledge from KM outputs to make decisions at both individual and organizational levels This indicator measures the extent to which the intended audience applies information/knowledge from KM outputs to decision-making processes. It can apply to work-related decisions at both the individual and organizational levels as well as to personal decisions. Description of the information in the KM output that was used; approximate time frame of use; organization(s) involved; title, position, or role of person(s) involved; how users benefited or expect their clientele to benefit from applying the knowledge/innovation; description of the context of use; scope of application; and any further outcomes associated with use User surveys distributed with the KM output or after its dissemination; in-depth interviews (telephone or in-person) Annually This indicator examines how KM outputs, through their effect on users’ knowledge, affect their decision-making processes. Evaluators can ask those exposed to a KM output whether and how the information and knowledge presented by a KM output have affected their ability to make decisions. The data can be quantitative, such as the percentage of readers who made a decision based on the information, and qualitative, based on anecdotal information, such as what decisions respondents made based on the information. One notable challenge with this indicator is that audiences may have difficulty recalling not only which information influenced their decision-making choices, but also which KM outputs provided that information. Please indicate whether or not you have used information from the [Web product] for the following purposes. (Select all that apply.) o To make management decisions (either personal or organizational) o To design or improve projects or programs o To develop or improve policy or national service delivery guidelines o To develop training programs or workshops o To assist in designing education materials o To guide research agenda or methods o To put research findings into practice o To promote best practices o To write reports/articles o To develop proposals o To increase public awareness o To increase my own knowledge o Other, please specify __________ Please give an example of how you have used specific information from the [Web product] in your work. (Open-ended.) Please rate the following statements about performance areas affected as a result of using the [Web product]: (1-Strongly disagree, 2- Disagree, 3-Not sure, 4-Agree, 5-Strongly agree) o Based on something I have learned in it, I have changed the way I perform my job. o I have used information from it to improve my skills. o It has helped me to be more competent and effective at my job. o It has helped me to perform my job more efficiently. o It has helped to improve the performance of my organization. Please give a specific example of how the [Web product] has improved your own performance or your organization’s performance. (Open-ended.) 58 2013 Count, proportion, qualitative Wednesday, September 6, 2017
41 Number/percentage of intended users applying knowledge gained from a KM output to improve program, service delivery, training/education, or research practice Measures the extent to which intended users apply knowledge to improve practice guidelines, program design and management, training curricula, or research practice This indicator measures the extent of the use, and the outcomes of the use, of knowledge gained from KM outputs to improve practice guidelines, program design and management, training curricula, or research practice, resulting in better service delivery, more efficient programs, better training and education of health care personnel, or stronger research designs. Description of knowledge from KM outputs that was used, approximate timeframe of use, organization(s) involved, how programs or practice benefited from applying the information, and any further outcomes associated with use To obtain quantitative data, evaluators can count the instances of use of knowledge gained from a KM product or group of products. Alternatively, evaluators can calculate the percentage of respondents to a survey who said that they used knowledge gained from the KM product. User surveys (online, mail, telephone), usually distributed after the product has been disseminated; informal (unsolicited) feedback; in-depth interviews (telephone or in-person); guidelines or protocols referencing or incorporating information/knowledge from KM outputs Annually The purpose of this indicator is to trace how knowledge has been specifically used to enhance practice, programs, training, education, or research. One difficulty with measuring effect on practice is that audiences may not recall which particular piece of knowledge gained from what specific KM output was used and how it contributed to a defined outcome, particularly in a case-control approach, which begins with a change in practice and looks for factors that contributed to the change. The information in national guidelines is more likely to be adopted when it is disseminated through educational or training interventions than when guidelines are simply distributed in their original written form (NHS Centre for Reviews and Dissemination, 1999). When training and information resources are necessary components of the trainee’s education or where training is necessary to use an information resource, the training and the information resources constitute a package that should be evaluated as a whole. Anecdotal reports on use are valuable, particularly given the challenge of capturing and quantifying the use of information and outcomes of its use. It is helpful to collect in-depth stories from users of products or services, including reports on improvements, achievements, or problems that result from using a product or service. Please indicate whether or not you have used information from the [Web product] for the following purposes. (Select all that apply.)
o To make management decisions (either personal or organizational) o To design or improve projects or programs o To develop or improve policy or national service delivery guidelines o To develop training programs or workshops o To assist in designing education materials o To guide research agenda or methods o To put research findings into practice o To promote best practices o To write reports/articles o To develop proposals o To increase public awareness o To increase my own knowledge o Other, please specify __________ Please give an example of how you have used specific information from the [Web product] in your work. (Open-ended.) Please rate the following statements about performance areas affected as a result of using the [Web product]: (1-Strongly disagree, 2- Disagree, 3-Not sure, 4-Agree, 5-Strongly agree) o Based on something I have learned in it, I have changed the way I perform my job. o I have used information from it to improve my skills. o It has helped me to be more competent and effective at my job. o It has helped me to perform my job more efficiently. o It has helped to improve the performance of my organization. Please give a specific example of how the [Web product] has improved your own performance or your organization’s performance. (Open-ended.) 58-59 2013 Count, proportion, qualitative Wednesday, September 6, 2017 In the 2011 K4Health website users’ online survey, majorities of respondents (n=224) used the information obtained from the K4Health website to improve their knowledge (72%), to design or improve projects or programs (55%), and to promote best practices (52%). In the survey about the LeaderNet webinar on blended learning, when asked for examples of how they applied or plan to apply their new knowledge to their work, participants stated that they would apply the ADDIE model (consisting of 5 phases—analysis, design, development, implementation, and evaluation), set SMART objectives (consisting of 5 criteria—specific, measurable, attainable, relevant, and time-bound), thoroughly analyze the target audience, measure learning interventions beyond Kirkpatrick’s Level 1 and 2 (reaction and learning), apply blended learning strategies to their current learning challenges, and engage in Global Health eLearning courses.
42 Number/percentage of intended users applying knowledge gained from a KM output to inform policy Measures the extent to which intended users apply knowledge either to change or enhance existing policies or to develop new policies at any level of the health system This indicator measures the use of knowledge gained from KM outputs in policy formulation and the outcomes of that use. It covers efforts either to change or enhance existing policies or to develop new policies—at any level of the health system. Policies both reflect and affect the public interest and are considered keystones or necessary tools in making public health improvements. Self-reported information from audiences using the knowledge to inform policy; description of knowledge from a KM output used, approximate time frame of use, organization(s) involved, how policy formulation benefited from applying the knowledge, and any further outcomes associated with applying the knowledge Audience surveys (online, mail, telephone), usually distributed after the product has been disseminated; informal (unsolicited) feedback; in-depth interviews (telephone or in-person); copies of policies referencing, incorporating, or shaped by information/knowledge from KM outputs Annually Like the previous indicator on practice (indicator 41), the number of instances of use of knowledge gained from a KM product or group of products to inform policy can provide a quantitative assessment. Alternatively, evaluators can calculate the percentage of respondents to a survey who said that they used the knowledge gained from the KM product to shape policy. For more insight, it is important to follow up with an open-ended request for specifics. Evaluators can then create a case-study summary of the collected anecdotal evidence. Methodological challenges involved in measuring the role of knowledge in policy formulation include the often competing or reinforcing influences of other external forces or conditions, appropriate attribution, the long timeframe needed for changes to occur, shifting strategies and milestones, and policy-maker capacity and engagement (Reisman et al., 2007). It may not be easy for respondents to recall which particular knowledge gained from which specific KM output was used and how it contributed to the policy. Please indicate whether or not you have used information from the [Web product] for the following purposes. (Select all that apply.) o To make management decisions (either personal or organizational) o To design or improve projects or programs o To develop or improve policy or national service delivery guidelines o To develop training programs or workshops o To assist in designing education materials o To guide research agenda or methods o To put research findings into practice o To promote best practices o To write reports/articles o To develop proposals o To increase public awareness o To increase my own knowledge o Other, please specify __________ Please give an example of how you have used specific information from the [Web product] in your work. (Open-ended.) Please rate the following statements about performance areas affected as a result of using the [Web product]: (1-Strongly disagree, 2- Disagree, 3-Not sure, 4-Agree, 5-Strongly agree) o Based on something I have learned in it, I have changed the way I perform my job. o I have used information from it to improve my skills. o It has helped me to be more competent and effective at my job. o It has helped me to perform my job more efficiently. 
o It has helped to improve the performance of my organization. Please give a specific example of how the [Web product] has improved your own performance or your organization’s performance. (Open-ended.) 60 2013 Count, proportion, qualitative Wednesday, September 6, 2017
43 Number of operational guidelines developed and adopted to facilitate partnership activities, by type Refers to the number and type of guidelines, instructions, plans, or any other formal documentation developed and adopted to facilitate the operation and implementation of activities This indicator refers to the number and type of guidelines, instructions, plans, or any other formal documentation developed to facilitate the operation and implementation of activities in a partnership and adopted by all of the partner organizations. Operational guidelines may include a memorandum of understanding (MOU), annual work plan and budget, performance management plan, communications plan, or progress report. Quantitative and qualitative data from programmatic records Administrative/programmatic records, operational guideline documents Annually or semiannually The purpose of this indicator is to ensure that a set of rules and expectations for each partner organization and the mechanism for operating the partnership are clearly set. It provides objective measures of policy or program coordination, evidence of communication, and coordination of activities. In addition to documenting the number and types of operational guidelines, it is important to track how the partnership agreements are implemented and monitored, and to ensure that each partner organization understands its rights, roles, and responsibilities. Findings collected from this indicator can be further confirmed by indicators 44 (leadership and management) and 45 (shared vision). 2017 Count, qualitative Wednesday, December 13, 2017
44 Rating of the coordination roles and responsibilities undertaken by the leadership and management body in the partnership Measures the extent to which the leadership and management body, such as an advisory group or steering committee, guides and coordinates the work of the partnership, and partners’ perceptions of that leadership This indicator measures the extent to which the leadership and management body, such as an advisory group or steering committee, guides and coordinates the work of the partnership. For example, the performance criteria include that the leadership and management body: · understands and supports KM as key to the partnership’s success, such as establishing a KM strategy or using KM tools/techniques; · promotes partnership vision and identity; · encourages active participation by partner organizations; · shares accountability for achieving partnership goals; · has a clear and transparent governance structure to make mutually beneficial decisions; and · uses participatory processes to develop scopes of work and joint activities. Qualitative and quantitative data from responses to questionnaires (using Likert scales) regarding the perceptions of partner organizations about the performance quality and characteristics of the leadership and management body Periodic surveys, followed up with key informant interviews and focus groups as needed; checklist to measure governance, accountability, and so on, as specified in a collaboration agreement Periodically (before, during, and after specific activities or events) For the purpose of forming and sustaining a partnership, it is crucial to have a leadership and management structure that meets performance criteria identified by partner organizations. This indicator aims to periodically collect data on various performance criteria, including support for KM, to gauge how well the leadership and management body is operating. The indicator also helps the leadership and management body to assess its strengths and areas for improvement and to ensure that the partnership continues beyond personnel changes. The term “partnership” implies an equal relationship, and in order for a partnership to succeed from the start and over time, partner organizations must be willing to let go of some of their own power and control (IOD PARC, 2015; Harris & Wilkins, 2013). Therefore, it is essential for all parties to be involved in the establishment of clear governance and accountability structures, and to use performance monitoring indicators for ongoing planning, documentation of progress, and reflection, revision, and transformation of the appropriate leadership and management practices (ADB, 2010; IOD PARC, 2015). Further research is required to develop objective measures of these areas, including governance, accountability, and performance monitoring. 2017 Categorical scale, qualitative Wednesday, December 13, 2017
45 Level of commitment and support for shared vision Measures the extent to which the partnership vision is jointly created, shared, and understood by partner organizations This indicator measures the extent to which the partnership vision is jointly created, shared, and understood by partner organizations. For example, the quality criteria include that the shared vision: · contributes to knowledge sharing and use among partner organizations and between the partnership and its audiences, · builds an identity for the partnership, · addresses the common needs of the partnership, · aligns with goals of partner organizations, and · guides concrete actions and joint activities including the planning and production of KM outputs. Qualitative and quantitative data from questionnaires (using Likert scales) regarding the perceptions of partner organizations about the shared vision, using dimensions that are agreed upon for the partnership Periodic surveys, followed up with key informant interviews and focus group discussions, as needed Periodically (before, during, and after specific activities or events) This indicator aims to facilitate the planning, implementation, and monitoring of the shared vision set by partner organizations in their efforts to improve health and development outcomes via KM. Partnerships must be guided by a shared vision that builds trust and recognizes the value and contribution of all partner organizations. The understanding and acceptance of the importance of the shared vision leads to improved coordination of policies, programs, and service delivery (C. C. Fund, 2010). Although partners may believe they have a shared vision based on a cohesive set of common goals and a mutual understanding, to work together, each organization needs to understand how its own culture and practices impact and influence the relationship (IOD PARC, 2015). In developing a shared vision, it is crucial for each organization to understand the specific norms, values, and approaches of other partners (Harris & Wilkins, 2013). When applicable, measuring the perception of “acceptance of differences among partner organizations” as one of the criteria to measure the level of commitment and support for the shared vision may be useful. 2017 Categorical scale, qualitative Wednesday, December 13, 2017
46 Level of trust among partner organizations Measures each partner organization’s level of confidence in and willingness to open itself to the others This indicator refers to each partner organization’s level of confidence in and willingness to open itself to the other organizations, based on the following dimensions: · integrity – each organization is fair and just; · dependability – each organization will do what it says it will do; · competence – each organization has the ability to do what it says it will do; · credibility – each organization is well-respected among its respective audiences; and · risk management – each organization manages and mitigates potential common risks, which may include a shortage of resources or the departure of key members. Qualitative and quantitative data from questionnaires (using Likert scales) regarding the perceptions of partner organizations about trust, using dimensions that are agreed upon for the partnership Periodic surveys, followed up with key informant interviews and focus group discussions, as needed Periodically (before, during, and after specific activities or events) This indicator aims to look closely at the dimensions that characterize a relationship supported by mutual trust, which is one of the key elements of facilitating sound decision making and building successful partnerships. Trust has been widely studied as a measurable component of relationship quality, and some researchers have identified and used three measurable dimensions of trust: integrity, dependability, and competence (Ki & Hon, 2007; Paine, 2013). Other studies have shown that credibility and risk management are also key dimensions of trust measurement (Lister, 1997; Lee, 2001; ADB, 2011). Having data on these dimensions will help partnerships identify areas for improvement and become more authentic and transparent (Paine, 2013). 2017 Categorical scale, qualitative Wednesday, December 13, 2017
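To operationalize a Likert-based trust measure such as this one, responses can be averaged per dimension across responding partner organizations. The Python sketch below is illustrative only and is not part of the indicator definition; the dimension names follow the list above, and the response data and structure are hypothetical assumptions.

    # Illustrative sketch (hypothetical data): average 1-5 Likert ratings
    # per trust dimension across partner-organization respondents.
    from statistics import mean

    DIMENSIONS = ["integrity", "dependability", "competence",
                  "credibility", "risk_management"]

    # Each dict is one respondent's ratings, keyed by dimension.
    responses = [
        {"integrity": 4, "dependability": 5, "competence": 4,
         "credibility": 3, "risk_management": 4},
        {"integrity": 5, "dependability": 4, "competence": 5,
         "credibility": 4, "risk_management": 3},
    ]

    dimension_scores = {d: round(mean(r[d] for r in responses), 2)
                        for d in DIMENSIONS}
    print(dimension_scores)  # e.g., {'integrity': 4.5, 'dependability': 4.5, ...}

Reporting per-dimension averages, rather than a single composite score, preserves the diagnostic value of the dimensions when identifying areas for improvement.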
47 Level of satisfaction with the overall partnership Measures each partner organization’s level of favorable feeling toward the partnership because needs and expectations are positively reinforced This indicator refers to each partner organization’s level of favorable feeling toward the others and the partnership as a whole because needs and expectations related to the partnership are positively reinforced (Hon & Grunig, 1999). Satisfaction dimensions to ensure a positive relationship include the following: · collaboration – each organization works together to achieve the performance expectations set by the partnership; · complementarity – each organization selects skilled and committed staff with complementary skills and knowledge to serve as team members; · contribution – each organization provides resources and knowledge to design, manage, and monitor joint activities; and · coverage – each organization helps to find and reach new audiences through the partnership. Qualitative and quantitative data from questionnaires (using Likert scales) regarding the perceptions of partner organizations about satisfaction, using dimensions that are agreed upon for the partnership Periodic surveys, followed up with key informant interviews and focus group discussions, as needed Periodically (before, during, and after specific activities or events) Partnership satisfaction, along with trust, is a fundamental indicator for measuring and maintaining a positive relationship among organizations in a partnership. Satisfying relationships produce more benefits than costs, and success is determined, in part, by how well the partnership achieves the performance expectations set by partner organizations (Mohr & Spekman, 1994; Paine, 2013). A partnership that generates satisfaction exists when performance expectations have been jointly achieved (Mohr & Spekman, 1994). As part of the satisfaction measurement, it is important to look at the multiple dimensions proposed for this indicator: collaboration, complementarity, contribution, and coverage. To fully support each other’s work, partnership activities need to be integrated into the work of each organization and not considered “extracurricular” (King, 2014). This indicator focuses on the level of satisfaction with the overall partnership, rather than assessing each organization individually, because the latter approach may introduce bias: people may not be willing or able to judge other organizations accurately. 2017 Categorical scale, qualitative Wednesday, December 13, 2017
48 Number of joint activities to produce KM outputs, by type Measures the number of both new and continued activities that are jointly implemented to produce KM outputs for intended audiences This indicator refers to the number of both new and continued activities that are collectively implemented to produce KM outputs, such as products and services, publications and resources, training and events, and approaches and techniques, for intended audiences. Self-report of number of activities to produce KM outputs, by type; complementary data: self-report of number of KM outputs jointly produced, by type Administrative records and programmatic records, including planning/design records, and qualitative analyses of changes in product quality related to local relevance, accuracy, compelling design, and clearer writing Semiannually The purpose of this indicator is to ensure that each organization in a partnership is actively engaged in and contributing to activities to produce a variety of KM outputs. For example, some partnerships may create websites or use social media channels to increase awareness, share knowledge, or call for action, while others may focus on documenting lessons learned in the form of newsletters, case studies, or reports. Partnerships often allocate responsibility for components of the task to different partner organizations; for example, one organization manages the website and another produces the newsletter. It is important to systematically track those activities to gauge how well the partnership is integrating KM into its work. In addition to counting the joint activities to produce and maintain KM outputs, additional data should be kept for each activity, for example, scope/focus, duration/frequency, intended audience, and so on. Creating a simple spreadsheet is a useful way to document and organize this information about partnership activities, as sketched below. 2017 Wednesday, December 13, 2017
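As a concrete illustration of such a tracking spreadsheet, the Python sketch below writes a minimal CSV with one row per joint activity. The column names and the sample row are hypothetical; teams should substitute the fields agreed upon for their partnership.

    # Illustrative sketch (hypothetical fields and data): a minimal CSV
    # tracker with one row per joint activity to produce KM outputs.
    import csv

    FIELDS = ["activity", "km_output_type", "scope_focus",
              "duration_frequency", "intended_audience", "partner_orgs"]

    activities = [
        {"activity": "Quarterly partnership newsletter",
         "km_output_type": "publication",
         "scope_focus": "lessons learned",
         "duration_frequency": "quarterly",
         "intended_audience": "program managers",
         "partner_orgs": "Org A; Org B"},
    ]

    with open("joint_km_activities.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(activities)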
49 Number/percentage of partner organizations learning new and valuable information/knowledge produced from partnership activities, by type Measures the extent to which partner organizations and their audiences report that they have learned from knowledge jointly produced through partnership activities This indicator measures the extent to which partner organizations and their audiences report that they have become aware of and are learning from knowledge jointly produced through partnership activities and feel capable of applying that knowledge in their work. This indicator focuses on the value-generating type of knowledge, or expertise, that enables them to achieve their partnership goals and objectives (ADB, 2011). Quantitative data from self-reporting surveys; qualitative data from anecdotal user reports Periodic surveys, followed by key informant interviews and focus groups, as needed Annually The purpose of this indicator is to systematically document learning opportunities supported through partnership activities and KM outputs, particularly partner organizations producing value-generating types of knowledge, such as guidelines, lessons learned, or promising practices on technical topics. For example, High Impact Practices (HIPs) in Family Planning, which are a set of evidence-based family planning practices vetted by experts against specific criteria and documented in an easy-to-use format, provide specific value-generating types of knowledge. Value-generating types of knowledge are specifically relevant to KM and partnerships, and generally fall under three categories of knowledge areas: 1) sector/thematic, 2) research, and 3) operational (ADB, 2011): · sector and thematic knowledge – largely tacit know-how, but it can be made explicit through meetings, publications, and other mechanisms; · research knowledge – primarily published and, therefore, explicit, but it may also include tacit research know-how in specific subject areas and research methods, which should be distinguished from the explicit nature of basic health science research; · operational knowledge – primarily explicit know-how about the organizational framework; examples include operational policies, procedures, instructions, and processes. It is useful to link the learning of new knowledge among partner organizations to these categories and to assess in which areas partnership activities are adding particular value in achieving partnership goals and objectives. 2017 Count, proportion, qualitative Wednesday, December 13, 2017
50 Number/percentage of partner organizations using information/knowledge produced from partnership activities Measures the extent to which partner organizations apply knowledge gained from partnership activities This indicator measures the extent to which partner organizations apply knowledge gained from partnership activities. It is related to indicators 40 to 42, which measure actions regarding making decisions (organizational or personal), improving practice, or informing policy. Quantitative data from self-reporting surveys; qualitative data from anecdotal user reports Periodic surveys, followed by key informant interviews and focus groups, as needed Annually The purpose of this indicator is to trace how knowledge has been used by partner organizations for specific purposes, and how each organization has benefitted from that knowledge. This may include the use of knowledge by their intended audiences, such as policy makers, program managers, and service providers. To examine the use of knowledge, and the outcomes stemming from that use, data can be collected by asking partner organizations themselves or by observing their actions, when applicable. There are two main levels of inquiry: 1) the specific knowledge used (a countable item) and 2) the impact of the knowledge use (a qualitative appreciation of how the new knowledge affected the reporting partner). It is useful to link the use of new knowledge among partner organizations to the three categories: 1) technical/sector/thematic, 2) research, and 3) operational. Asking those who have been exposed to knowledge if they have applied it, how they have applied it, and what effect it had is relatively straightforward; however, observing the use of knowledge and the outcomes related to its use in real time is much more challenging. 2017 Count, proportion, qualitative Wednesday, December 13, 2017
51 Number of approaches developed, adapted, and/or adopted to facilitate adaptive management of a project, program, or initiative, by type Refers to the number and variety of approaches that have been developed, adapted, and/or adopted to facilitate iterative approaches to learning and adapting while implementing a project, program, or initiative This indicator refers to the number and type of approaches that have been developed, adapted, and/or adopted to facilitate iterative approaches to learning and adapting while implementing a project, program, or initiative to meet performance objectives. This may include the development of new approaches and/or the revision, adaptation, or adoption of existing ones to help foster adaptive management. Quantitative or qualitative data from programmatic records Administrative/programmatic records Annually, after work planning This indicator reflects the planning that a project, program, or initiative may undertake to prepare for and foster adaptive management. Using, adapting, or developing a range of approaches can signify a commitment by management to programmatic flexibility and to change that best works for a specific team, project, program, or initiative. Although this indicator counts approaches, more approaches are not necessarily better; projects, programs, and initiatives must determine the appropriate mix and number, and the intent is to verify that approaches to facilitate adaptive management have been intentionally selected. Although a project, program, or initiative may include adaptive management approaches in its work plan, this does not necessarily mean those approaches are used in decision making. Indicators in the Reflect subcategory address this common issue and challenge with applying adaptive practices. 2017 Count Wednesday, December 13, 2017
52 Leadership and staff support for adaptive practices Refers to the extent to which leaders and staff demonstrate support for the adaptive management of a project, program, or initiative This indicator refers to the extent to which leaders and staff of a project, program, or initiative support iterative approaches to learning and adapting. This may be self-reported or analyzed as a group or team, and may include awareness of the importance of adaptive practice. Qualitative and quantitative data from responses to questionnaires (using Likert scales) regarding the degree of support from leaders and staff for the use of iterative and adaptive approaches. Periodic surveys Annually, or after specific activities This indicator reflects staff and leadership perceptions of the utility of adaptive practices. Leadership that reinforces adaptive practices is a critical element of the adaptive management of a project. Self-reported data may be biased or may not empirically represent the context or practice. Sample question: On a scale of 1 to 5, where 1 is “not a lot” and 5 is “a lot,” to what extent do the leaders of this project, program, or initiative support the use of adaptive practices for managing it? 2017 Categorical scale, qualitative Wednesday, December 13, 2017
53 Number of training sessions or activities focused on adaptive practices that were preplanned in a project, program, or initiative work plan Refers to the number of training sessions or activities focused on iterative approaches to learning and adapting that were preplanned in a project, program, or initiative work plan This indicator refers to the number of training sessions or activities focused on iterative approaches to learning and adapting that were preplanned in a project, program, or initiative work plan. This may include trainings, workshops, and learning events, including knowledge exchange events, that are preplanned and budgeted to support the adaptive management of a project, program, or initiative. Quantitative data from programmatic records Administrative/programmatic records Annually, after work planning This indicator reflects the intentional use of adaptive management within a project, program, or initiative. Intentionally identifying adaptive management approaches in a work plan can signify a commitment by management to the importance of programmatic flexibility and change. Although a project, program, or initiative may include adaptive management approaches in its work plan, this does not necessarily mean those sessions are used in decision making. 2017 Count Wednesday, December 13, 2017
54 Number of training sessions or activities focused on adaptive practices Refers to the number of training sessions or activities delivered to increase awareness, understanding, or capacity in iterative approaches to learning and adapting This indicator refers to the number of training sessions or activities delivered to increase awareness, understanding, or capacity in iterative approaches to learning and adapting among staff in a project, program, or initiative. Training sessions or activities may be delivered by internal or external experts on topics such as adaptive management; monitoring, evaluation, and learning; complexity-aware programming; facilitation techniques; and collaborating, learning, and adapting. Quantitative data from programmatic records Administrative/programmatic records Annually, after work planning This indicator reflects the need to both train staff in projects, programs, or initiatives in adaptive management and support the implementation of adaptive practices. Without awareness, understanding, capacity, and time, adaptive practices remain ad hoc. Carefully planning and monitoring training and activities can signify a commitment by management to the importance of programmatic flexibility and change. Although a project, program, or initiative may include adaptive management approaches in its work plan, this does not necessarily mean those sessions are used in decision making. Although training sessions may be provided, there is no guarantee that participants were able to internalize the learning or will facilitate or use the training materials in the future. 2017 Count Wednesday, December 13, 2017
55 Number of people trained in adaptive practices Refers to the number of staff trained in iterative approaches to learning and adapting This indicator refers to the number of staff trained in iterative approaches to learning and adapting to improve projects, programs, or initiatives. Quantitative data from administrative records or reports that provide the number of participants, characteristics of participants, gender, and other relevant information Administrative records and reports After specific activities This indicator tracks the initial reach of adaptive practices. It is a simple way to establish a foundation of staff trained in programmatic flexibility and change. Although a project, program, or initiative may include adaptive management approaches in its work plan, this does not necessarily mean those sessions are used in decision making. Although training sessions may be provided, there is no guarantee that participants were able to internalize the learning or will facilitate or use the training materials in the future. 2017 Count Wednesday, December 13, 2017
56 Percentage of target staff reporting an improvement in capacity to use adaptive practices Refers to the percentage of target staff reporting an improvement in capacity (knowledge, skills, or abilities) to use adaptive practices This indicator refers to the percentage of target staff reporting an improvement in capacity (knowledge, skills, or abilities) to use adaptive practices for the management of a project, program, or initiative as a result of participating in training or other activities aimed at building capacity in adaptive management. Target staff may include partners. Self-reporting, pre- and post-evaluations, or follow-up surveys should be conducted to determine the extent to which there was an improvement in awareness, understanding, or capacity in iterative approaches to learning and adapting. Quantitative data from pre- and post-tests using survey questions and Likert scales to determine capacity to use adaptive practices, and follow-up assessments at three and/or six months to determine knowledge retention; qualitative data can provide greater insight into target user capacity Pre- and post-tests, follow-up surveys Quarterly, semiannually, or after specific activities This indicator can be used to monitor changes in capacity (awareness, knowledge, and skills) in adaptive practices over time (before and after a training or activity). It is a simple way to establish a foundation of staff trained in programmatic flexibility and change. Although a project, program, or initiative may include adaptive management approaches in its work plan, this does not necessarily mean those sessions are used in decision making. Self-reported data may be biased and may not empirically represent the context or practice. 2017 Proportion Wednesday, December 13, 2017
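As an illustration of the pre/post comparison behind this percentage, the sketch below counts target staff whose post-training self-rating exceeds their pre-training rating on a 1-5 Likert item. The paired scores are hypothetical.

    # Illustrative sketch (hypothetical data): percentage of target staff
    # whose post-training self-rating (1-5) exceeds their pre-training rating.
    pre_post = [(2, 4), (3, 3), (1, 4), (4, 5)]  # (pre, post) per staff member

    improved = sum(1 for pre, post in pre_post if post > pre)
    pct_improved = 100 * improved / len(pre_post)
    print(f"{pct_improved:.0f}% of target staff reported improved capacity")  # 75%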
57 Number of approaches, methods, tools, or events implemented for reflection and other adaptive practices Refers to the number of adaptive practices (approaches, methods, tools, or events) used to facilitate the adaptive management of a project, program, or initiative This indicator refers to the number of specific adaptive practices, including approaches, methods, tools, or events, used to facilitate the adaptive management of a project, program, or initiative. This may include the number of in-person learning events, after-action reviews, lessons-learned workshops, communities of practice, new technologies that facilitate increased ease and frequency of interaction, or other related iterative approaches to learning and adapting. Although this indicator measures the number of approaches, this should not suggest that it is better to use more approaches; the intent is to measure the intentionally selected approaches used to facilitate adaptive management. Quantitative or qualitative data from programmatic records, or self-report of the number of adaptive practices conducted, by type; qualitative data to provide greater insight into actual use by staff Administrative records and reports, self-report surveys Quarterly, semiannually This indicator reports the actual implementation of the planned adaptive practices that were identified for use in the project, program, or initiative. Although a project, program, or initiative has conducted adaptive management approaches and followed through on its identified work plan activities, this does not necessarily mean that the sessions were of high quality or contributed to programmatic improvements. Projects, programs, and initiatives may find it more useful to measure the proportion of staff using adaptive approaches; however, because the number of staff may change (expand and contract), that proportion may be difficult to measure consistently over time. 2017 Count Wednesday, December 13, 2017
58 Number of sessions or activities that include analysis of and/or reflection on monitoring data Refers to the number of sessions or activities focused on reflection on and analysis of monitoring data from a project, program, or initiative to inform performance and adjustments This indicator refers to the number of sessions or activities focused on reflection on and analysis of monitoring data from a project, program, or initiative to inform performance and adjustments. This may include modifying results reviews, data-quality assessments, or other monitoring and evaluation activities, which are implemented for accountability and to inform decision making, so that they include more learning and reflection. Quantitative data from administrative records or reports that provide the number of sessions or activities implemented with a focus on reflection and analysis of monitoring data Administrative records and reports Quarterly, semiannually, or after specific activities This indicator reports on the implementation of planned adaptive practices identified for use in the project, program, or initiative that specifically draw on routine or other monitoring data collected by the project, program, or initiative. Although a project, program, or initiative has conducted adaptive management approaches and followed through on its identified work plan activities, this does not necessarily mean that the sessions were of high quality or contributed to programmatic improvements. Quality can be assessed through user satisfaction surveys (see indicators 24 to 28); actions taken to make programmatic improvements can be assessed in an internal assessment. 2017 Count Wednesday, December 13, 2017
59 Number of actionable recommendations identified or collected to inform project, program, or initiative performance Refers to the number of actionable recommendations to inform project, program, or initiative performance or adjustments that were collected from the use of adaptive practices This indicator refers to the number of actionable recommendations to inform project, program, or initiative performance or adjustments that were identified or collected from the use of adaptive practices or from sessions or activities focused on reflection and analysis of monitoring data. The number can be used to calculate a percentage of action taken, as sketched below. Quantitative and qualitative data from the review and analysis of meeting minutes, reports, and other documentation to determine how many recommendations are actionable Administrative records and reports Quarterly, semiannually, or after specific activities This indicator reports the number of recommendations collected that are actionable. A recommendation should identify a point of contact or a timeframe for its use, rather than be a general statement that provides no next steps. This indicator should help staff to reflect on whether the stated recommendations can be used, by whom, and by when. By counting actionable recommendations, a percentage of actions taken can be calculated during an internal assessment (see Act subcategory). Although a project, program, or initiative has conducted adaptive management approaches and followed through on its identified work plan activities, this does not necessarily mean that the sessions were of high quality or contributed to programmatic improvements. Review and analysis of documentation can help determine how many recommendations are actionable, for example: the action falls within the scope of the project, program, or initiative; responsibility and next steps are clearly documented; and budget and time are allocated. Teams can determine how best to define “actionable” to meet the project, program, and/or initiative needs. 2017 Count Wednesday, December 13, 2017
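One minimal way to apply a team-defined “actionable” test and derive the percentage of action taken is sketched below. The criteria used here (a named owner and a due date) and the sample records are hypothetical; teams should substitute their own definition of “actionable.”

    # Illustrative sketch (hypothetical criteria and data): count
    # recommendations meeting an "actionable" test, then the share acted upon.
    recommendations = [
        {"text": "Revise the client intake form", "owner": "M&E lead",
         "due": "2018-03-01", "acted_on": True},
        {"text": "Improve coordination", "owner": None,
         "due": None, "acted_on": False},
        {"text": "Add a data review step", "owner": "Program officer",
         "due": "2018-06-15", "acted_on": False},
    ]

    actionable = [r for r in recommendations if r["owner"] and r["due"]]
    acted_on = [r for r in actionable if r["acted_on"]]
    pct_acted = 100 * len(acted_on) / len(actionable) if actionable else 0.0
    print(f"{len(actionable)} actionable; {pct_acted:.0f}% acted upon")  # 2 actionable; 50%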
60 Percentage of intended users who are satisfied with trainings, approaches, or events focused on adaptive practices Refers to the percentage of intended users who are satisfied with trainings, approaches, or events that focus on or promote adaptive practices This indicator measures the percentage of intended users who are satisfied with trainings, approaches, or events that focus on or promote the management of a project, program, or initiative through adaptive practices. A satisfied user indicates that intended needs and expectations were met. User feedback should inform future activities. Multiple data points will monitor change in user satisfaction over time. Quantitative data from self-reported surveys or questionnaires using Likert scales to determine user satisfaction with trainings, approaches, or events focused on adaptive practices; qualitative data can provide greater insight into user experience, attitudes, and preferences Surveys Semiannually or after specific activities The aim of this indicator is to gauge user satisfaction with the trainings, approaches, and events that were selected for adaptive practice purposes in the project, program, or initiative. This indicator does not measure the quality of the trainings, approaches, or events, as it looks only at self-reported satisfaction. It is possible for staff to be highly satisfied with a training or event, but for that training or event not to have an impact on the improvement of the project, program, initiative, or decision-making processes. 2017 Categorical scale, proportion, qualitative Wednesday, December 13, 2017
61 User rating of usefulness of content/outputs produced from the use of adaptive practices Measures the perceived quality and/or relevance of content/outputs produced from the use of adaptive practices to inform the management of a project, program, and/or initiative This indicator is the user rating of the quality and/or relevance of content/outputs produced from the use of adaptive practices to inform the management of a project, program, and/or initiative. “Content” refers to the outputs, such as action steps, that emerge from a training, workshop, session, or event. “Relevance” refers to whether intended users find the information or knowledge applicable and important to their work. Multiple data points are used to monitor change in user reporting of the usefulness of adaptive practices over time. Quantitative data from self-reported surveys or questionnaires using Likert scales to determine the quality or relevance of participating in activities intended to train in or use adaptive practices for program implementation Surveys, after-action reviews Semiannually or after specific activities This indicator measures the user rating of the quality of the outputs from adaptive practices used in the project, program, or initiative. It is a way to measure the usefulness or quality of the practices. This indicator does not measure the quality of the trainings, approaches, or events themselves, as it looks at self-reported usefulness. It is possible for staff to believe that the practices/outputs were of high quality or relevant, but for those practices/outputs not to have an impact on the improvement of the project, program, initiative, or decision-making process. Qualitative data can provide greater insight into user experience, attitudes, and preferences, and how quality can be improved. 2017 Categorical scale, qualitative Wednesday, December 13, 2017
62 Number of instances when target users report that their projects reused or adapted previously captured knowledge/resources to design or start a project, program, and/or initiative Refers to the use of previously captured knowledge for decision making while designing or starting a project, program, or initiative This indicator refers to the use of previously captured tacit and explicit knowledge for decision making while designing or starting an activity, such as a project, program, or initiative. This may include accessing previous after-action reviews, reviewing previous project documentation, conducting a peer assist with actors previously engaged in similar work, or conducting before-action reviews with the team. It can apply to work-related decisions and practices at the project, program, initiative, and individual levels. Anecdotal evidence may be included regarding whether a consultation has occurred or resulted in actionable recommendations. Quantitative and qualitative data from key informant interviews or focus group discussions, used to determine when previously captured knowledge or resources were used, reused, or adapted for a specific activity Key informant interviews, focus group discussions, after-action reviews Semiannually or after specific activities This indicator measures actions taken to put reflection and knowledge into practice. By asking open-ended questions, the indicator captures not just the number of instances of the material being used, but also the way it was used, how it was adapted, and how it was useful in the design or start-up of a project, program, or initiative. This indicator addresses a common challenge in adaptive management—taking the time to use previously captured knowledge and resources. The number can be used to calculate a percentage. Finding time for discussion and analysis can be challenging for a project, program, or initiative. This indicator is mostly used to evaluate the adaptive practices selected, not the program itself, which can result in a low prioritization of this indicator. Qualitative data collection can provide additional insights on the meaning of this indicator, and is strongly recommended because of potentially divergent points of view of what constitutes use, reuse, adaptation, and/or action. 2017 Binary (y/n), count, qualitative Wednesday, December 13, 2017
63 Number of instances when target users report the application of knowledge captured through reflection to inform decisions or to take corrective action Refers to the use of knowledge in decision making while taking corrective action or attempting to improve a project, program, or initiative This indicator refers to the use of tacit and explicit knowledge in decision making while taking corrective action or attempting to improve an activity, such as a project, program, or initiative. It measures the use of actions or outputs from reflective learning sessions in decision making or practice, which may include work planning, requests for technical assistance, modifications in project implementation or management activities, data review meetings, or identification of additional project or program needs. It can apply to work-related decisions and practice at the project, program, initiative, or individual level. Quantitative and qualitative data from key informant interviews or focus group discussions to determine when decisions were made based on knowledge, information, or insights captured during reflection Key informant interviews, focus group discussions, after-action reviews Semiannually or after specific activities The aim of this indicator is to measure actions taken to put reflection and knowledge into practice within the project, program, or initiative. By asking open-ended questions, the indicator captures not just the number of instances that knowledge was used to inform decisions or take corrective action, but also the way it was used, how it was adapted, and how it was useful. Finding time for discussion and analysis can be challenging for a project, program, or initiative. This indicator is mostly used to evaluate the adaptive practices selected, not the program itself, which can result in a low prioritization of this indicator. Measuring this indicator should be carefully considered, as it could require a considerable level of effort. Whether projects will want or need to measure this is debatable, but it is still an important measure of the preparation and reflection that resulted from previous subcategories. To make this indicator manageable, users may want to limit measurement to the actions taken from one specific adaptive practice, such as an after-action review. Qualitative data is strongly recommended. 2017 Count, proportion Wednesday, December 13, 2017
64 Degree of change in project norms or behaviors conducive to evidence-based decision making and action, as reported by target users Refers to the extent to which target users report a change in project or program norms and behaviors toward those conducive to evidence-based decision making and action This indicator refers to the extent to which target users report a change in project or program norms and behaviors toward those conducive to reflection and evidence-based action as an outcome of the use of adaptive practices. This may include changes in beliefs, opinions, and perceptions, of both the project, program, or initiative and the individuals themselves, regarding the value and benefit of the adaptive management of projects, programs, or initiatives. Quantitative data from surveys or questionnaires using Likert scales to determine the degree of change in norms or behaviors, for self and others; qualitative data from key informant interviews and focus group discussions on specific norms and behaviors reported to have changed and why, and the implications of those changes Surveys, key informant interviews, focus group discussions, after-action reviews Annually The aim of this indicator is to measure the usefulness of the adaptive practices in changing the culture of the project, program, or initiative to one that values evidence-based decision making and action. Norms are difficult to measure; there are many quality issues and little consistency in how norms are measured. Finding time for discussion and analysis can be challenging for a project, program, or initiative. This indicator is mostly used to evaluate the adaptive practices selected, not the program itself, which can result in a low prioritization of this indicator. It also is challenging, but not impossible, to get a good baseline of the project norms prior to the use of adaptive practices. Evidence Base for Collaborating, Learning, and Adapting (EB4CLA): https://usaidlearninglab.org/eb4cla 2017 Categorical scale, proportion, qualitative Wednesday, December 13, 2017
65 Degree that adaptive practices have contributed to the objectives of a project, program, or initiative Refers to the extent to which taking an adaptive approach has contributed to the achievement of project, program, or initiative objectives This indicator refers to the extent to which taking an adaptive approach has contributed to the achievement of project, program, or initiative objectives; specifically, whether an adaptive practice has made an impact on a project, program, or initiative. Generally, projects, programs, or initiatives should not be required to show the unique contributions of adaptive practice to their objectives; instead, they should indicate the use of tools, approaches, and processes demonstrated to contribute to the more efficient and effective delivery of objectives. Quantitative data from surveys or questionnaires using Likert scales to determine the degree of change in norms or behaviors, for self and others; qualitative data from key informant interviews and focus group discussions on specific norms and behaviors reported to have changed and why, and the implications of those changes Surveys, key informant interviews, focus group discussions, after-action reviews, research studies Once, at the conclusion of the project, program, or initiative The aim of this indicator is to measure the extent to which adaptive practices improved and/or influenced project, program, or initiative outcomes. Allocating project funds for this type of study, or building it into the evaluation of the project, program, or initiative, can be challenging due to competing priorities. It is unlikely that a comparable project, program, or initiative would be available to serve as a control, so any estimate of the difference would be minimal or extrapolated; however, one could compare outcomes across similar projects, one that used adaptive practices and one that did not. Ideally, the extent of this change would be shown through the collection of stories, cases, or examples about how adaptive practices contributed to a project, program, or initiative. It also is challenging, but not impossible, to get a good baseline of the project norms prior to the use of adaptive practices. Evidence Base for Collaborating, Learning, and Adapting (EB4CLA): https://usaidlearninglab.org/eb4cla 2017 Categorical scale, qualitative Wednesday, December 13, 2017
66 Number/percentage of group members able to articulate a shared vision Measures the number/percentage of participants who are able to articulate a shared vision, where a shared vision is defined as a common desired future state This indicator measures the number/percentage of group members who are able to articulate a shared vision, which is defined as a common desired future state of the group as it relates to the group’s goals and objectives. The greater the shared vision among group members, the more favorable attitudes will be toward knowledge sharing (Chow & Chan, 2008). Self-reported quantitative data; self-reported qualitative data describing the shared vision Surveys, in-depth interviews, focus group discussions, content analysis of communication shared among members Periodically (before, during, and after specific activities) The aim of this indicator is to determine whether group members are able to articulate a shared vision. Having a shared vision can positively influence attitudes toward knowledge sharing and norms around knowledge sharing, and can facilitate cooperation and collaboration. While a shared vision may have a positive impact on attitudes toward knowledge sharing and the quality of knowledge contributions, it has not been found to increase the quantity of knowledge shared, a finding that may be due to group members valuing quality over quantity of knowledge contributions (Chiu et al., 2006). Studies related to this construct are based on online communities/collaborations; further study is required in other types of communities (face-to-face and/or a combination of face-to-face and online). 2017 Count, proportion, qualitative Wednesday, December 13, 2017
67 Level of shared language among group members Measures the extent to which participants have a shared language, where language refers to the way in which participants exchange information, ask questions, and generally interact with each other This indicator measures the extent to which group members have a shared language. Language refers to the way in which group members exchange information, ask questions, and generally interact with each other. Shared language includes domain-related acronyms, jargon, terms, and other subtleties and underlying assumptions that arise from working within a professional domain (Chiu et al., 2006). Self-reported quantitative data using Likert scales to determine the level of shared language; self-reported qualitative data describing the level of shared language Surveys, in-depth interviews, focus group discussions Periodically (before, during, and after specific activities) The aim of this indicator is to determine the extent to which group members have a shared language. Many global health professionals belong to communities that help advance the field by discussing shared challenges and discovering solutions. Having a shared language allows people to better access others' information, enhances learning, facilitates an understanding of common goals, and generally expedites communication (Chiu et al., 2006; Nahapiet & Ghoshal, 1998; Lesser & Storck, 2001). Shared language positively affects knowledge quality but does not significantly impact the quantity of knowledge sharing (Chiu et al., 2006). Further research is required to better understand the role of shared language across modalities (online or face-to-face). 2017 Categorical scale, qualitative Wednesday, December 13, 2017
68 Number/percentage of group members who report knowledge sharing as a group norm Measures the number/percentage of participants who report knowledge sharing as a group norm, where norms are defined as expectations that guide behavior, reflect what a group considers normal, and represent typical and/or appropriate action This indicator refers to the number/percentage of group members who report knowledge sharing as a group norm. Norms represent a degree of consensus within a social system (Nahapiet & Ghoshal, 1998). Norms that support knowledge creation and sharing emphasize openness, teamwork, cooperation (not competition), willingness to value diversity, and an openness to criticism and failure (Leonard-Barton, 1995). Self-reported quantitative data using Likert scales to determine the degree of change in norms; self-reported qualitative data on specific norms reported to have changed and why, and the implications of those changes Surveys, in-depth interviews, focus group discussions Periodically (before, during, and after specific activities) The aim of this indicator is to measure knowledge-sharing norms. Positive knowledge-sharing norms can lead to favorable attitudes toward knowledge sharing and greater intention to share knowledge (Chow & Chan, 2008). Social pressure appears to have a positive influence on attitudes about and intention to share knowledge. Further research is required to better understand the role of knowledge-sharing norms across community modalities (online, face-to-face, and/or a combination of face-to-face and online). 2017 Count, proportion, qualitative Wednesday, December 13, 2017
69 Level of trust among group members Measures the level of trust among group members, where trust indicates confidence in others related to belief in their good intent, competence and capability, reliability, and perceived openness This indicator measures the level of trust among group members. Trust is a multidimensional construct that includes: a) belief in the good intent and concern of exchange partners, b) belief in their competence and capability, c) belief in their reliability, and d) belief in their perceived openness (Nahapiet & Ghoshal, 1998). Trust contributes to positive attitudes toward knowledge sharing and intention to share knowledge (Chow & Chan, 2008). Self-reported quantitative data using Likert scales to determine the level of trust; self-reported qualitative data describing the level of trust Surveys, in-depth interviews Periodically (before, during, and after specific activities) The purpose of this indicator is to gauge the level of trust among members in a community. Trust among members may facilitate cooperation and knowledge sharing (Nahapiet & Ghoshal, 1998; Nonaka, 1994). The role of trust in knowledge sharing needs further exploration: one study found that while trust influenced the quality of knowledge shared, it did not impact the quantity of knowledge shared (Chiu et al., 2006). Further research is required to better understand the role of trust across community modalities (online, face-to-face, and/or a combination of face-to-face and online). 2017 Categorical scale, qualitative Wednesday, December 13, 2017
70 Level of reciprocity in knowledge sharing Measures the extent to which knowledge sharing between two parties is perceived as mutual and fair This indicator measures the extent to which knowledge sharing between two parties is perceived as mutual and fair (Chiu, Hsu, & Wang, 2006). Self-reported quantitative data using Likert scales to determine the level of reciprocity; self-reported qualitative data describing the level of reciprocity Surveys, in-depth interviews Periodically (before, during, and after specific activities) The aim of this indicator is to gauge the level of reciprocity in knowledge sharing. In the context of communities of practice, reciprocity justifies the time and effort members spend sharing knowledge and can drive knowledge sharing (Chiu et al., 2006). Reciprocity is defined as knowledge exchanges that are perceived as mutual and fair (Chiu et al., 2006). Studies show that norms related to reciprocity can increase knowledge sharing (Wasko & Faraj, 2005); however, while norms of reciprocity may increase the quantity of knowledge shared, they were not found to have a positive influence on the quality of that knowledge (Chiu et al., 2006). Further research is required to better understand how norms of reciprocity affect both knowledge quality and quantity in networks that use both online and face-to-face modalities for knowledge sharing. 2017 Categorical scale, qualitative Wednesday, December 13, 2017
71 Number/percentage of members who express a sense of identification with a group Measures the number/percentage of participants who report a sense of identification with a group, where identification refers to a sense of belonging and inclusion This indicator measures the number/percentage of members who report a sense of identification with a group, where identification refers to a sense of belonging and inclusion (Chiu, Hsu, & Wang, 2006). Self-reported quantitative data; self-reported qualitative data describing identification with the group Surveys, in-depth interviews Periodically (before, during, and after specific activities) The aim of this indicator is to gauge the extent to which members identify with a group and feel a sense of belonging to it. Virtual communities stay together by virtue of the connections that community members have with one another and their shared areas of interest (Ardichvili et al., 2003). While identification is positively associated with, and may increase, the quantity of knowledge shared, it has not been found to have a positive influence on the quality of that knowledge (Chiu et al., 2006). Further research is required to better understand how identification impacts both knowledge quality and quantity in networks that use both online and face-to-face modalities for knowledge sharing. 2017 Count, proportion, qualitative Wednesday, December 13, 2017
72 Percentage of group members who have accessed knowledge from another group member in a given time period Measures access to knowledge through social connections This indicator measures the percentage of group members who have given knowledge to or received knowledge from another group member. Self-reported quantitative data; self-reported qualitative data. Typically, the denominator would be the total number of group members, and the numerator would be the number of group members reporting receiving and/or sharing knowledge. Surveys; focus groups or other qualitative data for exploration and validation Quarterly, semiannually, or after specific activities The aim of this indicator is to understand how well knowledge flows through social networks. It can also be used as a measure of the average level of connectivity of the network/organization/group as a whole, and can identify from how many sources group members are receiving or sharing knowledge. This indicator requires users to define the group of interest—such as an entire organization, a department or other functional team, a community of practice, or a Facebook group. Users will also need to define “knowledge” and "sharing of knowledge" in their particular context, as well as an appropriate time period, usually monthly or quarterly, for knowledge-sharing activities. 2017 Proportion Wednesday, December 13, 2017
73 Percentage of group members who have used knowledge from another group member in a given time period Measures the use of knowledge acquired through social connections This indicator measures the percentage of group members who have used knowledge acquired from another group member. Self-reported quantitative data; self-reported qualitative data. Typically, the denominator would be the number of group members reporting giving or receiving knowledge (the numerator from indicator 72), and the numerator would be the number of group members reporting having used that knowledge. Surveys; focus groups or other qualitative data for exploration and validation Quarterly, semiannually, or after specific activities The aim of this indicator is to understand the level of use of knowledge acquired through social networks. This indicator requires users to define the group of interest—an entire organization, a department or other functional team, a community of practice, or a Facebook group. Users will also need to define “knowledge” and "use of knowledge" in their particular context, as well as an appropriate time period, usually monthly or quarterly, for knowledge-sharing activities. 2017 Proportion Wednesday, December 13, 2017
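Because indicator 73 uses the numerator of indicator 72 as its denominator, the two proportions chain together. The sketch below illustrates both calculations with hypothetical counts; it is not part of either indicator's definition.

    # Illustrative sketch (hypothetical counts): indicator 72 (share of
    # members who gave/received knowledge) feeds indicator 73 (share of
    # those members who then used that knowledge).
    group_size = 40            # all group members (denominator, indicator 72)
    accessed = 25              # gave or received knowledge this quarter
    used = 15                  # of those 25, reported using the knowledge

    pct_accessed = 100 * accessed / group_size   # indicator 72 -> 62.5%
    pct_used = 100 * used / accessed             # indicator 73 -> 60.0%
    print(f"Accessed: {pct_accessed:.1f}%  Used: {pct_used:.1f}%")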
74 Percentage of teams or subgroups that group members have accessed in a given time period Measures access to diverse (or heterogeneous) social connections This indicator measures the diversity of subgroup memberships, such as a department, working group, committee, community of practice, or Facebook group, and focuses on access to diverse connections. Self-reported quantitative data; self-reported qualitative data. Typically, the denominator would be the total number of available functional teams or subgroups in an organization, and the numerator would be a group member's number of functional team or subgroup memberships. Surveys; focus groups or other qualitative data for exploration and validation Quarterly, semiannually, or after specific activities Connection diversity, or heterogeneity, supports the flow of knowledge through networks. The purpose of this indicator is to understand how many different functional teams or other types of subgroups each group member is connected to, and the average level of connection diversity of the network/organization/group as a whole. It can also inform an understanding of the level of fragmentation—the lack of connections between functional teams or subgroups—of the network/group as a whole. This indicator requires users to define the group of interest—such as an entire organization, a department or other functional team, a community of practice, or a Facebook group—as well as key subgroups. For example, an entire organization may be defined as the group, with departments, committees, internal communities of practice, or roles as subgroups. Users will also need to define “knowledge” and "sharing knowledge" in their particular context, as well as an appropriate time period, usually monthly or quarterly, for knowledge-sharing activities. 2017 Proportion Wednesday, December 13, 2017
75 Percentage of teams or subgroups that group members have used in a given time period Measures the use of diverse (or heterogeneous) social connections This indicator measures engagement in diverse subgroup memberships, such as a department, working group, committee, community of practice, or Facebook group, and focuses on the use or value of diverse connections. Self-reported quantitative data; self-reported qualitative data. Typically, the denominator would be a group member's total number of team or subgroup memberships (the numerator from indicator 74), and the numerator would be the number of those functional teams or subgroups in which the group member is engaged. Surveys; focus groups or other qualitative data for exploration and validation Quarterly, semiannually, or after specific activities Connection diversity, or heterogeneity, supports the flow of knowledge through networks. The aim of this indicator is to understand how many different functional teams or other types of subgroups each group member is engaged in, and the average level of use of these connections by the network/organization/group as a whole. This indicator requires users to define the group of interest—such as an entire organization, a department or other functional team, a community of practice, or a Facebook group—as well as any subgroups. For example, an entire organization may be defined as the group, with departments, committees, internal communities of practice, or roles as subgroups. Users will also need to define “knowledge” and "engagement" in their particular context, as well as an appropriate time period, usually monthly or quarterly, for knowledge-sharing activities. 2017 Proportion Wednesday, December 13, 2017
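Indicators 74 and 75 chain in the same way as 72 and 73: indicator 75's denominator is indicator 74's numerator for each member. The single-member sketch below uses hypothetical counts and is illustrative only.

    # Illustrative sketch (hypothetical counts) for one group member:
    # indicator 74 = memberships held / subgroups available;
    # indicator 75 = subgroups actively engaged in / memberships held.
    subgroups_available = 12   # all functional teams or subgroups defined
    memberships = 4            # subgroups this member belongs to
    engaged = 3                # of those 4, actively engaged in

    pct_access_diversity = 100 * memberships / subgroups_available  # 74 -> 33.3%
    pct_use_diversity = 100 * engaged / memberships                 # 75 -> 75.0%
    print(f"Access: {pct_access_diversity:.1f}%  Use: {pct_use_diversity:.1f}%")

Averaging these per-member percentages across all members gives the group-level connection diversity described above.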
76 Number/percentage of group members who believe that knowledge sharing will produce positive outcomes Measures the anticipated outcomes of a specific behavior by group members This indicator measures the anticipated outcomes of knowledge-sharing behavior. Outcome expectations refer to an individual's belief that a certain behavior, such as knowledge sharing, will lead to a specific outcome, such as recognition; they are the outcomes expected from performing a certain behavior (Murray-Johnson et al., 2001; Bandura, 1977). Self-reported quantitative data using Likert scales; self-reported qualitative data on outcome expectations reported to have changed and why, and the implications of those changes Surveys, in-depth interviews Periodically (before, during, and after specific activities) Outcome expectations can be measured at the community or individual level. In the context of KM, individuals may share knowledge because they expect that they will be viewed as skilled, knowledgeable, or respected (Chiu et al., 2006). Community outcome expectations have a positive influence on both the quality and quantity of knowledge shared (Chiu et al., 2006). Further research is required to better understand how outcome expectations impact both knowledge quality and quantity in networks that use both online and face-to-face modalities for knowledge sharing. 2017 Count, proportion, qualitative Wednesday, December 13, 2017
77 Number/percentage of group members confident in their ability to share knowledge Measures the extent to which participants are confident in their ability to share knowledge This indicator measures the extent to which group members are confident in their ability to share knowledge (perceived self-efficacy). As perceptions of self-efficacy increase, individuals are more likely to change their behavior. Self-reported quantitative data using Likert scales; self-reported qualitative data on perceived self-efficacy reported to have changed and why, and the implications of those changes Surveys, in-depth interviews Periodically (before, during, and after specific activities) The aim of this indicator is to measure self-efficacy related to knowledge sharing, as perceived self-efficacy influences motivation and behavior (Bandura, 1986). In the context of knowledge sharing, Hsu et al. (2007) describe a host of perceived capabilities to consider, such as generating, combining, and sharing knowledge. An individual's perceived knowledge-sharing self-efficacy has a positive effect on their knowledge-sharing behavior (Hsu et al., 2007). Further research is required to better understand the impact of knowledge-sharing self-efficacy on knowledge-sharing behavior, beyond current research that compares relationships in virtual, face-to-face, and combined virtual and face-to-face communities. 2017 Count, proportion, qualitative Wednesday, December 13, 2017