
Innovations in Geographic Information Systems Mapping Technology: GIS Working Group Meeting, October 2017

To promote and improve the use of geospatial data by the implementing partners of the United States President’s Emergency Plan for AIDS Relief (PEPFAR), MEASURE Evaluation—funded by the United States Agency for International Development (USAID) and PEPFAR—convened a meeting of the Geographic Information Systems (GIS) Working Group in Washington, DC, on October 23, 2017. The group has been meeting at least annually since 2000, giving GIS specialists and users a regular opportunity to share their experiences with spatial data and platforms, and to keep up to date on recent developments in GIS technology and its uses for global public health. Over the years, several springboard discussions from these meetings have resulted in publications and have led to further collaborative work within the project.

This report shares the insights, innovations, and research that engaged the working group at this meeting. Presentations covered a wide array of topics but can be distilled to two overarching ones: “innovations” and “research and discoveries.” The Innovations section of this report describes presentations related to new tools, technologies, and other offerings of some of our guest experts. The Research and Discoveries section showcases some of the work that our presenters have done on upcoming tools, techniques, and data analysis.

Access the report and meeting presentations.

Data Use in the Democratic Republic of the Congo’s Malaria Program: Results from Seven Provinces

Evidence-informed decision making is essential for the success of health systems, programs, and services. Global commitments to improving health systems and outcomes have led to improved monitoring and evaluation (M&E) and health information systems, thus providing an opportunity to use data for decision making and not simply for reporting.

Overall, the relationships among improved information, demand for data, and continued data use constitute a cycle that leads to improved health programs and policies. Improving data demand and use (DDU) is necessary to improve the effectiveness and sustainability of a health system.

MEASURE Evaluation, which is funded by the United States Agency for International Development and the United States President’s Malaria Initiative, undertook an assessment to understand the data use context for those working in the Democratic Republic of the Congo (DRC) in the National Malaria Control Program (NMCP) at the provincial and health zone levels in seven provinces (Bukavu, Haut Lomami, Kasai Central, Kasai Oriental, Lomami, Sankuru, and Tanganyika), as well as implementing partners working with the NMCP at the provincial level. The purpose of this assessment was to identify how data are currently being used for decision making and how future interventions can be designed to promote the demand for and use of data in decision making.

Access the resource.

Evaluation Capacity Building: A Collective Learning Experience

By Stephanie Watson-Grant, DrPH, MEASURE Evaluation

Capacity building for rigorous evaluations is not a phrase that was part of my vocabulary ten or even five years ago. I got involved in evaluation capacity building (ECB) through my work in Liberia, where I was part of a team from MEASURE Evaluation working with the Ministry of Health and M&E officers from seven counties on outcome monitoring studies. I loved working with my Liberian colleagues. They were enthusiastic about the activities, and an assistant minister of health always came to our training and dissemination events.

My project—MEASURE Evaluation, funded by the United States Agency for International Development (USAID)—was ending one of its phases as we developed a three-year process for capacity building to help Liberians eventually conduct future studies. I didn’t get the opportunity to implement the plan, but my experience in Liberia led to my coordinating a group to develop capacity planning guidance for our project’s next phase.

We developed guidelines and a capacity-building plan and then we wondered: what does capacity building for evaluation look like in the real world? Where and how do we apply it?

The opportunity came in Kenya. An evaluation was in the works to monitor outcomes of activities to assist orphans and vulnerable children. As the three studies were being planned, a colleague and I worked with the study lead to design a way to partner with a local research organization. The experience was an excellent start and gave us a lot to think about. So, we did it again in Kenya and in Malawi, South Africa, and Uganda.

I learned so much from this experience. For starters, there are parts of evaluation capacity building I had never considered. I realized it was more than a methods outline, sampling plan, and data analysis to demonstrate change. Those abilities are critical, but others are equally important—for example, ensuring effective leadership and that the appropriate staff are involved; operations and management processes with a known communication structure and a sound work plan; skills and experience with electronic data capture and analysis; efficient data collection and data management; and—probably most important—the ability to share understandable findings and use them to improve health outcomes.

As we implemented the assessment tool and capacity plan in different settings, we tweaked them as we learned what evaluation capacity building looks like in practical terms. For example, the assessment process was necessarily subjective, because we shared a collaborative learning experience. The planning process was unavoidably simple, because it had to be achievable within the timeline of our contract with the partner.

The lesson I take away is we aren’t really building capacity, we are enhancing existing capacity—ours and theirs. Our research partners are already high-functioning research entities when we start working with them. But, through our work together, through our shared experience, we’ve each left the other a little more capable.

For more information, visit:

Stages of Health Information System Improvement: Strengthening the Health Information System for Improved Performance

This brief describes a suite of tools under development by MEASURE Evaluation to provide systematic guidance on how to assess the existing status of a health information system (HIS) and identify specific improvements that take an HIS through a defined progression toward optimum functioning. The goal of this suite of tools is to answer the question: “What are the stages of HIS development?”

Access the brief.

Health Information Systems Interoperability Toolkit


Enabling exchange of data between disparate health information systems—or interoperability of health information systems—holds great promise for overcoming barriers to data quantity, quality, and accessibility.

Many low-resource settings, however, do not have the guidance and tools to assess their capacity to implement interoperable systems. Some factors critical to successful implementation of interoperable information systems have not previously been well-defined. To address this gap, the MEASURE Evaluation project, funded by the United States Agency for International Development (USAID), in collaboration with the Digital Health and Interoperability Technical Working Group of the Health Data Collaborative, has developed an HIS Interoperability Maturity Toolkit.

The kit contains three main pieces: a maturity model, an assessment tool, and a users’ guide. It also offers a complete list of the references consulted in a literature review that was conducted as part of the toolkit’s development.

The HIS interoperability maturity model identifies the major components of HIS interoperability and lays out an organization’s growth pathway through these components. Countries can use the assessment tool to determine their HIS interoperability maturity level systematically. Using the assessment results, countries can create a path toward strengthening their HIS interoperability and building resilient systems.
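To make the assessment idea concrete, here is a minimal sketch of how per-component maturity scores might be rolled up into an overall level. The domain names, the five-level scale, and the roll-up rule are all illustrative assumptions for this example, not the toolkit’s actual structure or scoring method.

```python
# Hypothetical sketch only: domain names, the 1-5 scale, and the
# roll-up rule are assumptions, not the toolkit's actual model.

LEVELS = {
    1: "Nascent",
    2: "Emerging",
    3: "Established",
    4: "Institutionalized",
    5: "Optimized",
}

def overall_maturity(domain_scores: dict) -> str:
    """Summarize per-domain scores (1-5) as an overall maturity label."""
    for domain, score in domain_scores.items():
        if score not in LEVELS:
            raise ValueError(f"score for {domain!r} must be 1-5")
    # A conservative roll-up: overall maturity equals the lowest
    # domain score, since interoperability is limited by its weakest
    # component. (This choice is an assumption for illustration.)
    return LEVELS[min(domain_scores.values())]

scores = {"governance": 3, "workforce": 2, "technology": 4}
print(overall_maturity(scores))  # -> Emerging
```

A roll-up like this makes the growth pathway visible: raising the weakest component is what moves the overall level.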

The toolkit is version 0.5. In the coming months, we will be learning from early adopters and pilot testing the toolkit. In late 2018, we will publish an updated version with material gleaned from lessons learned and knowledge gained from users.

Access the toolkit.

How Social Networks Can Improve the Use of Data

A social network (SN) may be defined as an electronic platform that allows participants to create personal profiles and build a network of connections with other users, enabling multidirectional communication and collaboration on the platform (Capurro, Cole, Echavarria, Joe, Neogi, et al., 2014). SNs enable users to generate and share content with others. As a mechanism for collaborative discussion and problem solving, SNs are a low-cost way to communicate rapidly and to promote social support and social influence.

Recent literature reviews have examined the use of SN platforms for public health practice and research, primarily in high- and middle-income country settings. Capurro, et al. (2014) found that SN sites were mainly used to reach hard-to-reach populations, promote healthy behaviors, and support disease surveillance and communications during natural disasters. However, there is the potential for SN platforms to be used as “persuasive technology,” helping to change user attitudes or behaviors through persuasion and social influence (Halko & Kientz, 2010). Connecting groups of people can be a means to provide social and emotional support, advice, and education, which can promote healthy behaviors. Literature reviews conducted in 2014 and 2015 on the use and effectiveness of SN sites for health behavior change found that SN interventions positively affect health behaviors (Maher, Lewis, Ferrar, Marshall, De Bourdeaudhuij, et al., 2014; Laranjo, Arguel, Neves, Gallagher, Kaplan, et al., 2015). In these cases, most interventions involved information sharing and advice, with only one involving data sharing to promote accountability and friendly social competition (Foster, Linehan, Kirman, Lawson & James, 2010).

Significant human and financial resources have been invested in information systems, with the goal of producing high-quality data that are used to meet decision-making needs at all levels of a health system. For data to be used for decision making, they must be of high quality (i.e., available, timely, and complete), and then analyzed, synthesized, interpreted, and reviewed (Nutley & Reynolds, 2013). These are the key elements of the data use process. The data use process is shaped by the confluence of technical, organizational, and behavioral factors that facilitate or constrain the use of data. For example, data interpretation and review may be impeded because mechanisms for review (e.g., meetings) occur infrequently and require resources (e.g., time and funding). Moreover, data analysis and interpretation skills may be limited. Organizations may also prioritize data quality and reporting without ensuring that these efforts lead to the use of information.

Social network platforms can help to overcome barriers to data use, by providing a mechanism for diverse types of users to interact, share information and feedback, and review and discuss data. MEASURE Evaluation explored how SN platforms are being used to improve data collection, data quality, and data review and interpretation, and how their potential can be harnessed to facilitate data-informed decision making.

Access the resource.

Civil Registration and Vital Statistics System – End-of-Project Assessment Report

The MEASURE Evaluation PIMA project’s goal was to assist the Government of Kenya to strengthen monitoring and evaluation systems, including the civil registration system, which is the basis of all vital statistics in Kenya. The project targeted four main areas: (1) increasing the monitoring and evaluation capacity of the Department of Civil Registration Services, (2) expanding birth and death registration coverage, (3) improving data quality, and (4) enhancing use of quality vital statistics for evidence-based decision making at national and county levels. This scope was informed by the project’s 2013 baseline civil registration and vital statistics system assessment and a separate assessment of the capacity of the department to undertake monitoring and evaluation functions. The recommendations from these assessments, coupled with objectives prioritized in the Department of Civil Registration Services Strategic Plan 2013–2017, guided development of the project’s interventions. In the project’s last year of implementation, MEASURE Evaluation PIMA sought to assess the status of the civil registration system. This end-of-project assessment aims to determine the extent to which support for the system has improved availability and use of quality vital statistics among stakeholders at different levels, while also recognizing the broader legal and administrative challenges inherent in ensuring a functioning system.

The assessment involved a desk review of available documents and onsite analysis of civil registration processes and the electronic system at select civil registration offices. Structured interviews with key informants—including staff from the Department of Civil Registration Services, registrars at the county level, personnel in select county departments of health, and implementing partners—were conducted. A focus group discussion was held with select local registration agents in Kakamega County. Quantitative data were extracted from vital statistics reports, routine monitoring reports, and the health information system. These data were analyzed using Microsoft Excel, and the analysis involved computation of basic descriptive indicators defined in the project’s performance monitoring plan.
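The basic descriptive indicators mentioned above can be illustrated with a short sketch. The indicator definition (registered events as a percentage of estimated events) and the county figures below are hypothetical examples, not numbers from the assessment, and the same computation could be done in Excel as the report describes.

```python
# Illustrative sketch only: the indicator definition and the county
# counts below are assumptions for this example, not figures from
# the project's performance monitoring plan.

def registration_coverage(registered: int, estimated_events: int) -> float:
    """Registered vital events as a percentage of estimated events."""
    if estimated_events <= 0:
        raise ValueError("estimated_events must be positive")
    return 100.0 * registered / estimated_events

# Hypothetical county-level counts for one reporting year:
# (registered events, estimated events)
counties = {"County A": (8_200, 10_000), "County B": (4_500, 9_000)}
for name, (registered, estimated) in counties.items():
    print(f"{name}: {registration_coverage(registered, estimated):.1f}%")
```

Tracking an indicator like this per county and per year is what makes gaps in registration coverage visible for follow-up.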

This report outlines findings from the assessment and provides recommendations on how gaps in specified aspects of the system can be bridged. Specifically, the assessment reveals commendable efforts to strengthen the civil registration and vital statistics system, which have resulted in improvements in the quality of statistics produced. Vital statistics are readily available, and reporting by government agencies has been harmonized. Guidelines implemented for certifying and coding causes of death have resulted in the availability of higher quality cause-of-death information from health facilities. Data quality assurance procedures need to be improved, however, to increase reporting and enable use of mortality statistics at the international level. The report provides documentation on project achievements and lessons learned.