Perceived usefulness of a distributed community-based syndromic surveillance system: a pilot qualitative evaluation study
© Reeder et al; licensee BioMed Central Ltd. 2011
Received: 18 February 2011
Accepted: 14 June 2011
Published: 14 June 2011
We conducted a pilot utility evaluation and information needs assessment of the Distribute Project at the 2010 Washington State Public Health Association (WSPHA) Joint Conference. Distribute is a distributed community-based syndromic surveillance system and network for detection of influenza-like illness (ILI). Using qualitative methods, we assessed the perceived usefulness of the Distribute system and explored areas for improvement. Nine state and local public health professionals participated in a focus group (n = 6) and in semi-structured interviews (n = 3). Field notes were taken, summarized and analyzed.
Several emergent themes that contribute to the perceived usefulness of system data and the Distribute system were identified: 1) Standardization: a common ILI syndrome definition; 2) Regional Comparability: views that support county-by-county comparisons of syndromic surveillance data; 3) Completeness: complete data for all expected data at a given time; 4) Coverage: data coverage of all jurisdictions in WA state; 5) Context: metadata incorporated into the views to provide context for graphed data; 6) Trusted Data: verification that information is valid and timely; and 7) Customization: the ability to customize views as necessary. As a result of the focus group, a new county level health jurisdiction expressed interest in contributing data to the Distribute system.
The resulting themes from this study can be used to guide future information design efforts for the Distribute system and other syndromic surveillance systems. In addition, this study demonstrates the benefits of conducting a low cost, qualitative evaluation at a professional conference.
Distribute is a community-based, population-level public health information system for syndromic influenza-like illness (ILI) surveillance that displays aggregated, de-identified public health surveillance data collected from emergency departments (EDs) by state and local health jurisdictions. Distribute was first organized by the International Society for Disease Surveillance (ISDS) in 2006 as a proof of concept. With support from the Markle Foundation and the United States Centers for Disease Control and Prevention (CDC), Distribute grew from 8 participating state and large metropolitan health jurisdictions, representing summarized data on 10% of all US ED visits, to a nation-wide system that currently receives data from 43 health jurisdictions, represents over 50% of the US population and summarizes more than 35% of all ED visits nationwide. Distribute currently has participation from all ten Health and Human Services (HHS) surveillance regions, includes data from over one million ED visits each week, and displays updated visualizations of ILI trends in the US on Public and Restricted access web sites.
Distribute serves as an example of a new paradigm in the collection and sharing of public health surveillance data. Automated syndromic surveillance systems took root just prior to the 2001 anthrax attacks, with systems that automatically classify clinic visits and other data according to loose "syndromic" criteria and present graphic and statistical views of summarized visit counts, both as raw numbers and in proportion to population or health care utilization denominators. The literature describes both the early experience and growth of these systems and their evolving design and implementation. With the development of health information exchanges (HIEs), and of methods for structuring and accessing regional data across multiple health care systems, public health gained access to larger sources of both visit-level and summarized "syndromic" data. In part, Distribute developed as a way for health departments to share and compare these summarized, syndromic data, regardless of whether those data were obtained by integrating data from individual providers or hospitals, or from a single large hospital network or HIE.
In the early development of the Distribute project, the apparent benefits of mandated standards for syndrome definitions were weighed against two often overlooked issues: barriers to entry and the ability to compare data across jurisdictions. The need to adopt a mandated standard prior to joining the network created a potential technical barrier that could delay or prevent interested jurisdictions from participating. In addition, although mandated standard syndrome definitions could improve data comparison on average across the whole network, there was concern on the part of project participants that this practice might decrease accuracy and utility locally. That is, a local definition of a syndrome might best reflect local variations in coding or clinical practice that were reflected in the data, and might most accurately reflect the underlying disease being tracked. To address these issues, the Distribute project adopted the use of two separate syndromes: 1) a more narrow and specific definition, following a traditional clinical definition of ILI, and 2) a more sensitive definition: a broad febrile, respiratory and influenza-like syndrome. In a preliminary comparison, two Distribute participating sites shared local coding of their narrow and broad ILI syndrome definitions and applied each other's definitions to their own local data. The pilot findings suggested that data using locally applied syndromes were better correlated with population-level viral surveillance data [8, 9].
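The cross-site comparison described above rests on correlating weekly syndrome data with population-level viral surveillance data. A minimal sketch of that kind of check, using invented weekly series rather than the study's actual data, might compute a Pearson correlation between locally coded ILI visit proportions and laboratory percent-positive values:

```python
from math import sqrt

def pearson(xs, ys):
    # Plain Pearson correlation coefficient; assumes equal-length
    # series with non-zero variance.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly series (not from the study): proportion of ED
# visits matching a locally coded ILI syndrome, and proportion of lab
# specimens testing positive for influenza in the same weeks.
ili_local = [0.02, 0.03, 0.05, 0.09, 0.12, 0.08, 0.04]
lab_positive = [0.05, 0.07, 0.12, 0.22, 0.30, 0.18, 0.09]

r = pearson(ili_local, lab_positive)
print(f"correlation: {r:.2f}")
```

A higher correlation for a locally applied definition than for a borrowed one would be consistent with the pilot finding that local syndrome coding best tracks local viral activity.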
Utility and Usability
Utility and usability issues are related and often difficult to separate in the evaluation of information systems [10, 11]. Utility, or perceived usefulness, refers to the extent to which an information system or its output provides benefit or value [10–12]. Usability, or perceived ease of use, refers to the degree of effort required to use an information system or its output [10–12]. Many international standards for system design conflate the two, incorporating aspects of both utility and usability in a single definition. Because this project is not an interaction study, we focus on the utility, or perceived usefulness, of the Distribute system and its data outputs while acknowledging that usability contributes to utility.
Current initiatives of the Distribute project place a high priority on improving the utility and usability of the information system and extending functionality to support public health decision-making and practice. Qualitative methods are important in the evaluation of health information systems [14–16]. It is important to engage practitioners in a discussion of their needs and proposed system features to mitigate common informatics risk factors for failed system adoption [17–19]. Following the idea that "the simplest way to assess usefulness is to ask those involved in public health practice", we engaged epidemiologists and other public health practitioners in a pilot study to collect quality improvement feedback for the Distribute system. This pilot study was undertaken to inform the design of a larger quality improvement investigation by including participants who were current members of the Distribute community of practice and those interested in learning more about the Distribute system.
Selected metadata elements from the Restricted site of the Distribute system:
• Organizational area view of the data: City, State, Region, Federal Region
• Syndrome definition preferred by the data provider for display of uploaded data: defined by data provider (examples: ILI-broad, ILI-narrow, GI-broad, GI-narrow, Temperature, Disposition)
• All syndrome definitions for data submitted by the data provider: defined by data provider (examples: ILI-broad, ILI-narrow, GI-broad, GI-narrow, Temperature, Disposition)
• Details of syndrome definitions for submitted data: defined by data provider
• Criteria by which data are stratified: defined by data provider (examples: age group, zip3, temperature, disposition)
• Facilities Sending Data: enumerated list of health care facilities that submit data to the data provider for aggregation; varies with the number of participating facilities in the provider's jurisdiction
• Facilities in Jurisdiction: enumerated list of health care facilities in the provider's jurisdiction; varies with the total number of facilities in the jurisdiction
• Types of facilities for which visit data are submitted: varies by data provider; typically emergency departments and urgent care facilities
• Typical Record Count: description of expected record volume based on historical patterns; varies by data provider
• Description of the population and the number and type of health care facilities in the jurisdiction: varies by data provider
• Local System Description: description of the local surveillance system (examples: ESSENCE II, EARS, RODS, BioSense)
• Data Source History: description of the onset date of data availability; varies by data provider
• Last Data Point Visualized: the last date for which data are available, plus a value calculated from that date; varies by data upload pattern
• Last Upload Date: the last date of data upload, plus a value calculated from that date; varies by data upload pattern
• Frequency with which a data provider typically uploads data: varies by data provider
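Two of the metadata elements above (Last Data Point Visualized, Last Upload Date) carry calculated timeliness values. A minimal sketch, with hypothetical dates rather than real provider metadata, of how such derived values could be computed:

```python
from datetime import date

def days_since(last: date, today: date) -> int:
    # Derived timeliness value: days elapsed since a provider's last
    # data point (or last upload).
    return (today - last).days

# Hypothetical provider metadata mirroring the elements above.
last_data_point = date(2010, 10, 8)   # last date with data available
last_upload = date(2010, 10, 10)      # last date data were uploaded
today = date(2010, 10, 11)

print(days_since(last_data_point, today))  # 3
print(days_since(last_upload, today))      # 1
```

Displaying these elapsed-days values alongside the raw dates gives viewers an at-a-glance sense of how current each provider's graphed data are.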
This study was conducted at the Washington State Public Health Association (WSPHA) Joint Conference on Health on October 11-12, 2010 in Yakima, WA USA. The conference is an annual meeting of public health practitioners that includes participants from the Washington State Department of Health and local health jurisdictions in the State of Washington. The study protocol received approval from the University of Washington Institutional Review Board.
Study participants by role, agency level, familiarity with Distribute and method of data collection (table columns: role, agency level, familiar with Distribute?, method of data collection). Roles represented included Communicable Disease Director, Public Health Planner and Health Officer (Former).
Types of questions asked of focus group and interview participants
• What does this graph tell you?
• What might be missing from this graph?
• Would graphed data like these have been useful during the 2009-2010 influenza A/H1N1 season?
• Would graphed data like these be useful during seasonal influenza time periods?
• Is this a good way to display the data?
• How might this graph be more useful?
• What do you like about Distribute?
• How useful is Distribute?
• How could Distribute be better?
Notes taken by Distribute team members during the focus group and interviews were summarized and analyzed to identify patterns and themes [28, 29]. Study notes were stripped of identifying information before analysis. Conference attendees were referred to by role and an assigned study code. Specific data extracted from study notes pertained to opinions of conference attendees about usefulness, suggested features and other recommendations for improvement of the Distribute system. Themes were identified from focus group and interview notes by grouping similar responses and creating names and descriptions of the groupings [26, 28].
Themes that contribute to information system and data usefulness:
• Standardization: a common fixed case definition of influenza-like illness (ILI)
• Regional Comparability: division of views into different regions to aid in comparisons
• Completeness: extent to which the expected data are provided
• Coverage: extent to which data are representative of populations to facilitate generalizations
• Context: group or macro-level variables that frame the data; also referred to as metadata
• Trusted Data: knowledge of the extent to which the data presented are valid and timely
• Customization: availability of data and processing features to meet the surveillance needs of the local or state health jurisdiction
With regard to standardization, participants recognized a need for a common influenza-like illness (ILI) syndrome definition in order to make comparisons between data sets more meaningful and relevant. For regional comparability, participants wanted to see views of different regions to aid in comparisons. In particular, they expressed a desire for views that support county-by-county comparisons of syndromic surveillance data, separate regional views of Western Washington and Idaho and views by preparedness regions as an alternative if representative views of HHS Regions were unavailable.
The completeness theme is described by participant desire for completeness in the data sets submitted by each data provider. Coverage refers to participant desire to know that data are representative of a given population in order to generalize findings across the population. In particular, participants expressed a need for data coverage of all of Washington State. Participants noted that during the second wave of H1N1, the eastern side of the state, which includes one-third of the population, initially saw two-thirds of all cases (consistent with the graph of data for Eastern Washington in Distribute). One participant noted minimal use of Distribute due to a lack of close neighbors for comparison. Context refers to the expressed need for metadata incorporated into views to facilitate understanding of graphed data. Participants suggested inclusion of the number of hospitals, emergency departments, patients, data providers and denominators for the total number of ED visits as contextual information in the graphed data views.
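The completeness and coverage themes lend themselves to simple summary figures. The sketch below is purely illustrative, with invented counts, and is not a description of how Distribute computes anything:

```python
# Completeness as the share of expected uploads actually received, and
# coverage as the share of jurisdiction facilities submitting data.
# All counts are hypothetical.
expected_uploads = 7           # e.g. one upload per day over a week
received_uploads = 6
facilities_in_jurisdiction = 20
facilities_sending_data = 14

completeness = received_uploads / expected_uploads
coverage = facilities_sending_data / facilities_in_jurisdiction

print(f"completeness: {completeness:.0%}")  # 86%
print(f"coverage: {coverage:.0%}")          # 70%
```

Surfacing figures like these in the graphed views would directly address the participants' requests for metadata that frames how much of the expected data, and of the underlying population, a graph actually represents.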
Participants expressed a need for trusted data, that is, confidence that data are valid and timely. They wanted to know that data were verified through defined quality assurance processes that are conducted on a regular basis. Participants reported that these data are useful for consistent events such as seasonal influenza and local data could be used to declare a local epidemic. Consistent, reliable data were cited as more useful to stand-down from an emergency than to issue an initial alert; participants hypothesized that during H1N1 the data in Distribute might have been more useful after confirmation of an actual event occurrence. Customization of available data and data processing capabilities to meet the surveillance needs of each local and state health jurisdiction was requested by participants. The ability to overlay graphs with other graphs and create labels on request was envisioned as a useful feature.
Participants acknowledged the value of the Public site as a tool to view national trends. One participant cited the need for a surveillance system with a low-impact training cost that anyone can use and that is largely automated to minimize maintenance. Requests for additional data viewed as overlays to graphs included: metadata already available elsewhere in the system, county school absenteeism rates and a state view that includes data from all clinics in the Group Health Cooperative health care system. An additional result of the focus group was the expressed desire to participate as a data contributor by one participant from a county-level health jurisdiction.
The limitations of this pilot study include its restricted time frame for data collection and the regional population from which the sample was drawn.
Our results suggest themes that can be used to guide future evaluation and design iterations to improve support for public health surveillance. These results are important for improvements to syndromic surveillance of influenza-like-illness in the Distribute system but can also help improve syndromic surveillance efforts overall, regardless of the disease or surveillance system. For example, gastrointestinal (GI) indicators are currently being piloted in the Distribute system as a demonstration of system extensibility for surveillance of other diseases. Themes from this qualitative evaluation study can inform GI syndromic surveillance efforts as they are expanded within Distribute or any other surveillance system. These themes should be further explored by including public health practitioners in information design efforts. In addition, this study demonstrates that the application of qualitative methods in an "evaluation of opportunity" at a public health practice gathering can be a simple way to solicit feedback for the improvement of a working public health information system. Lastly, we found that efforts of this type can be useful in recruiting new users to participate in the system and expand the community of practice.
The community-based approach employed in the Distribute project focuses on data use and has resulted in convergence toward a recognized need for a common influenza-like illness (ILI) syndrome definition to compare data sets across jurisdictions among the Distribute community of practice. The findings of this pilot study are consistent with this trend. However, to maintain local utility of data, existing data providers need not, and should not, abandon prior syndrome definitions, but rather should submit an additional common definition while continuing to send data aggregated by existing syndrome definitions that have local meaning.
Syndromic surveillance data, where available, are used by public health practitioners as early indicators of influenza outbreaks within their own and adjacent health jurisdictions. These data are used in conjunction with other data sources, such as laboratory results, to triangulate disease prevalence. To aid decision-making for interventions that help contain outbreaks, improved data access and visualizations for syndromic surveillance data are needed. The context in which data are used for individual tasks is important [30, 31], and data quality cannot be assessed independent of the people who use the data. Information systems are part of the contexts of use for data; the utility and usability of these systems are factors in the utility and usability of the data [13, 33, 34]. Three contexts of use for syndromic surveillance information systems - routine, anticipated threat and present threat - have been recognized as key inputs to tasks for analysis and characterization of syndromic surveillance data for decision-making. In addition, our pilot results indicate that contextual information - metadata related to hospitals, patients, data providers, ED visit counts, etc. - helps epidemiologists derive meaning from syndromic surveillance data.
To better understand contexts of information system and data use, future efforts should identify the specific ways in which epidemiologists and others use metadata to discern meaning from data, the best ways to include annotations in data visualizations and different ways to display information for population health surveillance. Interviews with a larger number of participants will help refine the specific meanings of our themes, gauge reactions to anticipated results from common ILI syndrome definition efforts and explore specific needs around regionalization and other identified themes. Future work to engage a more geographically diverse population of participants will help validate these results outside the pilot region of Washington State. This future work will be informed by pilot results in two related areas: 1) usability studies to improve the design of Distribute as an information resource that an epidemiologist might check before making a phone call to a colleague in a different jurisdiction or region and 2) utility studies to assess the value of Distribute to participants, their organizations and community population health outcomes.
The authors gratefully acknowledge the International Society for Disease Surveillance as the financial sponsor of this study and the Markle Foundation and CDC for supporting the Distribute project. We thank Rebecca Hills, PhD (Cand.), MSPH for her contributions during the study and Laura Streichert, PhD, MPH and Janet Baseman, PhD, MPH for their comments on drafts of the manuscript. We also thank the two reviewers of this manuscript for their helpful suggestions. This study would not have been possible without the public health professionals who donated their time to share information about their work and participation of the Distribute community of practice.
- Olson DR, Paladini M, Buehler J, Mostashari F: Review of the ISDS Distributed Surveillance Taskforce for Real-time Influenza Burden Tracking & Evaluation (DiSTRIBuTE) Project 2007/08 Influenza Season Proof-of-concept Phase. Advances in Disease Surveillance. 2008, 5: 185.
- Diamond CC, Mostashari F, Shirky C: Collecting And Sharing Data For Population Health: A New Paradigm. Health Affairs. 2009, 28: 454. 10.1377/hlthaff.28.2.454.
- Lober WB, Karras BT, Wagner MM, Overhage JM, Davidson AJ, Fraser H, Trigg LJ, Mandl KD, Espino JU, Tsui FC: Roundtable on bioterrorism detection: information system-based surveillance. J Am Med Inform Assoc. 2002, 9: 105-115. 10.1197/jamia.M1052.
- Mandl KD, Overhage JM, Wagner MM, Lober WB, Sebastiani P, Mostashari F, Pavlin JA, Gesteland PH, Treadwell T, Koski E: Implementing Syndromic Surveillance: A Practical Guide Informed by the Early Experience. J Am Med Inform Assoc. 2004, 11: 141.
- Lober WB, Trigg L, Karras B: Information system architectures for syndromic surveillance. MMWR Morb Mortal Wkly Rep. 2004, 53 (Suppl): 203-208.
- Hills RA, Lober WB, Painter IS: Workshop: Biosurveillance, Case Reporting, and Decision Support: Public Health Interactions with a Health Information Exchange. 2008.
- Olson DR, Heffernan RT, Paladini M, Konty K, Weiss D, Mostashari F: Monitoring the Impact of Influenza by Age: Emergency Department Fever and Respiratory Complaint Surveillance in New York City. PLoS Med. 2007, 4: e247. 10.1371/journal.pmed.0040247.
- Pendarvis J, Murray EL, Paladini M, Gunn J, Olson DR: Age Specific Correlations between Influenza Laboratory Data and Influenza-like Syndrome Definitions in Boston and New York City. Advances in Disease Surveillance. 2008, 5: 53.
- Paladini M, Pendarvis J, Murray EL, Gunn J, Olson DR: A Comparison of Locally Developed Influenza-like Syndrome Definitions Using Electronic Emergency Department Data in Boston and New York City. Advances in Disease Surveillance. 2008, 5: 50.
- Grudin J: Utility and usability: research issues and development contexts. Interacting with Computers. 1992, 4: 209-217. 10.1016/0953-5438(92)90005-Z.
- McLaughlin J, Skinner D: Developing Usability and Utility: A Comparative Study of the Users of New IT. Technology Analysis & Strategic Management. 2000, 12: 413-423. 10.1080/09537320050130633.
- Davis FD: Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly. 1989, 13.
- Bevan N: International standards for HCI and usability. International Journal of Human-Computer Studies. 2001, 55: 533-552. 10.1006/ijhc.2001.0483.
- Ash JS, Guappone KP: Qualitative evaluation of health information exchange efforts. Journal of Biomedical Informatics. 2007, 40: S33-S39. 10.1016/j.jbi.2007.08.001.
- Rose AF, Schnipper JL, Park ER, Poon EG, Li Q, Middleton B: Using qualitative studies to improve the usability of an EMR. Journal of Biomedical Informatics. 2005, 38: 51-60. 10.1016/j.jbi.2004.11.006.
- Hill HK, Stewart DC, Ash JS: Health Information Technology Systems profoundly impact users: a case study in a dental school. Journal of Dental Education. 2009, 74: 434-445.
- Wells S, Bullen C: A near miss: the importance of context in a public health informatics project in a New Zealand case study. J Am Med Inform Assoc. 2008, 15.
- Littlejohns P, Wyatt JC, Garvican L: Evaluating computerised health information systems: hard lessons still to be learnt. BMJ. 2003, 326: 860-863. 10.1136/bmj.326.7394.860.
- Heeks R: Health information systems: Failure, success and improvisation. International Journal of Medical Informatics. 2006, 75: 125. 10.1016/j.ijmedinf.2005.07.024.
- Thacker SB, Parrish RG, Trowbridge FL: A method for evaluating systems of epidemiological surveillance. World Health Statistics Quarterly. 1988, 41: 11-18.
- Lombardo J, Burkom H, Elbert E, Magruder S, Lewis SH, Loschen W, Sari J, Sniegoski C, Wojcik R, Pavlin J: A systems overview of the Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE II). Journal of Urban Health: Bulletin of the New York Academy of Medicine. 2003, 80: 32-42.
- Hutwagner L, Thompson W, Seeman GM, Treadwell T: The Bioterrorism Preparedness and Response Early Aberration Reporting System (EARS). Journal of Urban Health: Bulletin of the New York Academy of Medicine. 2003, 80: 89-96.
- Tsui FC, Espino JU, Dato VM, Gesteland PH, Hutman J, Wagner MM: Technical description of RODS: a real-time public health surveillance system. J Am Med Inform Assoc. 2003, 10: 399-408. 10.1197/jamia.M1345.
- Bradley CA, Rolka H, Walker D, Loonsk J: BioSense: Implementation of a National Early Event Detection and Situational Awareness System. MMWR Morbidity & Mortality Weekly Report. 2005, 11-19.
- Blomberg J, Giacomi J, Mosher A, Swenton-Wall P: Ethnographic Field Methods and Their Relation to Design. Participatory Design: Principles and Practices. Edited by: Schuler D, Namioka A. 1993, Hillsdale, NJ: L. Erlbaum Associates, 123-155.
- Krueger RA, Casey MA: Focus Groups: A Practical Guide for Applied Research. 2000, Thousand Oaks, CA: Sage Publications.
- Ulin PR, Robinson ET, Tolley EE: Qualitative Methods in Public Health: A Field Guide for Applied Research. 2005, San Francisco, CA: Jossey-Bass.
- Boyatzis RE: Transforming Qualitative Information: Thematic Analysis and Code Development. 1998, Thousand Oaks, CA: Sage Publications.
- Miles MB, Huberman AM: Qualitative Data Analysis: An Expanded Sourcebook. 1994, Thousand Oaks, CA: Sage Publications.
- Wang RY, Strong DM: Beyond Accuracy: What Data Quality Means to Data Consumers. Journal of Management Information Systems. 1996, 12: 5.
- Bevan N: Quality in use: Meeting user needs for quality. Journal of Systems & Software. 1999, 49: 89-96.
- Strong DM, Lee YW, Wang RY: Data Quality in Context. Communications of the ACM. 1997, 40: 103.
- Orr K: Data quality and systems theory. Communications of the ACM. 1998, 41: 66-71.
- Boddy D, King G, Clark JS, Heaney D, Mair F: The influence of context and process when implementing e-health. BMC Medical Informatics and Decision Making. 2009, 9: 1-9. 10.1186/1472-6947-9-1.
- International Society for Disease Surveillance (ISDS) Meaningful Use Workgroup: Final Recommendation: Core Processes and EHR Requirements for Public Health Syndromic Surveillance. 2010.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.