Medical record review is a common data collection method in epidemiologic research. Although investigators at sites in various geographic locations often collaborate to include diverse populations and enhance the generalizability of study results, idiosyncrasies of site data and differences in the quality of data abstraction between sites can introduce variability. Abstractor training is a key element in minimizing interobserver variability and creating reliable data collection procedures. The choice between in-person and remote, or simultaneous and sequential, abstractor training has considerable consequences for time and resource utilization. Advances in information technology have produced readily available, low-cost, efficient alternatives to traditional training approaches. Faced with the challenges of collecting complex medical record data for a follow-up study of breast cancer treatment effects in older women (BOWII), we conducted a web-based (webinar) abstractor training session to standardize training across six Cancer Research Network (CRN) sites. An evaluation of this web-based platform for multi-site medical record review instruction should be valuable to researchers planning or selecting approaches to abstractor training.
Web-based training is increasingly used in educational and business settings as an effective, low-cost method to teach students and train employees. Although the literature suggests that online training is as effective as, or slightly more effective than, in-person/classroom-based instruction for cognitive and procedural learning [1–3], participant-reported satisfaction with online instruction has been mixed [2, 3]. For example, some studies have found that participants report higher or comparable levels of satisfaction with online training courses compared to in-person instruction [2, 4], while other studies have found participants reporting less satisfaction with online instruction [1, 5]. Differences in satisfaction with online instruction may be explained by participants' familiarity with the subject content and prior experience with this training modality [6, 7]. Although some research has assessed the effectiveness of online medical and nurse training programs [8–11], very little has been published on medical record abstractor training [12, 13], and, to the best of the authors' knowledge, no published studies have examined participant-reported effectiveness of and satisfaction with web-based training for medical record abstraction. To address this gap in the literature, we report our experience with a simultaneous web-based medical record review training session conducted for epidemiological research purposes across six study sites. The goals of this manuscript are to describe the web-based training session, its participants, and the participants' evaluation of webinar technology for abstraction training.
Study Methods
The BOWII multi-site cohort study is a follow-up to an existing study cohort (BOWI), extending data collection through five additional years of follow-up and adding a comparison cohort. A detailed description of the BOWI sampling and data collection procedures has been published elsewhere [14]. BOWII included women 65+ years of age diagnosed between 1990 and 1994 with stage I or II breast cancer (N = 1405, breast cancer cases) and matched comparisons (N = 1405, women without breast cancer), followed for a maximum of 15 years. Comparisons were matched to breast cancer cases on age, study site, and year of breast cancer diagnosis.
The study was conducted at six CRN sites in the United States of America (USA): Kaiser Permanente, Southern California; Group Health Cooperative, Seattle, Washington; Henry Ford Health System, Detroit, Michigan; HealthPartners, Minneapolis, Minnesota; Fallon Community Health Plan, Worcester, Massachusetts; and Lovelace Health System, Albuquerque, New Mexico. The CRN is a consortium of 14 integrated health care delivery systems with over 11 million enrollees; its overall goal is to improve the effectiveness of preventive, curative, and supportive interventions for both major cancers and rare tumors. The study protocol was approved by the institutional review board at each participating CRN site.
Medical Record Abstraction Instrument
BOWII data collection was conducted via medical record review and focused on capturing information related to follow-up care and late treatment effects. The study's electronic data collection system (DCS2) consisted of a Microsoft Access database requiring direct data entry of over 600 data elements covering five content areas: 1) demographics, 2) surveillance visits, 3) surveillance mammography, 4) recurrences and/or subsequent breast cancer diagnoses, and 5) comorbidities. The DCS2 captured detailed information on breast cancer cases' follow-up visits and mammography screenings, such as visit dates, reasons for visits, the type of practitioner seen, and whether a clinical breast exam was performed during each visit. It also captured whether women had a recurrence and/or second primary breast cancer, as well as comorbidities existing before or developing after the initial breast cancer diagnosis. Comorbidities and invasive malignancies, including breast cancer, were captured for matched comparison subjects. Instructions on how to identify and code each data element contained in the DCS2 were thoroughly documented in the DCS2 coding manual, including the data element number, definition and synonymous terms, coding ranges, and directives.
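For illustration only, a coding-manual entry of this kind could be represented as in the following minimal sketch. The field names and the example element are hypothetical; the actual DCS2 and coding manual were implemented in Microsoft Access, not in this form.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class DataElement:
    """One coding-manual entry (hypothetical structure; names are illustrative)."""
    number: int                 # data element number
    definition: str             # what the element means
    synonyms: List[str] = field(default_factory=list)   # synonymous terms found in charts
    coding_range: Optional[Tuple[int, int]] = None      # allowed code values
    directives: str = ""        # special coding instructions

# Invented example in the spirit of the DCS2's surveillance-visit elements:
cbe_performed = DataElement(
    number=123,
    definition="Whether a clinical breast exam was performed during the visit",
    synonyms=["CBE", "breast exam"],
    coding_range=(0, 1),        # 0 = no, 1 = yes
    directives="Code 1 only if the exam is explicitly documented in the visit note.",
)
```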
Medical Record Abstractor Training
A single three-hour webinar session was held for all six data collection sites, with the primary purpose of simultaneously training all study personnel and ensuring consistent approaches to abstraction across sites. Webinar participants (hereafter referred to as participants) consisted of chart abstractors as well as other team members critical to the success of the study, including study investigators and project coordinators. Participants connected to the online training session either from their own computers or from a computer set up in a conference room, where the session was projected onto a screen for simultaneous viewing.
Abstractor training was led by a single instructor, a Registered Health Information Technician and Certified Tumor Registrar with over 10 years' experience conducting research-related medical record reviews, including abstraction for the BOWI study. Training focused on medical record review and the capture of data elements as defined in the coding manual, with data entered directly into the electronic DCS2.
Prior to the training session, participants from the sites were asked to pilot test several medical record abstractions using the DCS2. Both content and system issues identified during pilot testing were sent to the instructor prior to the webinar. These issues and their resolutions were compiled into a standardized question-and-answer (Q&A) form and disseminated to participants for review, in preparation for discussion and demonstration during the webinar.
The web-based training session involved sequential review of each data element outlined in the coding manual in conjunction with the display of the corresponding data entry field (including drop-down menus and labels) in the DCS2. The instructor demonstrated how to navigate the data collection instrument and capture each of the data elements contained in the tool. The instructor then addressed each question on the Q&A form while simultaneously navigating through the DCS2 forms displayed on the screen. The instructor engaged trainees in discussion by providing helpful hints based on personal chart review experience, and facilitated communication by asking participants whether they had questions, comments, or suggestions to share with the group. The goal of the discussion was to create a common understanding of the data elements and ensure consistency of data collection across abstractors and sites.
Post-Webinar Training Evaluation
A post-training evaluation was conducted to assess the effectiveness of the webinar modality for data collection training and participant satisfaction with the training session. Following the webinar, participants were contacted via email with a request for feedback on the training session and a link to the survey. Evaluations were completed anonymously online using SurveyMonkey© and took approximately 10 to 15 minutes per person to complete. (See Additional File 1 for the participant survey.)
Post-Webinar Training Support
Approximately one week after the webinar training, the instructor conducted a follow-up conference call with each of the six sites to resolve any remaining site-specific issues or new questions that arose with the commencement of data collection. Subsequently, the instructor was available via email to answer any questions. To share the resolution of issues and ensure consistency of data capture across sites, the instructor held monthly multi-site conference calls. Prior to each call, the instructor compiled any new questions received from the sites, together with their resolutions, and distributed the updated Q&A form for discussion. The Q&A form documented the issues raised and the decisions made, and served as reference material for the abstractors. Conference calls were held for the first six months of data collection, with the majority of questions addressed during the first three months. The monthly calls were discontinued after six months, as new issues became infrequent and abstractors became more experienced.
Post-Webinar Technical Support
Each abstractor was issued his or her own copy of the DCS2 for data collection. Minimal technical support was required, as almost all programming issues were identified and resolved during pilot testing. In the rare instance when an abstractor did experience a technical difficulty, the problem was quickly resolved by the DCS2 developer at the lead site. Consequently, delays in data collection were minimal.
Inter-Rater Reliability
Because the BOWII study included multiple abstractors at multiple sites, inter-rater reliability was assessed for each abstractor within each participating site. The assessment was conducted approximately three months after the commencement of data collection, once each abstractor had completed a minimum of 40 medical record reviews (20 breast cancer cases and 20 comparisons). The inter-rater reliability electronic data capture system (IRR2) was developed in Microsoft Access with front-end views similar to those of the DCS2 and contained a subset of 48 key data elements capturing reasons for end-of-study follow-up, breast cancer recurrence, surveillance mammography, and comorbidity data.
A sample of ten medical records completed by each abstractor (5 breast cancer cases and 5 comparisons) was randomly selected, and a designated independent rater from each site re-abstracted the 48 data elements into the IRR2 system to evaluate data quality. The database was uploaded to a secure website for download and analysis by the study statistician. The re-abstracted value for each data element was compared with the originally abstracted value, and percent agreement was computed for each abstracted medical record and for the entire study.
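To illustrate the agreement calculation, the following is a minimal sketch in Python. The record layout, the element names, and the pooling rule for the study-wide figure are assumptions made for illustration; the study's actual computation was performed by the statistician from the uploaded Access databases.

```python
import random

def percent_agreement(original: dict, reabstracted: dict) -> float:
    """Percent of data elements on which the two abstractions of one record agree."""
    matches = sum(1 for k in original if original[k] == reabstracted.get(k))
    return 100.0 * matches / len(original)

# Toy stand-ins for one abstractor's completed records: each record is a pair of
# element -> value dicts (original, re-abstracted); 3 elements stand in for the 48.
case_records = [
    ({"recurrence": 0, "mammography": 1, "comorbidity": 2},
     {"recurrence": 0, "mammography": 1, "comorbidity": 3}),
] * 20
comparison_records = [
    ({"recurrence": 0, "mammography": 0, "comorbidity": 1},
     {"recurrence": 0, "mammography": 0, "comorbidity": 1}),
] * 20

# Random selection of the IRR sample: 5 cases and 5 comparisons per abstractor.
sample = random.sample(case_records, 5) + random.sample(comparison_records, 5)

per_record = [percent_agreement(o, r) for o, r in sample]
# Study-wide agreement, pooled here over all element-level comparisons (an assumption):
pooled = 100.0 * sum(o[k] == r.get(k) for o, r in sample for k in o) \
         / sum(len(o) for o, _ in sample)
print(per_record, round(pooled, 1))  # e.g., five records at ~66.7, five at 100.0; pooled 83.3
```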