- Short Report
- Open Access
Effective implementation of research into practice: an overview of systematic reviews of the health literature
BMC Research Notes volume 4, Article number: 212 (2011)
The gap between research findings and clinical practice is well documented and a range of interventions has been developed to increase the implementation of research into clinical practice.
We conducted a review of systematic reviews of the effectiveness of interventions designed to increase the use of research in clinical practice. A search for relevant systematic reviews was conducted in Medline and the Cochrane Database of Reviews 1998-2009. Thirteen systematic reviews containing 313 primary studies were included. Four strategy types were identified: audit and feedback; computerised decision support; opinion leaders; and multifaceted interventions. Nine of the reviews reported on multifaceted interventions. This review highlights the small effects of single interventions such as audit and feedback, computerised decision support and opinion leaders. Systematic reviews of multifaceted interventions claim an improvement in effectiveness over single interventions, with effect sizes ranging from small to moderate. This review found that a number of published systematic reviews fail to state whether the recommended practice change is based on the best available research evidence.
This overview of systematic reviews updates the body of knowledge relating to the effectiveness of key mechanisms for improving clinical practice and service development. Multifaceted interventions are more likely to improve practice than single interventions such as audit and feedback. This review identified a small literature focusing explicitly on getting research evidence into clinical practice. It emphasizes the importance of ensuring that primary studies and systematic reviews are precise about the extent to which the reported interventions focus on changing practice based on research evidence (as opposed to other information codified in guidelines and education materials).
Background
Despite significant investment in health research, challenges remain in translating this research into policies and practices that improve patient care. The gap between research findings and clinical practice is well documented [1, 2] and a range of interventions has been developed to increase the implementation of research into health policy and practice. In particular, clinical guidelines, audit and feedback, continuing professional education and financial incentives are widely used and have been extensively evaluated.
Systematic reviews of existing research provide a rigorous method for assessing the relative effectiveness of different interventions that seek to implement research evidence into healthcare practice. A review by Grimshaw et al. identified a range of strategies for changing provider behaviour, ranging from educational interventions, audit and feedback and computerised decision support to financial incentives and combined interventions. The authors concluded that all the interventions had the potential to promote the uptake of evidence in practice, although no one intervention seemed to be more effective than the others in all settings.
This overview of systematic reviews of the health literature on the effectiveness of currently used implementation methods in translating research findings into practice provides a focused update of Grimshaw et al.'s 2001 review. We detect a growing assumption that interventions designed to improve clinical practice and service development are always based on the best quality evidence, something that the pioneers of evidence-based medicine went to great lengths to point out was not (and was never likely to be) the case. We investigate whether any methods were effective in implementing research evidence. We excluded systematic reviews focusing on achieving change that were not sufficiently explicit about their evidence base. We want to know the effectiveness of implementation methods in translating evidence-based findings into practice, as opposed to other non-evidence-based changes.
Methods
We searched Medline and the Cochrane Database of Reviews 1998-2009 using the search strategy employed by Grimshaw et al. Searches from 1966-July 1998 were completed by Grimshaw et al. for their earlier review. Full details of the data extraction process are given in Figure 1.
Systematic reviews are conducted to a set of consistent, transparent quality standards. As such, only systematic reviews were included in the review. In line with Francke et al., reviews were considered to be systematic reviews if they met at least two of the following criteria: search terms were included; the search included Pubmed/Medline; the methodological quality of the studies was assessed as part of the review. We included reviews that focused on the implementation of research evidence into practice. Study populations included healthcare providers and patients. Numerous interventions were assessed, including clinical guidelines, audit and feedback, continuing professional education, financial incentives, use of opinion leaders and multifaceted interventions. Some systematic reviews included comparisons of different interventions, while others compared one type of intervention against a control group. Outcomes related to improvements in process or patient well-being. Numerous individual study types (RCT, CCT, BA, ITS) were included within the systematic reviews (see Additional file 1).
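The "at least two of three criteria" inclusion rule described above can be sketched in code. This is an illustrative sketch only, not a tool used in the review; the field names are invented for the example.

```python
# Hypothetical sketch of the Francke et al. inclusion rule: a review counts
# as "systematic" if it meets at least two of the three criteria.
# Field names are illustrative, not taken from the paper.

from dataclasses import dataclass

@dataclass
class Review:
    reports_search_terms: bool      # search terms were included
    searched_pubmed_medline: bool   # search included Pubmed/Medline
    assessed_study_quality: bool    # methodological quality was assessed

def is_systematic(review: Review) -> bool:
    criteria = (
        review.reports_search_terms,
        review.searched_pubmed_medline,
        review.assessed_study_quality,
    )
    # bool sums as 0/1, so this counts how many criteria are met
    return sum(criteria) >= 2

# Example: meets two of the three criteria, so it qualifies.
candidate = Review(True, True, False)
print(is_systematic(candidate))  # True
```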
We excluded systematic reviews that did not look explicitly at interventions designed to get research evidence into practice. However, this is far from straightforward in the field of healthcare, where the principle of evidence-based practice is widely acknowledged and tools to change behaviour such as guidelines are often seen as an implicit codification of evidence, despite the fact that this is not always the case. Systematic reviews that explored changes in provider behaviour but did not state that the changes were research based were excluded. Systematic reviews were excluded that made no mention of research evidence, as were papers that were unclear about the use of research evidence [8, 9]. Studies that focused on evidence-based interventions but failed to report on the evidence base were also excluded. One systematic review was also excluded as it focused on changing patient rather than provider behaviour and a second was excluded as it had no demonstrable outcomes.
Fifty-eight systematic reviews were read by members of the research team (either AF and AB or AF and JB) and 45 of these were excluded following discussion between all three members of the team in order to reduce the risk of bias. Thirty-one systematic reviews were excluded because they did not look explicitly at interventions designed to get research evidence into practice. Six systematic reviews were excluded because of unclear information about the evidence base of some individual studies within the review. In five cases it was clear that individual studies within the reviews focused on non-evidence-based changes such as cost reduction in prescribing practice, and therefore these reviews were not exclusively based on getting research findings into practice. Three further papers were excluded as they were overviews. Additional file 2 provides bibliographic details and reasons for exclusion for the 45 excluded reviews.
Results
We identified 13 systematic reviews that met the inclusion criteria. The systematic reviews contained between 10 and 66 primary studies each. Of the 313 primary studies in the 13 systematic reviews, there were only 21 duplications. In the systematic reviews, Randomised Controlled Trials (RCT) were favoured by the authors over non-RCT study designs; however, non-randomised Controlled Clinical Trials (CCT), Before and After (B/A) and Interrupted Time Series (ITS) studies were also included. Several systematic reviews covered more than one clinical specialty, while others focused on a specific area including prescribing, psychiatric care, pneumonia, obstetrics, stroke care and diabetes care. The original papers were all published in English: five came from Canada, two from Australia, two from the UK, and one each from France, Germany, Italy and the USA.
The methodological quality of the systematic reviews was assessed by two members of the research team (either AF and AB or AF and JB) using an established quality checklist adapted by Francke et al. from Oxman and Guyatt, on a scale of 0 (poor quality) to 7 (high quality). In most cases there was agreement between the two assessors. Where significant differences arose, they were resolved by discussion between all three members of the review team. Nine of the systematic reviews received the maximum quality score of 7 [12–20]. One systematic review received a score of 6, two systematic reviews received a score of 5 [22, 23] and one systematic review scored 4. Further details in relation to each included systematic review are available in Additional file 3. The flaws identified within these systematic reviews related to: lack of clarity of search methods, lack of comprehensiveness of search methods, potential bias in the selection of articles and failure to report the methods used to combine the findings of the selected articles. The quality scores are listed in Table 1.
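The two-assessor appraisal workflow described above might be sketched as follows. This is a hypothetical illustration, not code used by the review team: the checklist items are not modelled, and the tolerance threshold and review identifiers are invented for the example.

```python
# Hypothetical sketch of the quality-appraisal step: each review is scored
# independently by two assessors on a 0-7 scale, and significant
# disagreements are flagged for discussion by the full team.
# The tolerance value is an invented stand-in for "significant difference".

def appraise(scores_a: dict, scores_b: dict, tolerance: int = 1):
    """Return (agreed scores, reviews needing discussion)."""
    agreed, to_discuss = {}, []
    for review in scores_a:
        a, b = scores_a[review], scores_b[review]
        if abs(a - b) <= tolerance:
            agreed[review] = round((a + b) / 2)
        else:
            to_discuss.append(review)
    return agreed, to_discuss

# Invented example scores for three reviews from two assessors.
a = {"review_1": 7, "review_2": 5, "review_3": 3}
b = {"review_1": 7, "review_2": 5, "review_3": 6}
agreed, to_discuss = appraise(a, b)
print(agreed)      # {'review_1': 7, 'review_2': 5}
print(to_discuss)  # ['review_3']
```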
Table 1 shows the included studies with their quality scores, number of included studies and conclusions, grouped by strategy types drawn from EPOC implementation types. The authors of these reviews did not always report effect sizes, and when they did, the effect sizes were descriptive (e.g. moderate or small) rather than numerical. The systematic reviews identify four strategy types: audit and feedback, computerised decision support, use of opinion leaders and multifaceted interventions, which are considered in turn below. Multifaceted interventions include more than one type of implementation strategy (such as incentives, audit and feedback, educational strategies and reminders).
Audit and feedback
One study looked at the effects of audit and feedback. Prescribing and preventive care seem most likely to be altered by these approaches. More complex areas such as disease management, adherence to guidelines and diagnosis appear less affected by audit and feedback. The authors suggest that this may be due to differences in the complexity of the decision making required of clinicians in these respective facets of care.
Computerised decision support
Two studies focused on computerised decision support. One review suggested that research evidence in the form of computer guidance may give clinicians greater confidence when prescribing and lead to more effective prescribing practice. A second review lamented the lack of high-quality primary studies demonstrating improvements in patient outcomes, and the poor descriptive value of many studies, which makes drawing lessons for implementation difficult. However, the authors cautioned that the findings were based on a small number of studies and that the overall quality of these was low.
Use of opinion leaders
One review looked at the role of local opinion leaders. The authors suggest that opinion leaders can successfully promote evidence-based practice; however, the difficulty of identifying opinion leaders and the labour-intensive nature of assessing their impact may limit the use of opinion leaders as a knowledge transfer intervention.
Multifaceted interventions
The majority of the reviews incorporated studies that focused on more than one intervention type across a variety of clinical areas. Examples of the interventions in one multifaceted approach included: physician and public education, physician peer review, and incentive payments to physicians and hospitals. The most consistent message is that interventions designed to promote the use of evidence in policy are more effective when delivered as part of a multifaceted intervention that combines different approaches [16, 18–21, 23, 24], though the effect is characterised as small to moderate.
A further rationale for multifaceted interventions is that practitioners respond differently to varying types of interventions. For example, one of the reviews investigated whether particular interventions were effective in promoting the use of evidence in obstetrics. It concluded that, in obstetrics, nurses were more receptive to educational strategies than physicians, whilst audit and feedback were effective for both groups.
Discussion
This overview of systematic reviews, with its specific focus on evidence-based interventions, highlights a major limitation of existing reviews and primary studies in contributing to the effectiveness of Evidence-Based Medicine. This review emphasises the importance of ensuring that primary studies and systematic reviews are precise about the extent to which interventions are focused on changing practice based on evidence (as opposed to other information codified in guidelines, education material, etc.). The review identified very few systematic reviews looking exclusively and explicitly at implementing research findings into practice; conversely, 43 reviews either focused on the implementation of non-evidence-based findings or were not explicit about the nature of the findings, and were thus excluded.
This overview of systematic reviews updates the existing body of knowledge relating to the effectiveness of key mechanisms for improving clinical practice and service development [25, 26]. The 13 studies included in this overview of systematic reviews highlight the small effects of single interventions such as audit and feedback, computerised decision support and opinion leaders. Multifaceted interventions are frequently used to promote the use of research in practice. Systematic reviews of multifaceted interventions claim an improvement in effectiveness over single interventions, with effect sizes ranging from small to moderate.
The EPOC group within the Cochrane Collaboration has made a particularly significant contribution in producing reviews relating to mechanisms such as audit and feedback, opinion leaders, and computerised advice. Previous syntheses of existing reviews [1, 4, 28] have identified a large literature focused on changing practice, such as changing prescribing behaviour and service reorganizations. The literature focuses on a specific set of interventions that includes audit, clinical guidelines, opinion leaders and education and feedback. These interventions have been extensively evaluated in randomized controlled trials. The reviewers concluded that promoting the use of evidence in practice requires a complex, multifaceted intervention. While guidelines, feedback and educational interventions achieve small to moderate impacts in isolation, they are far more effective when combined in multiple strategies.
The challenges of achieving a more evidence-based approach to medical practice have been widely reported [29, 30]. We have found that a number of published studies fail to state whether the recommended practice change is based on the best available research evidence. If this is not clearly stated in research papers, it is not safe to assume this is the case; indeed, such an assumption would run contrary to the principles of evidence-based medicine. Without being precise in this important matter we are in danger of assuming that all interventions designed to improve healthcare are implicitly evidence based, without research to support this hypothesis. Transparency and precision are critical to ensuring that evidence continues to play a key role in the development of healthcare and does not merely become shorthand for any 'desirable' change.
Comparison with previous reviews
We know from the literature on the challenges involved in promoting Evidence-Based Medicine that the principles are not universally embedded in mechanisms such as guidelines and educational materials designed to promote clinical practice and service improvement. It is therefore important that evaluations of strategies to change provider behaviour either focus only on changes that are evidence based (not ones that are politically or financially driven) or are explicit about whether the changes are evidence based or not.
In reporting the findings of existing primary studies, the systematic reviews point to two issues that warrant further investigation. Firstly, in order to improve the impact of research on health policy and practice, it is essential that theories are developed that reflect the diverse mechanisms involved in implementation. It can be concluded from the reviews reported here that implementation of evidence into practice requires complex interventions that need to consider issues of context and process. For example, many of the systematic reviews [16, 18–21, 23, 24] highlight the importance of multifaceted interventions to promote implementation of evidence into practice. One of the papers signals the importance of considering what implementation mechanisms might be most effective in particular clinical contexts. Therefore, systematic reviews of effectiveness studies alone may not be sufficiently sensitive to deliver all the learning necessary to improve the use of research evidence in clinical practice and service improvement. A deeper understanding may be gained by complementing these studies with the findings from social science research that considers the important issues of context and process [32, 33]. Secondly, this review identified a much smaller literature focusing explicitly on getting research evidence into practice [12–24]. This result suggests that further studies should explore whether the nature of the behaviour change being sought (either evidence based or not) has an impact on the degree of change that occurs.
However, the existence of a relatively large, rigorously evaluated set of interventions to promote the use of research evidence provides a vital tool (albeit not the only tool) in meeting the challenge of promoting better use of evidence in practice to improve patient care. Greater transparency and precision about the degree to which interventions are designed to promote evidence-based clinical practice and service improvement will further enhance our understanding of the progress made towards evidence-based medicine.
Limitations and strengths of this study
There are some limitations to conducting overviews of systematic reviews. Firstly, there are concerns about double counting individual studies included in different reviews. In this overview we checked for this and found surprisingly little overlap. Secondly, the studies identified in a review of reviews are unlikely to have been published in the last few years, given the time required for a study to be published and then identified and included in a published review. Thus a review of reviews is less likely to include the very latest research, as this would not yet be captured in existing reviews. This might have particular implications for interventions based on new technologies such as electronic reminders for clinicians. We made best efforts to overcome this by running the searches again at the end of 2009, incorporating two additional studies [13, 20]. Thirdly, the reviewers are situated at some distance from the original studies and rely on summaries of existing primary studies produced by others. A further limitation relates to the selection of systematic reviews that looked explicitly at interventions designed to get research evidence into practice. A number of systematic reviews were excluded because their inclusion criteria did not make explicit that the selected studies focused on promoting the use of evidence in practice. Others were excluded because the main body of the text did not make explicit that the systematic review focused on promoting the use of evidence in practice. These omissions may reflect reporting bias rather than flaws in the systematic reviews themselves.
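The double-counting check described above amounts to counting primary studies that appear in more than one included review. A minimal sketch, with invented study and review identifiers:

```python
# Illustrative sketch of the double-counting check: collect the primary
# studies cited by each included review and list those that appear in more
# than one review. All identifiers here are invented for the example.

from collections import Counter

reviews = {
    "review_a": {"smith2001", "jones2003", "patel2005"},
    "review_b": {"jones2003", "lee2004"},
    "review_c": {"patel2005", "kim2002"},
}

# Count how many reviews each primary study appears in.
counts = Counter(study for studies in reviews.values() for study in studies)
duplicated = sorted(s for s, n in counts.items() if n > 1)
print(duplicated)  # ['jones2003', 'patel2005']
```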
However, there is a considerable efficiency gain in doing a review of reviews, particularly as so much synthesis work has already been done in the field. We can learn from a wide body of work by reviewing 13 reviews covering 313 individual studies. Furthermore, a coherent and tested set of interventions emerges that is highly consistent with previous studies.
Ethics committee approval was not required for this review.
Authors' contributions
JB, AB and AF developed the review protocol, AF conducted the searches with guidance from Sarah Lawson (Senior Information Specialist NHS Support KCL) and conducted the initial screening based on titles and abstracts. Full text screening was conducted by JB, AB and AF. Data extraction and quality appraisal were conducted by either JB and AF or AB and AF. The first draft of the paper was produced by AB and JB, with subsequent drafts developed by AB, JB and AF. All authors have read and approved the manuscript.
References
Grol R, Grimshaw J: From best evidence to best practice: effective implementation of change in patients' care. Lancet. 2003, 362: 1225-1230. 10.1016/S0140-6736(03)14546-1.
Green LA, Seifert CM: Translation of research into practice: why we can't "just do it". J Am Board Fam Pract. 2005, 18: 541-545. 10.3122/jabfm.18.6.541.
Grimshaw J, McAuley LM, Bero L: Systematic reviews of the effectiveness of quality improvement strategies and programmes. Quality & Safety in Health Care. 2003, 12: 298-303.
Grimshaw JM, Shirran L, Thomas R, Mowatt G, Fraser C, Bero L: Changing provider behavior - An overview of systematic reviews of interventions. Medical Care. 2001, 39: II2-II45.
Francke AL, Smit MC, de Veer AJ, Mistiaen P: Factors influencing the implementation of clinical guidelines for health care professionals: a systematic meta-review. BMC Medical Informatics & Decision Making. 2008, 8: 38-10.1186/1472-6947-8-38.
Thompson DS, Estabrooks CA, Scott-Findlay S, Moore K, Wallin L: Interventions aimed at increasing research use in nursing: a systematic review. Implementation Science. 2007, 2.
Shiffman RN, Liaw Y, Brandt CA, Corb GJ: Computer-based guideline implementation systems: a systematic review of functionality and effectiveness. Journal of the American Medical Informatics Association. 1999, 6: 104-114. 10.1136/jamia.1999.0060104.
Ranji SR, Steinman MA, Shojania KG, Gonzales R: Interventions to reduce unnecessary antibiotic prescribing: a systematic review and quantitative analysis. Medical Care. 2008, 46: 847-862. 10.1097/MLR.0b013e318178eabd.
van der Wees PJ, Jamtvedt G, Rebbeck T, de Bie RA, Dekker J, Hendriks EJ: Multifaceted strategies may increase implementation of physiotherapy clinical guidelines: a systematic review. Australian Journal of Physiotherapy. 2008, 54: 233-241.
Stead LF, Bergson G, Lancaster T: Physician advice for smoking cessation. Cochrane Database of Systematic Reviews. 2008, John Wiley & Sons, Ltd, Chichester, UK, Issue 2.
Wilson A, Childs S: The effect of interventions to alter the consultation length of family physicians: a systematic review. British Journal of General Practice. 2006, 56: 876-882.
Durieux P, Trinquart L, Colombet I, Niès J, Walton RT, Rajeswaran A: Computerized advice on drug dosage to improve prescribing practice. Cochrane Database of Systematic Reviews. 2008.
Mollon B, Chong J, Holbrook AM, Sung M, Thabane L, Foster G: Features predicting the success of computerized decision support for prescribing: a systematic review of randomized controlled trials. BMC Medical Informatics & Decision Making. 2009, 9: 11-10.1186/1472-6947-9-11.
Doumit G, Gattellari M, Grimshaw J, O'Brien MA: Local opinion leaders: effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews. 2007, (1): CD000125.
Davey P, Brown E, Fenelon L, Finch R, Gould I, Hartman G: Interventions to improve antibiotic prescribing practices for hospital inpatients. Cochrane Database of Systematic Reviews. 2005, (4): CD003543.
Arnold SR, Straus SE: Interventions to improve antibiotic prescribing practices in ambulatory care. Cochrane Database of Systematic Reviews. 2005, (4): CD003539.
Hakkennes S, Dodd K: Guideline implementation in allied health professions: a systematic review of the literature. Quality & Safety in Health Care. 2008, 17: 296-300. 10.1136/qshc.2007.023804.
Chaillet N, Dube E, Dugas M, Audibert F, Tourigny C, Fraser WD: Evidence-based strategies for implementing guidelines in obstetrics: a systematic review. Obstetrics & Gynecology. 2006, 108: 1234-1245. 10.1097/01.AOG.0000236434.74160.8b.
Chaillet N, Dumont A: Evidence-based strategies for reducing cesarean section rates: a meta-analysis. Birth. 2007, 34: 53-64. 10.1111/j.1523-536X.2006.00146.x.
de Belvis AG, Pelone F, Biasco A, Ricciardi W, Volpe M: Can primary care professionals' adherence to Evidence Based Medicine tools improve quality of care in type 2 diabetes mellitus? A systematic review. Diabetes Research & Clinical Practice. 2009, 85: 119-131. 10.1016/j.diabres.2009.05.007.
Weinmann S, Koesters M, Becker T: Effects of implementation of psychiatric guidelines on provider performance and patient outcome: systematic review. Acta Psychiatrica Scandinavica. 2007, 115: 420-433. 10.1111/j.1600-0447.2007.01016.x.
Bywood PT, Lunnay B, Roche AM: Strategies for facilitating change in alcohol and other drugs (AOD) professional practice: a systematic review of the effectiveness of reminders and feedback. Drug & Alcohol Review. 2008, 27: 548-558. 10.1080/09595230802245535.
Kwan J, Hand P, Sandercock P: Improving the efficiency of delivery of thrombolysis for acute stroke: a systematic review. QJM. 2004, 97: 273-279. 10.1093/qjmed/hch054.
Simpson SH, Marrie TJ, Majumdar SR: Do guidelines guide pneumonia practice: a systematic review of interventions and barriers to best practice in the management of community-acquired pneumonia. Respiratory Care Clinics of North America. 2005, 11: 1-13.
Greco PJ, Eisenberg JM: Changing physicians' practices. N Engl J Med. 1993, 329: 1271-1274. 10.1056/NEJM199310213291714.
Oxman A, Thomson MA, Davis DA, Haynes RB: No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. CMAJ Canadian Medical Association Journal. 1995, 153: 1423-1431.
Jamtvedt G, Young JM, Kristoffersen DT, O'Brien MA, Oxman AD: Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews. 2006, (2): CD000259.
Grimshaw JM, Shirran L, Thomas R, Mowatt G, Fraser C, Bero L: Changing provider behavior: an overview of systematic reviews of interventions. Medical Care. 2001, 39: II2-II45.
Haynes B, Haines A: Barriers and bridges to evidence based clinical practice. BMJ. 1998, 317: 273-276.
Dopson S, Locock L, Gabbay J, Ferlie E, Fitzgerald L: Evidence-Based Medicine and the Implementation Gap. Health. 2003, 7: 311-330.
Oxman AD, Schünemann HJ, Fretheim A: Improving the use of research evidence in guideline development: 12. Incorporating considerations of equity. Health Research Policy and Systems. 2006, 4.
Dopson S, Fitzgerald L: Knowledge to action? Evidence-based healthcare in context. 2005, Oxford: Oxford University Press
Angus J, Hodnett E, O'Brien-Pallas L: Implementing evidence-based nursing practice: a tale of two intrapartum nursing units. Nurs Inq. 2003, 10: 218-228. 10.1046/j.1440-1800.2003.00193.x.
Acknowledgements and Funding
This work forms part of the European Implementation Score (EIS) project, funded by the EU 7th Framework Programme. The EU FP7 EIS project is a collaboration between King's College London, University of Florence, University of Lund, London School of Economics, University College London, the German Stroke Foundation and Charité - Universitätsmedizin Berlin. The work packages are led by: Prof. C. Wolfe, Prof. D Inzitari, Dr J. Baeza, Prof. B. Norrving, Prof. P. Heuschmann, Prof. A. McGuire, Prof. H. Hemingway, Dr M. Wagner and Dr C. McKevitt. This paper has been written by the authors on behalf of the EIS Work Package 3 board: Dr J. Baeza, School of Social Sciences and Public Policy, King's College London, UK; Dr A. Rudd, Guy's and St. Thomas' Foundation Trust, St. Thomas' Hospital, UK; Prof. M. Giroud, Department of Neurology, University of Dijon, France; and Prof. M. Dennis, Division of Clinical Neurosciences, University of Edinburgh, UK. We thank two anonymous reviewers and Professors Charles Wolfe and Naomi Fulop for their very helpful comments on a previous draft of this paper. AB acknowledges financial support from the Department of Health via the National Institute for Health Research (NIHR) comprehensive Biomedical Research Centre award to Guy's and St Thomas' NHS Foundation Trust in partnership with King's College London and King's College Hospital NHS Foundation Trust.
Declaration of Competing interests
The authors declare that they have no competing interests.
Boaz, A., Baeza, J. & Fraser, A. Effective implementation of research into practice: an overview of systematic reviews of the health literature. BMC Res Notes 4, 212 (2011). https://doi.org/10.1186/1756-0500-4-212
Keywords
- Systematic Review
- Primary Study
- Research Evidence
- Opinion Leader
- Strategy Type