  • Short Report
  • Open access

Knowledge of undisclosed corporate authorship (“ghostwriting”) reduces the perceived credibility of antidepressant research: a randomized vignette study with experienced nurses

Abstract

Background

There is much concern regarding undisclosed corporate authorship (“ghostwriting”) in the peer-reviewed medical literature. However, there are no studies of how disclosure of ghostwriting alone impacts the perceived credibility of research results.

Findings

We conducted a randomized vignette study with experienced nurses (n = 67), using a fictional study of antidepressant medication. The vignette described a randomized controlled trial and gave efficacy and adverse effect rates. Participants were randomly assigned to one of two authorship conditions, either (a) traditional authorship (n = 35) or (b) ghostwritten paper (n = 32), and then completed a perceived credibility scale. Our primary hypothesis was that the median total perceived credibility score would be lower in the group assigned to the ghostwritten paper. Our secondary hypotheses were that participants randomized to the ghostwritten condition would be less likely to (a) recommend the medication, and (b) want the psychiatrist in the vignette as their own clinician. We also asked respondents to estimate efficacy and adverse effect rates for the medication.

There was a statistically significant difference in perceived credibility between conditions, with lower ratings among those assigned to the ghostwriting condition. This amounted to a difference of 9.0 points between medians on the 35-point perceived credibility scale, as tested through the Mann–Whitney U test. There was no statistically significant difference between groups in terms of recommending the medication, wanting the featured clinician as their own, or in estimates of efficacy or adverse effects (p > .05 for all such comparisons).

Conclusion

In this study, disclosure of ghostwriting resulted in lower perceived credibility ratings.

Findings

Background

There is much concern regarding conflicts-of-interest [1, 2] and authorship within the peer-reviewed medical literature, particularly in pharmaceutical industry-funded randomized controlled trials of therapeutics [3–7]. This has largely been sparked by a growing body of evidence from medico-legal cases demonstrating the prevalence of “ghostwriting” [8], an undisclosed conflict-of-interest in which a pharmaceutical company employee or subcontractor co-authors a study but is not listed on the authorship byline [9]. It is now common for evidence of ghostwriting to be available on the internet for widely used blockbuster medications [10–12]. Increasingly, clinicians are likely to encounter revelations of ghostwriting for established treatments both within medical journals [11–14] and in the general media, such as the New York Times [15, 16].

As reported in both a prominent randomized controlled trial and a recent systematic review of the literature, financial conflicts-of-interest (COIs) are known to reduce the perceived credibility of research [17, 18]. However, little is known regarding how practicing clinicians perceive ghostwritten research. To our knowledge, there has been only one study directly related to this issue [19]. This was a study of 50 hospital-based clinicians randomly assigned to receive one of two research vignettes, one in which the author had a financial COI and in which the article had been ghostwritten, and the other in which there was no COI and traditional authorship. The research results in the COI group were much less believed by clinicians (Cohen’s d = 1.4), suggesting that disclosure of relatively common COIs had a clinically significant impact on their view of the research. The major limitation of the study was that the COI conditions were bundled (there was no “ghostwriting only” condition) so that the impact of financial COI or ghostwriting could not be identified separately. We therefore conducted a follow-up study examining the impact of ghostwriting alone on perceived credibility.

Methods

Two vignettes modified slightly from the original study were used [19]. Both vignettes contained an identical description of a fictional study of a new antidepressant (“Serovux”) for adult use. The vignettes described the study sample, methods, and results; the conclusion claimed that Serovux was a safe and effective treatment, utilizing language from a well-known antidepressant study [10, 20]. One vignette described the research as having been ghostwritten (ghostwriting condition), while the other described traditional academic authorship (non-ghostwriting condition). Other than modifying the reported COI and changing the clinical population from children to adults, no major changes were made to the vignettes used in the original experiment. We opted not to change vignette content substantially because we had found evidence of face validity in our earlier study; ghostwriting experts agreed that these vignettes described a ghostwriting incident “similar to those known to have occurred in real life” and agreed that a clinician reading the scientific literature would be “likely to come across studies that were generated in a manner similar to that described” [19]. The vignettes are available as Additional files (see Additional file 1: Vignette 1.pdf and Additional file 2: Vignette 2.pdf).

Our primary outcome measure was perceived credibility of the results, measured through a five-item, seven-point Likert-type scale (maximum score 35) that asked respondents to rate how truthful, accurate, credible, honest, and sincere the research vignette was. Although this scale has not been formally validated in terms of psychometrics, it was derived from items within the existing perceived credibility instrument literature, and this instrument was found to be reliable (Cronbach’s alpha = 0.95) in a previous study [19]. We measured two types of secondary outcomes. First, two dichotomous questions answered in a yes/no format: (a) “If I was having problems with depression, I would like to have Dr. Harvey [the name of the psychiatrist featured in the vignette] as my psychiatrist,” and (b) “If a friend you cared about was severely depressed, would you recommend Serovux?” Second, we asked respondents to estimate the percentage of patients who (a) benefitted significantly from Serovux, and (b) had significant side effects from Serovux.

Our primary hypothesis was that the median total perceived credibility score would be lower in the group assigned to the ghostwriting condition. Our secondary hypotheses were that participants randomized to the ghostwritten condition would be less likely to (a) recommend the medication, and (b) want the psychiatrist in the vignette as their own clinician. We also speculated that when asked to estimate efficacy and adverse effect rates, participants in the ghostwriting group would discount the reported rate of efficacy and inflate the reported rate of adverse effects.

We performed a power analysis using G*Power 3.04 [21]. Guided by our previous results, we calculated that we needed 35 participants per group to detect a large difference in perceived credibility (Cohen’s d = 0.8) with 95% power. Data analysis was performed using PASW version 18.0 (SPSS, Inc., Chicago, IL, USA) and Minitab version 15.1.0.0 (Minitab Inc., State College, Pennsylvania, USA). Our data analysis strategy consisted of comparing total credibility scores by group using the Mann–Whitney U test, which is appropriate for ordinal data that are not normally distributed, as well as contingency table analysis for the dichotomous variables. Given our small sample size, 95% confidence intervals were to be reported to define the precision of our point estimates. Secondarily, we also planned to describe mean total credibility scores across groups and to calculate effect size in the form of Cohen’s d.
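The sample-size target above can be sketched numerically. The snippet below is an illustration using the normal approximation and assuming a one-tailed test (the primary hypothesis was directional); G*Power's noncentral-t computation yields the slightly larger figure of 35 per group reported here.

```python
# A-priori sample size for a two-group comparison via the normal
# approximation (a sketch; assumes a one-tailed alpha of .05).
from math import ceil
from statistics import NormalDist

d = 0.8        # large effect size (Cohen's d)
alpha = 0.05   # one-tailed, since the hypothesis is directional
power = 0.95

z_alpha = NormalDist().inv_cdf(1 - alpha)
z_beta = NormalDist().inv_cdf(power)
n_per_group = ceil(2 * ((z_alpha + z_beta) / d) ** 2)
print(n_per_group)  # 34 by this approximation; 35 under the noncentral t
```

The one-unit gap between the approximation and G*Power's figure reflects the exact noncentral-t calculation, not a different design.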

Participants were recruited from two sources. First, data were collected from students in a graduate-level nurse practitioner course taking place at a large Southwestern university in a major metropolitan area. Using a random number generator (http://www.random.org), the sequence of the printed vignettes and attached instrumentation was randomized, and the packets were then handed out to students in the classroom setting. The researcher distributing the materials was blind to their content (i.e., he did not know which vignette was distributed to each nurse). All students in the class were invited to voluntarily participate. Second, an invitation to take an on-line version of the survey was distributed to community nurses through a local email listserv. The survey was placed online using SurveyGizmo survey software, and randomization was automated. The title of the project was “Perceptions of Biomedical Research” and participants were not aware that the goal of the project was to examine the impact of ghostwriting on perceived credibility. Institutional review board approval was granted by the Arizona State University Office of Research Integrity and Assurance, and each participant provided informed consent.
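The balanced randomization of the printed packet sequence can be sketched as follows. This is an illustrative stand-in (the study used http://www.random.org); the function name and seed are hypothetical.

```python
# Sketch of randomizing a balanced sequence of vignette packets
# before distribution. Illustrative only; the study used random.org.
import random

def randomize_sequence(n_packets, seed=None):
    """Return a shuffled, balanced sequence of vignette labels."""
    rng = random.Random(seed)
    half = n_packets // 2
    labels = ["traditional"] * half + ["ghostwritten"] * (n_packets - half)
    rng.shuffle(labels)
    return labels

sequence = randomize_sequence(60, seed=42)
print(sequence.count("traditional"), sequence.count("ghostwritten"))  # 30 30
```

Because the distributing researcher handled sealed, pre-shuffled packets, the shuffle order itself carries the blinding.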

Results

Sixty-seven nurses participated in this study, meaning that our study was underpowered according to our pre-study power analysis. Fifty-seven of the sixty-one nurses registered for the course participated in this study, for a response rate of 93.44%. We did not track whether the nursing students who did not participate (n = 4) refused participation or were simply not in attendance for that class session. An additional ten nurses completed the online version of the survey.

There were no missing data for the perceived credibility items. The Cronbach’s alpha for the perceived credibility scale used in the present study was 0.91. A principal components analysis (PCA) extracted only one component, thus establishing that our 5-question credibility scale was a unidimensional measure of credibility. The two questions that asked about the efficacy/side effect rate of Serovux had 7.5% and 9.0% missing data, respectively, while the remaining variables all had minimal missing data (<5%). Given the small amount of missing data, we performed complete-case analyses [22].
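The internal-consistency statistic reported above can be illustrated with the standard Cronbach's alpha formula. The function below is a generic sketch, and the item scores in the example are made-up illustrative data, not the study's data.

```python
# Cronbach's alpha for a k-item scale (standard formula).
def cronbach_alpha(items):
    """items: one inner list of respondent scores per scale item."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Perfectly consistent items give alpha = 1.0
identical = [[1, 4, 7, 3, 5]] * 5
print(cronbach_alpha(identical))  # 1.0
```

An alpha of 0.91, as reported here, indicates that the five items covary strongly enough to be summed into a single total score.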

The majority of participants were female, Caucasian, and experienced clinicians (see Table 1). Respondents randomized to the non-ghostwriting vignette had a mean credibility score of 22.00 ± 5.87, while those assigned to the ghostwriting vignette had a mean credibility score of 15.37 ± 7.85, a difference in perceived credibility of 6.63 points (Cohen’s d = 0.96). Since the data did not meet the assumption of normality necessary for parametric tests and the credibility scale is ordinal, we tested the difference using the Mann–Whitney U statistic [23]. This resulted in a difference between medians of 9.0 points (95% CI = 4.0–12.0; see Table 2).
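The reported effect size can be recovered directly from the summary statistics above. The sketch below uses the equal-weight pooled-SD formula, a simplification given the slightly unequal group sizes (35 vs. 32).

```python
# Cohen's d recomputed from the reported group means and SDs
# (equal-weight pooled SD; a simplification for unequal n).
from math import sqrt

mean_control, sd_control = 22.00, 5.87   # non-ghostwriting condition
mean_ghost, sd_ghost = 15.37, 7.85       # ghostwriting condition

pooled_sd = sqrt((sd_control**2 + sd_ghost**2) / 2)
d = (mean_control - mean_ghost) / pooled_sd
print(round(d, 2))  # 0.96, matching the reported effect size
```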

Table 1 Participant demographics by vignette group*
Table 2 Credibility ratings by ghostwriting condition

There were no statistically significant differences across groups on any of the remaining variables. Respondents randomized to the ghostwriting condition were less likely to want the psychiatrist in the vignette as their own clinician (OR = 0.34, 95% CI = 0.10–1.24), or to recommend the antidepressant (“Serovux”) featured in the vignette (OR = 0.44, 95% CI = 0.13–1.49), but these results were not statistically significant. The mean estimated efficacy rate for Serovux was 53.53% ± 14.12 in the non-ghostwriting condition and 47.50% ± 24.59 in the ghostwriting condition, and the mean estimated side effect rate was 11.85% ± 7.86 in the non-ghostwriting condition and 13.24% ± 10.65 in the ghostwriting condition, with identical median values across both vignette conditions (p > .05 for all these comparisons).
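The odds-ratio calculations above can be sketched as follows. The cell counts used here are a hypothetical reconstruction chosen to be roughly consistent with the reported group sizes and OR of 0.34; the exact counts are not given in the text, and the Wald interval below may differ slightly from the method the authors used.

```python
# Odds ratio with a 95% Wald confidence interval from a 2x2 table.
# Cell counts are hypothetical, for illustration only.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = yes/no in group 1; c, d = yes/no in group 2."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical: 4/32 endorsed in the ghostwriting group,
# 10/34 endorsed in the non-ghostwriting group.
or_, lo, hi = odds_ratio_ci(4, 28, 10, 24)
print(round(or_, 2))  # 0.34
```

With cells this sparse, the wide interval crossing 1.0 illustrates why these comparisons did not reach statistical significance.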

We had combined two subsamples for analysis, nurses taking a graduate-level class (n = 57) and nurses participating in a local email listserv for community nurses (n = 10). As a separate analysis, we excluded the nurses reached through the email listserv (n = 10) and re-analyzed the data. The results were essentially the same (i.e., <0.5 points difference in overall perceived credibility ratings).

Discussion

Experienced nurses (n = 67) who read a fictional antidepressant study found the vignette less credible when informed that the research was ghostwritten. The difference was statistically significant (p < .001), consisting of a 9-point difference (95% CI, 4.0–12.0) between medians on a 35-point scale. There was a difference of 6.63 points in mean scores, corresponding to a Cohen’s d of 0.96. By normal social science standards, this is considered a large effect size [24]. To our knowledge, this is the first published study to find an impact of ghostwriting alone on perceived credibility.

This finding has several potential implications. The previous study [19] used vignettes reporting a pediatric antidepressant study, raising the possibility that respondents were reacting to the issue of pediatric psychiatric prescribing, a controversial issue in some quarters. The present study reached similar findings while reporting an adult antidepressant study, suggesting that the results of these studies are not dependent on the population described. These results also may inform the ongoing debate regarding the ability of clinicians to engage in evidence-based practice within an environment of less-than-ideal transparency in authorship, selective reporting of data, and so forth. In short, activities such as ghostwriting and selective reporting have ramifications downstream for the clinicians who rely on such data to practice ethically and scientifically [25–28]. Although this is somewhat speculative, the large effect size found in this study could be interpreted as indicating that practicing clinicians consider ghost authorship an important issue worthy of attention and regulation. Finally, legal scholars are now exploring the issue of ghostwriting, and our findings may have ramifications for this body of literature as well [29, 30].

However, our finding that disclosure of ghostwriting is associated with lower perceived credibility also raises many questions that cannot be answered by the present study. As suggested by two anonymous reviewers, these include: Why do the nurses perceive ghostwritten research as less credible? Are nurses aware of the problems that ghostwriting presents to the integrity of science, or are they reacting to some other factor? Is the primary issue the challenge to evidence-based medicine, a reaction to a deceptive practice, or hostility to the pharmaceutical industry? Are nurses aware of the International Committee of Medical Journal Editors (ICMJE) authorship standards, which were violated in this study? Is it reasonable to expect any seasoned health professional to form an opinion of a medication or clinician based on one short vignette? All of these questions are worthy of further investigation, and many are probably best addressed through qualitative inquiry.

After completing two research studies on ghostwriting, one potential explanation for the finding of reduced credibility has emerged, albeit subjectively, anecdotally, and speculatively. During our data collection, we encountered anecdotal evidence suggesting that practicing clinicians are not generally aware of the practice of ghostwriting (e.g., from written comments on the instrumentation and verbal comments spoken to the investigators as subjects returned their instruments). If this is broadly true, a portion of the reduced credibility may be due to a disconnect between how clinicians envision research and authorship and the process described in the ghostwriting vignette condition. Future investigations on this topic might consider this hypothesis.

Although the nurses in our study found ghostwritten research less credible, the other variables we examined had smaller effect sizes. Subjects randomized to the ghostwriting condition were less likely to want the psychiatrist in the vignette as their own clinician or to recommend the antidepressant for a depressed friend, but this difference was not statistically significant. This is partially a result of low statistical power for these variables, as we powered the study to examine our primary outcome, which we assumed had a large effect size. However, there were other nuances in these data. For both of these questions, only 10/34 (29.41%) participants in the non-ghostwriting group endorsed the psychiatrist and antidepressant portrayed in the vignette. We consider this a low level of endorsement given that the non-ghostwriting vignette presented a rigorous clinical trial similar to many well-cited reports of real-world antidepressant research. This skepticism may reflect growing public dissatisfaction with the pharmaceutical industry [31], knowledge of the contemporary debate regarding antidepressant medications [32–35], or unknown factors.

However, these findings are also suggestive of the complexities involved in moving from perceived credibility (a fairly straightforward outcome measure) to decision making. Although subjects randomized to the ghostwriting condition had lower credibility scores, they did not substantially discount the efficacy of the medication nor inflate the rate of adverse effects. In other words, they found the ghostwritten study less credible; but when asked to act or report on the data given to them, many arguably answered as if they did indeed believe the results. This may reflect defensive decision making on the part of prescribers [36], or the use of heuristics [37] in decision making.

Our study has several limitations that should shape interpretation of the results. Most of the nurses in the study were pursuing an advanced degree and thus may not be representative of practicing nurses in general. Only nurses from one geographic locale were surveyed. Also, this was a small study capable of detecting only large effect sizes. Future studies should utilize larger sample sizes, include participants from multiple geographic locales, and extend to other health professionals (e.g., bioethicists, psychiatrists).

The majority of our participants were taking a graduate-level class on evidence-based nursing, which may have had an impact on our results. Graduate programs generally strive to teach research within the context of critical thinking. It is possible that the students in our sample were more critically minded than the average clinician, a hypothesis congruent with other observations of graduate students [38, 39]. However, previous vignette research [19] examining practicing clinicians’ (not graduate students) perceptions has found lower credibility ratings associated with COI.

In conclusion, we find that disclosure of ghostwriting alone, without other accompanying COI, is sufficient to lower the perceived credibility of psychiatric research to an important degree among professional nurses. This finding is congruent with our previous study finding that a package of relatively common COI, including ghostwriting, also lowered the perceived credibility of psychiatric research. Moving forward, we suggest that qualitative research exploring the thought processes behind participants’ ratings and treatment decisions would be informative.

Abbreviations

COI:

Conflict-of-interest

CI:

Confidence interval

OR:

Odds ratio

PCA:

Principal component analysis.

References

  1. McGauran N, Wieseler B, Kreis J, Schüler Y-B, Kölsch H, Kaiser T: Reporting bias in medical research - a narrative review. Trials. 2010, 11(1): 37. doi:10.1186/1745-6215-11-37

  2. Rochon PA, Sekeres M, Hoey J, Lexchin J, Ferris LE, Moher D, et al: Investigator experiences with financial conflicts of interest in clinical trials. Trials. 2011, 12(1): 9.

  3. Wislar JS, Flanagin A, Fontanarosa PB, DeAngelis CD: Honorary and ghost authorship in high impact biomedical journals: a cross sectional survey. BMJ. 2011, 343: d6128.

  4. Moffatt B, Elliott C: Ghost marketing: pharmaceutical companies and ghostwritten journal articles. Perspect Biol Med. 2007, 50(1): 18-31.

  5. Fugh-Berman A, Dodgson SJ: Ethical considerations of publication planning in the pharmaceutical industry. Open Med. 2008, 2(4): e121-e124.

  6. Sismondo S: Ghost management: how much of the medical literature is shaped behind the scenes by the pharmaceutical industry? PLoS Med. 2007, 4(9): e286. doi:10.1371/journal.pmed.0040286

  7. Sismondo S, Doucet M: Publication ethics and the ghost management of medical publication. Bioethics. 2010, 24(6): 273-283.

  8. Fugh-Berman AJ: The haunting of medical journals: how ghostwriting sold “HRT”. PLoS Med. 2010, 7(9): e1000335. doi:10.1371/journal.pmed.1000335

  9. Leo J, Lacasse JR, Cimino AN: Why does academic medicine allow ghostwriting? A prescription for reform. Society. 2011, 48: 371-375. doi:10.1007/s12115-011-9455-2

  10. Jureidini J, McHenry L, Mansfield P: Clinical trials and drug promotion: selective reporting of study 329. Int J Risk Saf Med. 2008, 20(1): 73-81.

  11. Healy D, Cattell D: Interface between authorship, industry and science in the domain of therapeutics. Br J Psychiatry. 2003, 183: 22-27. doi:10.1192/bjp.183.1.22

  12. Lacasse JR, Leo J: Ghostwriting at elite academic medical centers in the United States. PLoS Med. 2010, 7(2): e1000230. doi:10.1371/journal.pmed.1000230

  13. Spielmans GI, Parry PI: From evidence-based medicine to marketing-based medicine: evidence from internal industry documents. J Bioeth Inq. 2010, 7(1): 13-29. doi:10.1007/s11673-010-9208-8

  14. Ross J, Hill K, Egilman DS, Krumholz HM: Guest authorship and ghostwriting in publications related to rofecoxib. JAMA. 2008, 299(15): 1800-1812. doi:10.1001/jama.299.15.1800

  15. Wilson D, Singer N: Ghostwriting widespread in medical journals, study says. New York Times [Internet]. [cited 2011 Nov 8]. Available from: http://www.nytimes.com/2009/09/11/business/11ghost.html

  16. Singer N: Ghostwriters paid by Wyeth aided its drugs. New York Times [Internet]. [cited 2011 Nov 8]. Available from: http://www.nytimes.com/2009/08/05/health/research/05ghost.html?pagewanted=all

  17. Schroter S, Morris J, Chaudhry S, Smith R, Barratt H: Does the type of competing interest statement affect readers’ perceptions of the credibility of research? Randomised trial. BMJ. 2004, 328(7442): 742-743. doi:10.1136/bmj.38035.705185.F6

  18. Licurse A, Barber E, Joffe S, Gross C: The impact of disclosing financial ties in research and clinical care: a systematic review. Arch Intern Med. 2010, 170(8): 675-682.

  19. Lacasse JR, Leo J: Knowledge of ghostwriting and financial conflicts-of-interest reduces the perceived credibility of biomedical research. BMC Res Notes. 2011, 4(1): 27. doi:10.1186/1756-0500-4-27

  20. Keller M, Ryan N, Strober M, Klein RG, Kutcher SP, Birmaher B, et al: Efficacy of paroxetine in the treatment of adolescent major depression: a randomized, controlled trial. J Am Acad Child Adolesc Psychiatry. 2001, 40(7): 762-772. doi:10.1097/00004583-200107000-00010

  21. Faul F, Erdfelder E, Lang AG, Buchner A: G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. 2007, 39: 175-191. doi:10.3758/BF03193146

  22. Desai M, Esserman DA, Gammon MD, Terry MB: The use of complete-case and multiple imputation-based analyses in molecular epidemiology studies that assess interaction effects. Epidemiol Perspect Innov. 2011, 8(1): 5. doi:10.1186/1742-5573-8-5

  23. Sprent P, Smeeton NC: Applied nonparametric statistical methods. 4th edition. 2007, Boca Raton, FL: Chapman & Hall.

  24. Cohen J: Quantitative methods in psychology: a power primer. Psychol Bull. 1992, 112(1): 155-159.

  25. Brody H: Hooked: ethics, the medical profession, and the pharmaceutical industry. 2006, Rowman & Littlefield.

  26. Healy D: Shaping the intimate: influences on the experience of everyday nerves. Soc Stud Sci. 2004, 34(2): 219-245. doi:10.1177/0306312704042620

  27. Gottstein J: Ethical and moral obligations arising from revelations of pharmaceutical company dissembling. Ethic Hum Psychol Psych. 2010, 12(1): 22-29. doi:10.1891/1559-4343.12.1.22

  28. Gomory T, Wong SE, David C, Lacasse JR: Clinical social work & the biomedical industrial complex. J Sociol Soc Welf. 2011, 38(4): 135-165.

  29. Stern S, Lemmens T: Legal remedies for medical ghostwriting: imposing fraud liability on guest authors of ghostwritten articles. PLoS Med. 2011, 8(8): e1001070. doi:10.1371/journal.pmed.1001070

  30. Bosch X, Esfandiari B, McHenry L: Challenging medical ghostwriting in US courts. PLoS Med. 2012, 9(1): e1001163. doi:10.1371/journal.pmed.1001163

  31. USA Today, Kaiser Family Foundation, Harvard School of Public Health: The public on prescription drugs and pharmaceutical companies [Internet]. [cited 2011 Nov 8]. Available from: http://www.kff.org/kaiserpolls/upload/7748.pdf

  32. Turner E, Matthews A, Linardatos E, Tell RA, Rosenthal R: Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008, 358(3): 252-260. doi:10.1056/NEJMsa065779

  33. Kirsch I, Deacon B, Huedo-Medina T, Scoboria A: Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration. PLoS Med. 2008, 5(2): e45. doi:10.1371/journal.pmed.0050045

  34. Trivedi M, Rush A: Evaluation of outcomes with citalopram for depression using measurement-based care in STAR*D: implications for clinical practice. Am J Psychiatry. 2006, 163: 28-40. doi:10.1176/appi.ajp.163.1.28

  35. Lacasse JR, Leo J: Questionable advertising of psychotropic medications and disease mongering. PLoS Med. 2006, 3(7): e321. doi:10.1371/journal.pmed.0030321

  36. Gigerenzer G, Gray JAM: Launching the century of the patient. In: Better doctors, better patients, better decisions: envisioning health care 2020. 2011, Cambridge, MA: MIT Press.

  37. Gigerenzer G, Gaissmaier W: Heuristic decision making. Annu Rev Psychol. 2011, 62(1): 451-482. doi:10.1146/annurev-psych-120709-145346

  38. Onwuegbuzie AJ: Critical thinking skills: a comparison of doctoral and masters-level students. Coll Stud J. 35(3): 477-451.

  39. Walbot V: Are we training pit bulls to review our manuscripts? J Biol. 2009, 8(3): 24. doi:10.1186/jbiol125


Acknowledgments

We acknowledge the helpful assistance of Joseph Anson.

Author information


Corresponding author

Correspondence to Jeffrey R Lacasse.

Additional information

Competing interests

JRL and JL are members of Healthy Skepticism, an international non-profit organization dedicated to reducing harm from misleading drug promotion. JRL serves on the Scientific Advisory Council of the Foundation for Excellence in Mental Health Care.

Authors’ contributions

Conceived the study: JRL. Collected the data: JRL KFB. Analyzed the data and wrote first draft: JRL. Participated in writing the paper: ANC JL JRL KFB MDC. All authors read and approved final version of manuscript: ANC JL JRL KFB MDC.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Lacasse, J.R., Leo, J., Cimino, A.N. et al. Knowledge of undisclosed corporate authorship (“ghostwriting”) reduces the perceived credibility of antidepressant research: a randomized vignette study with experienced nurses. BMC Res Notes 5, 490 (2012). https://doi.org/10.1186/1756-0500-5-490
