
Assessing general hospital doctors’ attitudes toward psychiatric care in multicultural settings

Abstract

Objective

Psychiatric care in general hospitals depends on collaboration with non-psychiatrist doctors. The Doctors’ Attitudes toward Collaborative Care for Mental Health (DACC-MH) is a two-factor scale designed to address this issue and validated in the UK in 2010. Its applicability in contemporary, culturally diverse settings is unknown, however, so this study aimed to determine its validity and internal consistency using data from our 2021 international study. Confirmatory and exploratory factor analyses were used to compare results from our 2021 study (n = 889) with those from the 2010 UK study (n = 225).

Results

The DACC-MH consultation subscale, but not the management subscale, aligned with data from our larger, international study. The 2-factor model failed the Chi-square goodness-of-fit test (χ2(19) = 53.9, p < 0.001), although other fit indices were acceptable. The previously identified attitudinal difference between physicians and surgeons was replicated, but measurement invariance for this comparison could not be established. Exploratory factor analysis suggested a 6-factor model, in contrast to the 2-factor model proposed for the UK sample in 2010. The DACC-MH scale thus shows significant limitations when applied to a larger, international dataset. Cultural and generational differences in doctors’ attitudes appear relevant and should be considered in assessing barriers to psychiatric care in general hospitals.


Introduction

Psychiatric comorbidity in medically ill populations is associated with increased mortality and morbidity, functional impairment, loss of productivity, longer hospital stays, and increased use of health services [1,2,3,4]. These findings highlight the potential importance of psychiatric care in general hospital settings. Consultation-liaison psychiatry (CLP) is a recognised model for providing timely and evidence-based psychiatric care in general hospitals, but funding and management remain a challenge. Studies have shown that provision of CLP services varies across hospital settings [5,6,7] and depends on the differing views of frontline clinicians, who are the main source of referrals [8]. Attitudes of general hospital doctors toward CLP and other collaborative care models [9] are thus pivotal to their implementation and outcomes.

Collaborative care in this case includes two key components: mental health management by non-psychiatric health professionals and their consultation with psychiatrists [9]. Attitudes of hospital doctors toward mental health management and their willingness to consult with psychiatrists have been reported to vary by specialty, seniority, gender and cultural setting [10,11,12,13,14,15]. In 2010, Thombs and colleagues developed the 8-item Doctors’ Attitudes Toward Collaborative Care for Mental Health (DACC-MH) scale, with two hypothesised factors, management and consultation, validated by confirmatory factor analysis (CFA) [12]. Data were taken from a previous study [11] at a teaching hospital in London in 2003 (n = 225), which used an original questionnaire (41 items) developed by Mayou and Smith in 1986 [10]. Thombs’ analysis tested known-groups validity and found that physicians scored higher than surgeons on all 8 items, indicating more favourable attitudes toward both consultation and management.

The influence of culture on medical attitudes is illustrated by recent studies of doctors practising in various countries [13,14,15]. Since the DACC-MH scale was developed only in the UK, the aim of this study was to determine its applicability and psychometric properties in other settings.

Methods

We used data from our study [15] of 889 hospital specialists based in seven culturally distinct countries (China, New Zealand, Sri Lanka, Israel, Brazil, Russia, and the Netherlands), which used the original questionnaire [10] from which the DACC-MH was derived. The total sample of responders was characterised by an equal gender distribution and a majority (51%) working as physicians/internal medicine specialists. Statistical analyses were performed using the statistical package R [16].

First, we conducted confirmatory factor analysis (CFA) of the previously reported 2-factor model to compare results from our international data with those from the 2003 UK dataset. Second, we used exploratory factor analysis (EFA) to determine the optimal factor structure for our international sample.

Each item used in both the CFA and EFA offered respondents a binary choice: agree or disagree. Fifty-eight participants (6.5%) were missing at least one of the eight CFA items, so a multiple imputation approach was used (see Supplementary File 1). The CFA was conducted using procedures appropriate for ordinal (including dichotomous) data.
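To make this step concrete, the following R sketch shows one way to carry it out with the mice and lavaan packages. It is illustrative rather than the authors’ exact analysis code: the item names and factor assignments (q3, q33, q35, q39 for consultation; q11, q31, q34, q37 for management) are inferred from the item numbers mentioned in the Results, and pooling across imputations is simplified to a single completed dataset.

```r
# Minimal sketch (not the exact analysis code): impute missing binary
# responses with mice, then fit the hypothesised 2-factor DACC-MH model
# in lavaan with an estimator suited to dichotomous indicators.
library(mice)
library(lavaan)

# 'items': assumed data frame of the eight DACC-MH items coded 0/1 (agree/disagree)
imp <- mice(items, m = 5, seed = 1)   # multiple imputation of missing responses
dat <- complete(imp, 1)               # one completed dataset, for illustration only;
                                      # the reported analysis pools across imputations

model <- '
  consultation =~ q3 + q33 + q35 + q39
  management   =~ q11 + q31 + q34 + q37
'
fit <- cfa(model, data = dat, ordered = colnames(dat), estimator = "WLSMV")
fitMeasures(fit, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea"))
standardizedSolution(fit)             # inspect item factor loadings
```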

We examined Cronbach’s alphas, a measure of internal consistency, for the consultation and management scales and compared these with Thombs’ report.
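A brief sketch of this check with the psych package follows; the subscale item groupings are assumptions based on the item numbers reported in the Results, and the data frame is the same hypothetical one used above.

```r
# Sketch: internal consistency (Cronbach's alpha) for each DACC-MH subscale.
# 'items' is the assumed data frame of 0/1 responses used in the CFA sketch.
library(psych)

consult_items <- items[, c("q3", "q33", "q35", "q39")]
manage_items  <- items[, c("q11", "q31", "q34", "q37")]

alpha(consult_items)$total$raw_alpha   # consultation subscale
alpha(manage_items)$total$raw_alpha    # management subscale
```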

We also examined the measurement invariance of the CFA (see Supplementary File 1) and ran the 2-factor model on each country individually using the original dataset with listwise deletion.
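A sketch of such invariance testing in lavaan is shown below, reusing the model and data objects from the earlier CFA sketch and assuming a country grouping variable; it is illustrative only, and the procedure actually used is described in Supplementary File 1.

```r
# Sketch: measurement invariance across countries via increasingly
# constrained multi-group CFAs. 'model' and 'dat' are as in the earlier
# sketch; 'country' is an assumed grouping variable in 'dat'.
library(lavaan)

item_names <- c("q3", "q33", "q35", "q39", "q11", "q31", "q34", "q37")

fit_config <- cfa(model, data = dat, group = "country",
                  ordered = item_names, estimator = "WLSMV")   # configural
fit_metric <- cfa(model, data = dat, group = "country",
                  ordered = item_names, estimator = "WLSMV",
                  group.equal = "loadings")                    # metric
fit_scalar <- cfa(model, data = dat, group = "country",
                  ordered = item_names, estimator = "WLSMV",
                  group.equal = c("loadings", "thresholds"))   # scalar

lavTestLRT(fit_config, fit_metric, fit_scalar)                 # compare nested models
```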

Testing for measurement invariance of Thombs’ 2-factor model between physicians and surgeons was not possible because the model converged for physicians but not for surgeons. We therefore fitted a model with only the consultation factor to the surgeon data and report these results alongside those of the 2-factor model for physicians.

Following Thombs et al., we calculated consultation factor sum scores (maximum 4) for each participant and used one-way analysis of variance to compare physicians, surgeons, and other doctors. A comparable analysis of the management scale was not possible because of its low internal consistency and the lack of between-groups measurement invariance.
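As an illustration, the comparison of consultation sum scores might be run as follows; 'specialty' is an assumed grouping factor, and the pairwise follow-up test is illustrative rather than part of the reported analysis.

```r
# Sketch: consultation sum scores (0-4) compared across specialty groups
# with a one-way ANOVA. 'specialty' is an assumed factor with levels such as
# "physician", "surgeon", and "other"; item columns are 0/1 as above.
dat$consult_sum <- rowSums(dat[, c("q3", "q33", "q35", "q39")])

fit_aov <- aov(consult_sum ~ specialty, data = dat)
summary(fit_aov)       # overall F test
TukeyHSD(fit_aov)      # illustrative pairwise group comparisons
```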

For the EFA, seven questions attracting more than 90% participant agreement were excluded, consistent with Thombs’ method, ensuring that included items had sufficient variance to usefully differentiate participant attitudes. This exclusion also made the data more suitable for factor analysis, increasing the Kaiser-Meyer-Olkin (KMO) index from 0.15 to 0.63. An exception was made for the question ‘I would like to know more about what psychiatrists have to offer in the management of medical or surgical patients’; despite a percentage agreement of 90.2%, it was retained because it was one of the final items in Thombs’ CFA.

To determine the suitability of the dataset for factor analysis, KMO values were examined and Bartlett’s test of sphericity was performed. Examination of the scree plot and parallel analysis (using the fa.parallel function from the psych package) led us to choose 6 factors. EFAs were also run separately for each country (see Supplementary File 1).
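A sketch of this preparation and extraction with the psych package is given below. The item-exclusion step and the six-factor extraction follow the text; the tetrachoric correlations and oblimin rotation are illustrative assumptions, as the specific options are not stated here, and 'all_items' is a hypothetical data frame name.

```r
# Sketch: EFA suitability checks and 6-factor extraction with psych.
# 'all_items' is an assumed data frame of questionnaire items coded 0/1.
library(psych)

agree_rate <- colMeans(all_items, na.rm = TRUE)
efa_items  <- all_items[, agree_rate <= 0.90]    # drop items with > 90% agreement
                                                 # (one exception re-added; see text)

KMO(efa_items)                                   # overall and per-item sampling adequacy
R <- cor(efa_items, use = "pairwise.complete.obs")
cortest.bartlett(R, n = nrow(efa_items))         # Bartlett's test of sphericity

fa.parallel(efa_items, fa = "fa", cor = "tet")   # scree plot and parallel analysis
fa(efa_items, nfactors = 6, rotate = "oblimin", fm = "minres", cor = "tet")
```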

Results

Part 1: confirmatory factor analysis (CFA)

The original Cronbach alphas for Thombs’ data were marginal at 0.67 and 0.65 for the consultation and management factors, respectively. For our data, the Cronbach alpha was comparable (0.65) for consultation but unacceptable (0.34) for management.

The indices in Table 1 indicate adequate fit of the 2-factor model to the international data, although examination of the factor loadings (Table 2) shows relatively modest loadings for the Management Scale items, raising concern about the applicability of this model to the data. The loadings for the Consultation Scale were better, all approximately 0.7 or higher, and more consistent with the UK data.

Table 1 Goodness of Fit measures for the CFA
Table 2 Comparison of standardised item factor loadings and percentage of participants agreeing with each item

The CFA was also completed using only the Consultation Scale to determine if a better fit resulted. The factor loadings for these items remained roughly the same and the fit statistics improved overall (see Table 1).

The 2-factor CFA model converged only for New Zealand, Brazil, Russia, and the Netherlands, so testing of measurement invariance proceeded with these countries only (see Supplementary File 1). The four-country model with no equality constraints (testing configural invariance) showed borderline acceptable fit (χ2(76) = 121.1, p = 0.001, CFI = 0.93, TLI = 0.90, RMSEA = 0.07, 90% CI = (0.04, 0.09)).

The single-factor consultation model for surgeons fit well (χ2(2) = 3.40, p = 0.183, CFI = 0.99, TLI = 0.98, RMSEA = 0.07, 90% CI = (0.00, 0.19)), as did the 2-factor model for physicians (χ2(19) = 23.01, p = 0.237, CFI = 0.96, TLI = 0.93, RMSEA = 0.03, 90% CI = (0.00, 0.06)). However, because the same model could not be fitted to both groups, we did not assume metric (and hence scalar) invariance between physicians and surgeons.

Although we had not established measurement invariance for the consultation scale, we compared sum scores between physicians and surgeons to allow comparison with Thombs’ analysis (see Supplementary File 1). The mean consultation score was 3.3 for surgeons and 3.7 for physicians (p < 0.0001).

Part 2: exploratory factor analysis (EFA)

The dataset (after removal of items with greater than 90% agreement) was found to be suitable for exploratory factor analysis. The overall KMO index was 0.63 and most items (18 of 22) had individual KMO values greater than 0.5. Bartlett’s test of sphericity was significant (χ2(231) = 833, p < 0.0001), indicating that the item correlations were not all zero.

Table 3 Factor loadings

The 6 factors from the EFA explained 50% of the variance in the data. This 6-factor model was compared with Thombs’ 2-factor model (Table 3). Factor 1 from our data was almost identical to Thombs’ consultation factor, with questions 3, 33, 35, and 39 loading onto the factor and question 28 (“I would welcome more time to talk to my patients”) loading rather less strongly. Factor 3 had some similarity to Thombs’ management factor, with questions 34 and 37 loading onto it, but not questions 11 or 31 as found in the original study. The questions loading onto Factors 2 and 4–6 aligned theoretically more with attitudes toward management than consultation.

The number of factors in the individual country EFAs varied from 4 to 8 (see Supplementary File 1). Thombs’ consultation factor was reproduced in the Russian data, with the same four questions (3, 33, 35 and 39) loading onto Factor 1; similar results were seen for Brazil, Israel, and the Netherlands. By contrast, the questions from Thombs’ management factor were spread across various other factors in the seven countries’ EFAs.

Discussion

In testing the validity of the DACC-MH scale using CFA, our international dataset supported the consultation but not the management subscale. This finding indicates that the 2-factor model developed by Thombs et al. does not adequately describe our large, culturally diverse sample.

We used EFA to examine the factor structure in our international data, both overall and in individual country samples. In contrast to Thombs’ model, there was evidence for more than two factors in the total sample, with only the first factor aligning with Thombs’ consultation factor. Analysis of individual country samples likewise showed little consistency with Thombs’ 2-factor model and suggested the presence of other factors; however, given the small sample sizes for individual countries, these factor structures could not be estimated accurately. In addition, our results are subject to the limitations inherent in EFA. Many methods are available for factoring and rotation, each with advantages and disadvantages, and they may generate different results; there are also different ways to choose the number of factors. The results of our EFA should therefore not be considered the only plausible outcome. Since even random data can produce factors in an EFA, it can be difficult to assess how much a factor reflects real-world patterns rather than random variation in the data.

The DACC-MH scale was based on a questionnaire developed in 1986 [10] and used in a UK survey conducted in 2003 [11]. Although ethnic data were not reported, it is highly likely this previous sample was less culturally diverse than our international dataset [15]. In addition to culture, there may be other variables influencing the limited applicability of this scale to our data, such as changes in clinical practice, service delivery models, and medical training over the intervening years.

The previously identified difference between physicians and surgeons on sum scores [12] was replicated in our analysis. As this difference has also been observed in other studies [10,11,12,13,14,15], our findings strengthen the evidence for an enduring attitudinal difference between these two disciplines. However, we were not able to demonstrate measurement invariance between physicians and surgeons, so we could not establish that Thombs’ 2-factor model provides a meaningful way to compare these groups. Additionally, the surgeon group included in the model was small (n = 151) and drawn from multiple countries for which measurement invariance had not been demonstrated, complicating both the assessment of invariance and the comparison of physicians and surgeons.

The finding that the CFA fit Thombs’ data better than the international data is unsurprising, given that the UK data were the sample from which the model was derived. Thombs et al. started with 15 items (5 consultation and 10 management) in the CFA and followed a reduction process (removing items with greater than 90% agreement or with factor loadings less than 0.4) to arrive at the published 2-factor model [12]. The substantially lower loadings for the management subscale in our data suggest that this factor may be specific to the original dataset and not generalisable to other countries and time periods.

Conclusion

The DACC-MH scale developed in the UK has limited applicability and consistency when applied to our larger, international dataset. Cultural differences in attitudes, as well as changes in service delivery models over the intervening years, are possible explanations. Further studies are needed to develop a better understanding of the attitudes of general hospital doctors in culturally diverse settings toward the management of psychiatric comorbidities. Such understanding would help to identify barriers and solutions to the provision of psychiatric care in general hospitals internationally. For example, it could enable delivery of culturally appropriate, tailored education for non-psychiatric medical professionals in order to improve practice and promote optimal psychiatric care in the general hospital setting.

Data availability

The datasets used and analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

DACC-MH: Doctors’ Attitudes Toward Collaborative Care for Mental Health

CLP: Consultation-liaison psychiatry

CFA: Confirmatory factor analysis

EFA: Exploratory factor analysis

CFI: Comparative fit index

TLI: Tucker-Lewis index

RMSEA: Root mean square error of approximation

References

  1. Ghoreishizadeh M, Namin S, Torshizi M. Prevalence of depression among general hospital surgical inpatients and its effects on the length of hospital stay. Res J Biol Sci. 2008;9:1018.


  2. Egede L. Major depression in individuals with chronic medical disorders: prevalence, correlates and association with health resource utilization, lost productivity and functional disability. Gen Hosp Psychiatry. 2007;29(5):409–16.


  3. Bressi SK, Marcus SC, Solomon PL. The impact of psychiatric comorbidity on general hospital length of stay. Psychiatr Q. 2006;77(3):203–9.


  4. Cole MG. Psychiatric–Medical Comorbidity: does depression in older medical inpatients predict mortality? A systematic review. Gen Hosp Psychiatry. 2007;29:425–30.


  5. Krautgartner M, Alexandrowicz R, Benda N, Wancata J. Need and utilization of psychiatric consultation services among general hospital inpatients. Soc Psychiatry Psychiatr Epidemiol. 2006;41(4):294–301.


  6. Ruddy R, House A. A standard liaison psychiatry service structure? A study of the liaison psychiatry services within six strategic health authorities. Psychiatr Bull. 2003;27(12):457–60.


  7. Sakhuja D, Bisson JI. Liaison psychiatry services in Wales. Psychiatr Bull. 2008;32(4):134–6.


  8. Solomons LC, Thachil A, Burgess C, Hopper A, Glen-Day V, Ranjith G, Hodgkiss A. Quality of psychiatric care in the general hospital: Referrer perceptions of an inpatient liaison psychiatry service. Gen Hosp Psychiatry. 2011;33(3):260–6.


  9. Katon W, Unutzer J. Collaborative care models for depression: time to move from evidence to practice. Arch Intern Med. 2006;166:2304–6.


  10. Mayou R, Smith EB. Hospital doctors’ management of psychological problems. Br J Psychiatry. 1986;148:194–7.


  11. Morgan JF, Killoughery M. Hospital doctors’ management of psychological problems - Mayou & Smith revisited. Br J Psychiatry. 2003;182:153–7.


  12. Thombs BD, Adeponle AB, Kirmayer LJ, Morgan JF. A brief scale to assess hospital doctors’ attitudes toward collaborative care for mental health. Can J Psychiatry. 2010;55(4):264–7.


  13. Wang J, Wang Q, Wimalaratne IW, Menkes DB, Wang X. Chinese non-psychiatric hospital doctors’ attitudes toward management of psychological/psychiatric problems. BMC Health Serv Res. 2017;17:576.


  14. Nauta K, Boenink AD, Wimalaratne IW, Menkes DB, Mellsop G, Broekman BFP. Attitudes of general hospital consultants towards psychosocial and psychiatric problems in Netherlands. Psychol Health Med. 2019;24(4):402–13.

  15. Wimalaratne IK, McCarthy J, Broekman BFP, et al. General hospital specialists’ attitudes toward psychiatry: a cross-sectional survey in seven countries. BMJ Open. 2021. https://doi.org/10.1136/bmjopen-2021-054173.


  16. R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria; 2023. https://www.R-project.org/.


Acknowledgements

We thank Associate Professor Jane McCarthy for discussions, and the study coordinators and respondents in the seven participating countries for their contribution to the previous study [15]. This study constitutes part of Dr Inoka Wimalaratne’s PhD thesis at the University of Auckland.

Funding

Not applicable.

Author information

Authors and Affiliations

Authors

Contributions

IW and DBM conceived of the project; JM performed the statistical analysis; IW wrote the first draft of the manuscript; all authors reviewed, edited, and approved the final manuscript.

Corresponding author

Correspondence to Inoka Wimalaratne.

Ethics declarations

Ethics approval and consent to participate

Formal ethical approval was not required since the data used in this analysis were from our 2021 international study [15] that did not involve patients, patient information, invasive procedures, or treatments.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Wimalaratne, I., McLay, J. & Menkes, D.B. Assessing general hospital doctors’ attitudes toward psychiatric care in multicultural settings. BMC Res Notes 17, 125 (2024). https://doi.org/10.1186/s13104-024-06788-7

