
Quality in intensive care units: proposal of an assessment instrument



There is an increasing need for standardized instruments for quality assessment that are able to reflect the actual conditions of the intensive care practices, especially in low and middle-income countries. The aim of this article is to describe the preparation of an instrument for quality assessment of adult intensive care services adapted to the actual conditions of intensive care in a middle-income country and comprising indicators validated in the literature.


The study consisted of five steps: (1) a literature survey; (2) a discussion with specialists by consensus method; (3) a pilot field test; (4) a description of indicators; and (5) an elaboration of the final version of the instrument. Each generated indicator was attributed a score (“out of standard” = 0; “below standard” = 1; “standard” = 2) that allowed calculation of the total score for each service assessed.


A total of 62 indicators were constructed, distributed as follows: 38 structure indicators (physical structure: 4; human resources: 14; continued education and training: 2; protocols and routines: 12; material resources: 6); 17 process indicators (safety: 7; work: 10); and seven outcome indicators. The maximum possible total score was 124.


Possible future applications of the instrument for the assessment of intensive care units that was constructed in the present study include benchmarking, multicenter studies, self-assessment of intensive care units, and evaluation of changes resulting from interventions.


Within the field of the health sciences, “to assess” means to perform a value judgment of a health program, service, intervention or any of their components in a manner that contributes to decision-making [1]. Intensive care units (ICU) are services that unite several fields of knowledge, technologies, and diagnostic and therapeutic methods; within such a context, evaluations are highly relevant [2].

One of the main uses of assessments is to promote the improvement of the care provided at a given unit. Scientific evidence derived from previous experiences with the implementation of systematic collection of quality indicators shows that it is associated with improving the performance of the corresponding services [3]. Therefore, critical evaluation of processes and outcomes is a crucial step in the improvement of health services.

Despite being widely used in the United States (US), health evaluations were not immediately accepted in Latin America, where their adoption only gained momentum starting in the 1990s. Several factors were adduced to account for that fact, including local economic and social conditions, a lack of specialized professionals, and a strong culture of authoritarianism and clientelism that persists to this day [4].

Different from the tradition of evaluation in the US, which has been historically centered on the technical quality of healthcare in the hospital setting [5], in Latin America as a whole and, more particularly, in Brazil, evaluation most often focuses on primary care and outpatient programs and services—principally, the ones corresponding to the public sector. Consequently, several relevant programs and countless features related to in-hospital care are not included in such evaluations, despite receiving the lion’s share of the healthcare budget [6].

Several countries, such as the Netherlands, France, Spain, Italy, and Germany, as well as joint study groups from the European Society of Intensive Care Medicine (ESICM) and the Society of Critical Care Medicine (SCCM), have formulated quality indicators and systematized instruments to assess performance in the intensive care medicine setting [2, 3, 7,8,9]. Although many such indicators have global application, there is an increasing need for standardized local instruments for quality assessment, including indicators robustly adapted to local legislation that are able to reflect the actual conditions of the practices applied in low and middle-income countries. Nevertheless, although more than 50 years have passed since assessment tools were first applied in practice, public policies supporting the formulation and routine use of standardized instruments for the assessment of healthcare services remain scarce, as do the available modalities for using the information gathered to date [10].

The aim of the present article is to describe the preparation of an instrument for the assessment of adult intensive care services adapted to the predominant norms in Brazil, while also including indicators validated in the international literature on this subject.

The present study is justified by the need for assessment tools capable of providing information on the actual state of healthcare practices, their relationship with current local norms, and the results achieved by intensive care medicine. The authors expect that the information thus produced will be crucial for meeting new challenges and for formulating novel strategies for healthcare improvement, lending support to the decisions made by managers and thus improving the health care provided to the users of these services.


The preparation of the assessment instrument lasted from January to November 2013. The process of elaboration consisted of five steps: (1) a literature survey; (2) a discussion with specialists; (3) a pilot field test; (4) a description of indicators; and (5) the formulation of the final version of the instrument.

Some of the indicators included in the assessment instrument were identified through a survey of the specific literature on the subject in the PubMed and SciELO databases using the keywords “intensive care unit,” “structure,” “process,” “outcome,” and “quality”; the search was limited to articles published after 1995. Other indicators were formulated based on norms currently established in Brazil, such as Normative Instruction no. 4 and the Collegiate Board of Directors Resolution (Resolução da Diretoria Colegiada—RDC) nos. 07, 26, and 50. Criteria susceptible to measurement and representative of their corresponding attributes were formulated relative to those norms.

Whenever possible, the “standard” considered for each investigated attribute was its presence at or above the minimum cutoff point indicated by the norm. Whenever an attribute was not measured or was absent, it was classified as “out of standard.” The assessment instrument also included a third criterion (“below standard”) for attributes that were present or measured but fell below the minimum cutoff point established by the norm. The criteria were attributed scores (“out of standard” = 0; “below standard” = 1; “standard” = 2) so that, at the end of data collection, the total score of each investigated service could be calculated, as could its performance in the various investigated dimensions. The use of the terms “norm,” “criterion,” and “standard” in the present study follows Donabedian’s description [11].
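The scoring scheme described above can be sketched in a few lines of code. This is an illustrative example only: the indicator names below are hypothetical, not the instrument's actual items.

```python
# Hypothetical sketch of the 0/1/2 scoring scheme; indicator names are
# invented for illustration and do not reproduce the instrument's items.
SCORES = {"out of standard": 0, "below standard": 1, "standard": 2}

def total_score(responses):
    """Sum the scores of a service's rated criteria.

    `responses` maps each indicator to one of the three criteria.
    """
    return sum(SCORES[criterion] for criterion in responses.values())

responses = {
    "24-h intensivist availability": "standard",       # 2
    "nurse-to-bed ratio": "below standard",            # 1
    "VAP prevention protocol": "out of standard",      # 0
}
print(total_score(responses))  # 3
```

With 62 indicators worth up to 2 points each, the same summation yields the instrument's maximum possible total of 124.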

Following their formulation, the indicators were divided into three categories: structure, process, and outcome [4]. Each category was divided into subcategories as needed: structure into physical structure, human resources, continued education and training, protocols and routines, and material resources; and process into safety processes and work processes.

After this first stage, on September 17, 2013, in São Luís, Maranhão, Brazil, a panel of doctors with official accreditation in intensive care medicine and working at public or private ICUs in Maranhão met to discuss the initial version of the instrument designed for data collection. The aim of that meeting was to introduce any necessary modifications and additional questions or to remove indicators as needed. Only the reviewed criteria that achieved 100% consensus among the specialists were retained in the document. At the end of the meeting, all the participants signed the minutes in which the results of the discussion were recorded.

Next, a pilot study was conducted in which the consensus form was applied at three ICUs located in other Brazilian states (two in the city of Rio de Janeiro, Rio de Janeiro, and the third in Florianópolis, Santa Catarina), selected by convenience sampling (invitation and voluntary acceptance to participate). The ICUs were invited taking into account the different stages of development they were in, in order to test the discriminative capability of the instrument (calibration). The pilot study also aimed to assess the need for adjustments, additional changes, inclusions, and exclusions; to test the feasibility of applying the instrument; and to measure the time required to complete it. For that purpose, a fill-in manual aiming to guide researchers during interviews was prepared.


Table 1 shows publications taken into account for the construction of the current assessment tool [2, 3, 7,8,9, 12,13,14,15,16,17,18,19,20].

Table 1 Authors, year of publication and indicators used in the construction of the assessment tool for adult intensive care units

Figure 1 shows the number of indicators at each stage of the construction process of the assessment tool.

Fig. 1 Number of indicators at each stage of the construction process of the assessment tool

The first version of the instrument was expanded based on the indicators mentioned in the literature and those included in the norms currently in use in Brazil. That version included 86 indicators distributed across the various dimensions. The authors then revised all 86 listed indicators, which resulted in 26 exclusions, leaving 60 indicators to move to the next stage.

The consensus meeting included nine specialists in intensive care. At that meeting, all 60 indicators listed in the previous stage reached 100% consensus, and two further indicators were added to the instrument, for a total of 62 indicators: 25 derived from the literature review and 37 newly built based on local norms. The description of one indicator was reformulated to improve its comprehensibility and to reduce the odds of subjective interpretation by the respondents. For the outcome dimension, the answer option “( )—Not reported” was added for cases in which an attribute was measured and available for consultation but was not reported to the interviewer for administrative reasons intrinsic to each particular institution. It was further decided that responses indicating that option would not be included in the calculation of the final score of the corresponding service, and the indicator would be excluded from the overall count.
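The handling of the “Not reported” option can be sketched as follows. This is a hypothetical illustration assuming that an excluded indicator is removed from both the service's total score and its achievable maximum; the indicator names are invented for the example.

```python
# Illustrative sketch: "not reported" responses are dropped from both the
# numerator (total score) and the denominator (achievable maximum), so a
# non-reporting service is not penalized. Indicator names are hypothetical.
SCORES = {"out of standard": 0, "below standard": 1, "standard": 2}
MAX_PER_INDICATOR = 2

def scored(responses):
    """Return (total score, achievable maximum) excluding unreported items."""
    rated = {k: v for k, v in responses.items() if v != "not reported"}
    total = sum(SCORES[v] for v in rated.values())
    achievable = MAX_PER_INDICATOR * len(rated)
    return total, achievable

responses = {
    "standardized mortality ratio": "standard",    # counted: 2 of 2
    "ICU readmission rate": "not reported",        # excluded entirely
    "mean length of stay": "below standard",       # counted: 1 of 2
}
print(scored(responses))  # (3, 4)
```

Reporting the achievable maximum alongside the total keeps services with excluded indicators comparable to those scored on all items.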

Table 2 describes the distribution of each section of the final version of the instrument across the various dimensions and their maximum scores; the maximum possible score was 124.

Table 2 Distribution of scores among the various dimensions, the numbers of indicators, and the maximum scores on the quality assessment instrument

The field test showed that the quality assessment instrument was well accepted and easy to understand; the time needed to respond to it was 20–30 min on average. No additional inclusions or exclusions were made. The descriptions of 16 (25.8%) indicators were improved to make them more easily understandable by the interviewees and to increase the precision of the responses. The final version of the assessment tool is shown in Figs. 2, 3, 4, 5, 6 and 7. The final description of the indicators is available for consultation as Additional file 1 (data not shown).

Figs. 2–7 Final version of the quality assessment instrument


Compared to European countries and the US, the number of standardized instruments to evaluate healthcare services in Brazil and other developing countries is still small. However, to improve the quality of health care, tools able to measure its various dimensions accurately are necessary [21]. The present article describes the construction of an instrument for the assessment of intensive care services adapted to Brazilian norms and inclusive of indicators sanctioned in the specialized literature. In addition, new indicators based on enacted Brazilian legislation were suggested.

Quality indicators comprise one of the pillars used to make improvements in healthcare services. The systematic use of such indicators allows the detection of opportunities for improvement and deviations from pre-established standards [7]. A lack of instruments for the assessment of quality and a lack of governmental support to formulate such instruments hinder aspirations to improve the quality of health care [10].

The present study has some limitations: (a) the lack of a gold standard against which to compare the proposed instrument; (b) the use of a single approach to tool construction and data collection, without other techniques (field observations, reviews of clinical records, interviews with healthcare providers or users) or the participation of other actors of the health system in the tool construction; nevertheless, valid and reliable information might also be produced with a single approach, and at an advantageously low cost [10]; (c) the larger number of indicators in the structure dimension compared to the process dimension, a consequence of the strong presence of structure-related norms in Brazilian legislation; this idiosyncrasy stems from a historical concern with the instrumental quality of services at the expense of processes, and it is also very difficult to select process indicators that truly represent work routines across the various Brazilian regions [8]; (d) the difficulty of establishing cutoff points for the indicators representing health care-associated infections (HAI), owing to the lack of large-scale prevalence studies in Brazilian ICUs; (e) possible interference with the evaluation results due to interviewer or information bias; and (f) the fact that “quality” is a construct, i.e., an unobservable theoretical concept that cannot be measured directly; from this perspective, indicators serve as an indirect (“proxy”) measure.

Those limitations notwithstanding, the final result might be appraised positively for the following reasons: (a) the use of judicious methods in the elaboration of the instrument, including the participation of specialists in intensive care medicine and the inclusion of only those indicators that attained 100% consensus; (b) the inclusion of indicators already sanctioned in the international literature together with the construction of others representing the predominant norms in Brazil, resulting in an instrument particularly adapted to reflect local healthcare practices; (c) the attribution of scores to the criteria, allowing comparison of the performance of any one service over time or against other services (benchmarking); (d) the inclusion of the criterion “below standard,” broadening the scope of possible answers beyond the mere presence or absence (“yes,” “no”) of the investigated attribute; (e) the division of the instrument into sections, which allows services to identify their weak points and opportunities for improvement in the assessed dimensions; and (f) its low cost and the ease and short time (30 min) required for its application.

Use of interviews as an assessment instrument has been validated in low and middle-income countries. The sensitivity and specificity of that technique for assessing the quality of healthcare services are high compared to methods such as reviews of clinical records and direct observation, which are limited by missing data and interexaminer variability within this scenario. As a result, interviews provide an efficient means for assessment in countries whose healthcare system is not yet fully developed [22].

In a publication from 2008, Najjar-Pellet et al. described the construction and validation of an instrument to assess French ICUs, which included indicators of structure and process and use of a method similar to the one used for the instrument described in the present article [2]. The instrument formulated by those authors allows scores to be attributed to the assessed services, as the one described here does, but it differs from the latter in that it does not include outcome indicators.

The present document differs from the ones constructed in the Netherlands, in Germany, and by the ESICM in that it allows scores to be attributed to the assessed services, while the latter only establish the presence or absence of a given attribute, without any value judgment of it or of the final result of the evaluation of the investigated ICU.

During the construction of the assessment instrument, the authors sought to include the largest possible number of structure and process indicators for which there is documented scientific evidence of a correlation with, and impact on, the outcomes to be studied [12]. In this regard, we call attention to the inclusion of some relevant indicators. A review performed in 2002 by Pronovost et al. showed that the mortality rate and length of stay decreased in ICUs with 24-h availability of intensivists [17]. Some scientific evidence also indicates that higher nurse staffing levels are associated with better provision of healthcare; that association might be captured by indicators such as the lower readmission rates observed in hospitals with larger nursing staffs compared to those with smaller ones [23].

In a publication from 2003, Pronovost et al. reported the results of a methodologically rigorous study showing that failure to use standardized processes, such as appropriate sedation, ventilator-associated pneumonia (VAP) prevention, gastrointestinal bleeding prophylaxis, venous thromboembolism (VTE) prophylaxis and appropriate use of blood transfusions, was associated with poorer outcomes, including increased ICU length of stay and mortality [18]. Those findings give further support to the need for protocols specifically formulated for such clinical situations in ICUs, as well as to the relevance of appropriate adherence to them.

The inclusion of indicators representing a unit’s policy on patient satisfaction and the presence of relatives is based on studies that reported an effect of those variables on outcomes. Systematic assessment of satisfaction and increased presence of relatives in ICUs are associated with improvements in the quality of care [24]. Those indicators might also be used to assess the quality of care and communication at a given unit [19, 21].

With regard to the process dimension, the assessment of daily multidisciplinary rounds for case discussion was considered to be highly relevant. Kim et al. showed that the performance of such rounds is associated with lower mortality rates. The results of that study, published in 2010, are highly significant because they show that such effects occur even in units without available intensivists. Given the scarcity of intensivists, and because implementation of this process involves little or no additional cost, daily multidisciplinary rounds for case discussion are a high-impact strategy that ought to be adopted in critical care services, particularly in developing countries [14].

Some articles emphasize the relevance of including specific outcome indicators in assessment instruments. In a meta-analysis published in 1997, Ashton et al. reported that the ICU readmission rate is a satisfactory indicator of the quality of processes related to the care provided over the course of hospital stays. Reduction of the quality of such processes is associated with up to a 55% increase in the risk of readmission [16].

Gastmeier et al. showed that participation in HAI surveillance systems is associated with a significant reduction in their occurrence [25]. Thus, we might conclude that not only the HAI rates as such but also the systematic collection of the corresponding data at participating ICUs and institutions are satisfactory quality indicators. Similarly, in a publication from 2008, Uçkay et al. asserted that the availability of protocols and the surveillance of VAP prevalence in ICUs perform satisfactorily as indicators [13].

A constant challenge during the construction of the assessment instrument described here was to balance the validity and reliability of the constructed indicators against the availability of the corresponding data and the workload demanded by their collection. One relevant issue that should be borne in mind is that, in addition to aspects related to structure and process, the clinical condition of the patients admitted to the ICU exerts a strong influence on outcomes [3].

Many indicators that seemed useful in clinical practice were nonetheless excluded from the final version of the instrument. The reason for those exclusions was the rigorous methodological decision to include only the indicators that had achieved 100% consensus among the specialists. One further concern was to construct an instrument that would be easy to apply and would take possible regional differences into consideration.

The assessment instrument described here should be understood within a dynamic context. Consequently, we emphasize that this is a work still under construction. We expect that, following the publication of this study, the scientific community and users will suggest new contributions that might be incorporated into the instrument, improving its application in different scenarios. That is to say, indicators eventually shown to lose clinical relevance over time, or those no longer exhibiting variability, ought to be excluded; conversely, new scientific evidence might indicate the need to include other indicators. Within that context, the systematic collection of quality indicators should not be understood as a goal unto itself but as a means to detect weak points in the system and opportunities for improvement [7]. The data thus gathered allow for planning actions aimed at correcting the weak points detected.


In the present article, we described a tool specifically constructed to assess quality of ICUs. Possible future applications of that instrument include benchmarking, multicenter studies, self-assessment of participating ICUs and assessment of the changes resulting from interventions. The instrument described here was intended to be adapted to the actual conditions in Brazil and other low and middle-income countries.



Abbreviations

ESICM: European Society of Intensive Care Medicine

HAI: health care-associated infections

ICU: intensive care unit

SCCM: Society of Critical Care Medicine

US: United States

VAP: ventilator-associated pneumonia

VTE: venous thromboembolism


  1. Contandriopoulos AP. A avaliação na área da saúde: conceitos e métodos. In: Fiocruz, editor. Avaliação em saúde. Rio de Janeiro; 1997. p. 29–48.

  2. Najjar-Pellet J, Jonquet O, Jambou P, Fabry J. Quality assessment in intensive care units: proposal for a scoring system in terms of structure and process. Intensive Care Med. 2008;34(2):278–85.


  3. de Vos M, Graafmans W, Keesman E, Westert G, van der Voort PH. Quality measurement at intensive care units: which indicators should we use? J Crit Care. 2007;22(4):267–74.


  4. Minayo MCS. Pesquisa Avaliativa por Triangulação de Métodos. In: Vozes, editor. Avaliação Qualitativa de Programas de Saúde Enfoques Emergentes. Petrópolis; 2006. p. 163–90.

  5. Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260(12):1743–8.


  6. Mercado FJ, Hernándes N, Tejada LM, Springett J, Calvo A. Avaliação de Políticas e Programas de Saúde: Enfoques Emergentes na Íbero-América no Início do Século XXI. In: Vozes, editor. Avaliação Qualitativa de Programas de Saúde Enfoques Emergentes. Petrópolis; 2006. p. 22–64.

  7. Martín MC, Gil CL, de la Hoz JC, Herrejón EP, Sáez FN, Varela JB, et al. Sociedad Española de Medicina Intensiva Crítica y Unidades Coronarias—Indicadores de Calidad en el Enfermo Crítico Actualización 2011. 2011.

  8. Rhodes A, Moreno RP, Azoulay E, Capuzzo M, Chiche JD, Eddleston J, et al. Prospectively defined indicators to improve the safety and quality of care for critically ill patients: a report from the Task Force on Safety and Quality of the European Society of Intensive Care Medicine (ESICM). Intensive Care Med. 2012;38(4):598–605.


  9. Braun JP, Mende H, Bause H, Bloos F, Geldner G, Kastrup M, et al. Quality indicators in intensive care medicine: why? Use or burden for the intensivist. Ger Med Sci. 2010;8:Doc22. doi:10.3205/000111.


  10. Brook RH, McGlynn EA, Shekelle PG. Defining and measuring quality of care: a perspective from US researchers. Int J Qual Health Care. 2000;12(4):281–95.


  11. Donabedian A. Criteria, norms and standards of quality: what do they mean? Am J Public Health. 1981;71(4):409–12.


  12. Berenholtz SM, Dorman T, Ngo K, Pronovost PJ. Qualitative review of intensive care unit quality indicators. J Crit Care. 2002;17(1):1–12.


  13. Uckay I, Ahmed QA, Sax H, Pittet D. Ventilator-associated pneumonia as a quality indicator for patient safety? Clin Infect Dis. 2008;46(4):557–63.


  14. Kim MM, Barnato AE, Angus DC, Fleisher LA, Kahn JM. The effect of multidisciplinary care teams on intensive care unit mortality. Arch Intern Med. 2010;170(4):369–76.


  15. Dendle C, Martin RD, Cameron DR, Grabsch EA, Mayall BC, Grayson ML, et al. Staphylococcus aureus bacteraemia as a quality indicator for hospital infection control. Med J Aust. 2009;191(7):389–92.


  16. Ashton CM, Del Junco DJ, Souchek J, Wray NP, Mansyur CL. The association between the quality of inpatient care and early readmission: a meta-analysis of the evidence. Med Care. 1997;35(10):1044–59.


  17. Pronovost PJ, Angus DC, Dorman T, Robinson KA, Dremsizov TT, Young TL. Physician staffing patterns and clinical outcomes in critically ill patients: a systematic review. JAMA. 2002;288(17):2151–62.


  18. Pronovost PJ, Berenholtz SM, Ngo K, McDowell M, Holzmueller C, Haraden C, et al. Developing and pilot testing quality indicators in the intensive care unit. J Crit Care. 2003;18(3):145–55.


  19. Neves FBCS, Dantas MP, Bitencourt AGV, Vieira PS, Magalhães LT, Teles JMM, et al. Análise da satisfação dos familiares em unidade de terapia intensiva. Revista Brasileira de Terapia Intensiva. 2009;21:32–7.


  20. Lobo RD, Levin AS, Oliveira MS, Gomes LM, Gobara S, Park M, et al. Evaluation of interventions to reduce catheter-associated bloodstream infection: continuous tailored education versus one basic lecture. Am J Infect Control. 2010;38(6):440–8.


  21. Wall RJ, Engelberg RA, Downey L, Heyland DK, Curtis JR. Refinement, scoring, and validation of the family satisfaction in the intensive care unit (FS-ICU) survey. Crit Care Med. 2007;35(1):271–9.


  22. Hermida J, Nicholas DD, Blumenfeld SN. Comparative validity of three methods for assessment of the quality of primary health care. Int J Qual Health Care. 1999;11(5):429–33.


  23. McHugh MD, Berez J, Small DS. Hospitals with higher nurse staffing had lower odds of readmissions penalties than hospitals with lower staffing. Health Aff. 2013;32(10):1740–7.


  24. Souza SROS, Silva CA, Mello ÚM, Ferreira CN. Aplicabilidade de indicador de qualidade subjetivo em Terapia Intensiva. Revista Brasileira de Enfermagem. 2006;59:201–5.


  25. Gastmeier P, Geffers C, Brandt C, Zuschneid I, Sohr D, Schwab F, et al. Effectiveness of a nationwide nosocomial infection surveillance system for reducing nosocomial infections. J Hosp Infect. 2006;64(1):16–22.



Authors’ contributions

AGRC: initial idea, study planning, interpretation of results, writing of the manuscript, review and approval of the final version. APPM, AAMS: initial idea, study planning, review and approval of the final version. LMST, RVG: review and approval of the final version. All authors read and approved the final manuscript.


We thank the following colleagues for their relevant participation in the elaboration of the assessment instrument: Ana Cláudia Pinho de Carvalho, MD; Clayton Aragão Magalhães, MD; Fábio Gomes Teixeira, MD; Fernando Graça Aranha, MD; Keila Regina Santos Cruz, MD; Lea Barroso Coutinho, MD; Lívia Goreth Galvão Serejo Alvares, MD, MSc; Rosimarie Morais Salazar, MD.

Competing interests

The authors declare that they have no competing interests.

Availability of data and materials

This is a methodological description study. All datasets and documents supporting the conclusions of this article are held by the first author.

Ethics approval and consent to participate

In compliance with ethical requirements, the study was approved by the Research Ethics Committee of University Hospital, Federal University of Maranhão, Brazil, under reference number 289.199. The technical managers of the participating ICUs signed an “Authorization for Participation in a Research Project” form, and all of the volunteers who provided responses to the assessment instrument signed an informed consent form before being interviewed.

Participating ICUs:

  • General ICU—UDI Hospital, São Luís—MA;

  • General ICU—Hospital de Urgência e Emergência Doutor Clementino Moura, São Luís—MA;

  • General ICU 1—Hospital Dr. Carlos Macieira, São Luís—MA;

  • Cardiovascular ICU—Hospital Dr. Carlos Macieira, São Luís—MA;

  • General ICU—Hospital Geral Tarquínio Lopes Filho, São Luís—MA;

  • General ICU—Hospital São Luiz, São Luís—MA;

  • General ICU—Hospital Universitário da Universidade Federal do Maranhão, São Luís—MA;

  • Cardiovascular ICU—Hospital Universitário da Universidade Federal do Maranhão, São Luís—MA;

  • General ICU—Hospital Macrorregional de Coroatá, Coroatá—MA;

  • General ICU—Hospital SOS Cárdio, Florianópolis—SC;

  • General ICU—Hospital Copa D’Or, Rio de Janeiro—RJ;

  • General ICU—Hospital Unimed-Rio, Rio de Janeiro—RJ.


UDI Hospital Education and Research Department funded the study.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information



Corresponding author

Correspondence to Alexandre Guilherme Ribeiro de Carvalho.

Additional file

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

de Carvalho, A.G.R., de Moraes, A.P.P., Tanaka, L.M.S. et al. Quality in intensive care units: proposal of an assessment instrument. BMC Res Notes 10, 222 (2017).

