The CalculAuthor: determining authorship using a simple-to-use, fair, objective, and transparent process

Abstract

Authorship determination for research articles remains a largely subjective process. Existing guidelines on authorship taxonomy lack objectivity and are more useful in determining who deserves authorship than in determining the order of authors. To promote best practices in authorship taxonomy, we developed an authorship rubric that provides a fair, objective, and transparent means of crediting authorship. We christened this tool the “CalculAuthor”. The following steps are undertaken to create a scoring system based on the requirements of the project: determining creditable criteria, assigning credit weightages, deciding levels of contribution, determining each author’s contribution, calculating authorship scores, and creating the authorship ranking. These must be performed by or in close collaboration with the primary investigator (PI), with conflicts being resolved at the PI’s discretion. All team members should be informed about the authorship determination process early in the project, and their agreement regarding its use must be obtained. While the CalculAuthor was developed for use in medical research, its customizability enables it to be employed in any field of academia. We recommend that the CalculAuthor be piloted within institutions before its mainstream adoption, and that any institution-specific factors be considered to make the process more efficient and suitable.

Introduction

While the conclusion of a research project is accompanied by feelings of accomplishment stemming from the culmination of one’s hard work, there is also an expectation of being rewarded for one’s efforts with a fair authorship position. However, the grim reality in many research teams across the world is that authorship determination remains a largely subjective process [1]. Authorship conventions vary across academic disciplines, countries, institutions, and even amongst research groups within the same discipline [2]. These nuances are captured by the authorship guidelines published by relevant bodies within different disciplines (e.g., the International Committee of Medical Journal Editors in the biomedical sciences, the American Sociological Association in the social sciences, and the American Physical Society in physics) [3]. Some fields, most notably economics, employ alphabetical sequencing when listing authors [4]. However, this practice has problematic repercussions: it gives an unfair advantage to researchers whose last names begin with letters early in the alphabet, and this “alphabetical discrimination” makes researchers wary of whom they collaborate with, so as to secure a higher authorship rank [5, 6]. We have captured some of these subtleties amongst the STEM (science, technology, engineering, and mathematics) fields in Table 1.

Table 1 Connotations of authorship in the various STEM fields (adapted from a community discussion on Academia Stack Exchange [20])

Some key considerations for authorship designation were identified by Marušić et al., namely the proper definition of authorship criteria, the implications of authorship sequence, and authorship practices in collaborative research projects [7]. Guidelines on authorship taxonomy in the biomedical sciences have been published by the International Committee of Medical Journal Editors (ICMJE) [8] and the Contributor Roles Taxonomy (CRediT) [9]. Holcombe et al. also developed tenzing, a web application and R package that helps facilitate the reporting of contributorship information in manuscripts and journal articles [10]. However, the ICMJE and CRediT frameworks lack objectivity and are more useful in determining whose contributions warrant authorship than in determining the order of authors. Holcombe et al. note the inability to capture the degree of contribution as a key limitation of CRediT and, by extension, tenzing [10]. Moreover, the ICMJE criteria have been criticized as being unduly restrictive, harsh, and difficult to realistically follow [11, 12]. At the same time, improper adherence to objective authorship criteria may give rise to unethical academic practices such as ghost authorship and guest authorship. Often, academic hierarchy and institutional seniority supersede actual contributions, with the existing system rarely being challenged. Not being suitably credited inevitably leads to feelings of frustration, demotivation, and a distaste for medical research as a whole [1]. The Committee on Publication Ethics (COPE) has outlined several suggestions for authors to negotiate terms of authorship and resolve authorship issues [3]. It has also stressed the responsibility of institutions and journals to recognize suboptimal authorship practices [3]. Furthermore, an approach described by Tscharntke et al. to address author contribution challenges involves explicitly indicating the methods used to determine authorship, which can help avoid conflicts and increase satisfaction amongst authors [13]. Given the ubiquity of this issue in the realm of academia, there is an urgent need to explore improvements to the existing system.

The Center for Clinical Best Practices (CCBP) at the Aga Khan University (AKU) in Pakistan is tasked with the standardization of clinical care and academic standards at AKU. In order to promote best practices in authorship taxonomy for CCBP and other institutional research projects, the CCBP team created and piloted an authorship rubric that provides a fair, objective, and transparent means of crediting authorship. In this commentary, we describe the process of development of this innovative authorship calculation algorithm, christened the “CalculAuthor”.

Approach and outcomes

Our algorithm outlines the following steps, to be followed sequentially for each individual research project, as the creditable criteria, criteria weightages, and levels of contribution are expected to differ from project to project. These steps must be performed by or in close collaboration with the primary investigator (PI). Ideally, all team members should be informed about the authorship determination process before the commencement of a study, and their general agreement regarding its use must be obtained.

  1. Determining Creditable Criteria for Authorship: A list of criteria encompassing all aspects of the research project must first be drawn up. These criteria may be founded upon those provided by the ICMJE [8] and CRediT [9], with criteria being modified, added, or deleted as deemed necessary for a particular research project. It is advisable to request the entire research team to review the list, so as to ensure that no creditable criteria have been overlooked. Moreover, a greater degree of specificity in defining criteria allows the contributions of individuals working on similar aspects of the project to be assessed more comprehensively. In other words, it is recommended that broader domains, such as “manuscript writing”, be further subdivided into well-defined tasks to ensure that appropriate credit is given for different responsibilities under the same overall domain of the project.

  2. Assigning Credit Weightages to Criteria: Weightages totaling 100 points must be distributed across the criteria. This can be achieved by first scoring each criterion in a range of 1 (least weightage) to 10 (most weightage) points, keeping in mind the relative effort and expertise required for the tasks included in each criterion, and then scaling these scores to a total of 100 points (see the worked sketch after this list). Scaling to a fixed total of 100 points keeps the rubric mathematically consistent and discourages retrospective changes to the weightages of individual criteria. The attribution of weightages should be done by the primary investigator and any other team members closely involved with project supervision and oversight, preferably at the beginning of the project. Once weightages have been assigned to each criterion, it is preferable to request the entire research team to review them and provide their general agreement. If, at the end of the project, any of the authors feel that the weightages warrant rethinking, the PI may choose to modify them and re-solicit the authors’ agreement.

  3. Levels of Contribution: The levels of contribution most appropriate for the specific project must be decided upon, with each level securing a fixed percentage of the total points available for each criterion. At our institution, we have successfully used Major (100% of points, i.e., a multiplication factor of 1), Minor/Moderate (50% of points; multiplication factor = 0.5), and No Contribution (0% of points; multiplication factor = 0). Other iterations, such as Major (100%), Moderate (50%), Minor (25%), and No Contribution (0%), could be considered. The degree of involvement that constitutes (and differentiates) specific levels of contribution depends entirely on the nature of each creditable criterion (e.g., the number of patient records collected during data acquisition), and should thus be decided on by the team collectively at the start of a project.

  4. Determining Each Author’s Contribution: Each author must be asked to independently categorize their level of contribution for each criterion. To promote transparency, this self-scoring should take place on an online spreadsheet, such as Google Sheets or Microsoft Excel Online, shared with each author via email. Online spreadsheets additionally offer a useful feature whereby changes made by the authors can be tracked. The PI must then review each author’s self-reported contributions for accuracy. Conflicts regarding authors’ contributions can be resolved through discussion with the PI.

  5. Calculating Authorship Scores: An author’s points for each criterion are calculated by multiplying the total available points for that criterion by the multiplication factor of the author’s level of contribution. The total authorship score (out of 100) is obtained by summing these points across all criteria. The calculation can be easily automated using Microsoft Excel or Google Sheets formulae (a worked sketch follows this list).

  6. Creating the Authorship Ranking: The total authorship scores calculated in the previous step are arranged in descending order to obtain the authorship ranking. In the event of tied rankings due to an equal number of points, the order of the concerned authors can be left to the PI’s discretion after they have judiciously and holistically evaluated the tied authors’ contributions to the project. As per convention, the PI, if the senior author, may opt to be placed at the end of the authorship list. The final ranking should be reviewed by all authors in the research team, and any dissatisfaction may be resolved through discussion with the PI. Agreement on the final authorship ranking must be recorded for all authors, preferably with their signatures.
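To make the scoring arithmetic concrete, below is a minimal Python sketch of steps 2 through 6, as referenced in the list above. An author’s total score is simply the sum, across all criteria, of each criterion’s weightage multiplied by the author’s contribution factor for that criterion. All criteria names, raw weightages, and contribution levels in this sketch are hypothetical illustrations rather than values from our pilot; in practice, the same computation can be set up with Microsoft Excel or Google Sheets formulae.

```python
# Minimal sketch of the CalculAuthor scoring pipeline (steps 2-6).
# All criteria, weightages, and contribution levels below are
# hypothetical illustrations, not values from the actual pilot.

# Step 2: raw 1-10 weightages, scaled so that all criteria sum to 100 points.
raw_weightages = {
    "conceptualization": 8,
    "data_collection": 6,
    "data_analysis": 7,
    "manuscript_writing": 9,
}
total_raw = sum(raw_weightages.values())
weightages = {c: 100 * w / total_raw for c, w in raw_weightages.items()}

# Step 3: levels of contribution and their multiplication factors.
FACTORS = {"major": 1.0, "minor": 0.5, "none": 0.0}

# Step 4: each author's (self-reported, PI-verified) level per criterion.
contributions = {
    "Author A": {"conceptualization": "major", "data_collection": "none",
                 "data_analysis": "major", "manuscript_writing": "major"},
    "Author B": {"conceptualization": "minor", "data_collection": "major",
                 "data_analysis": "minor", "manuscript_writing": "minor"},
}

# Step 5: an author's score is the sum, across criteria, of
# (criterion weightage) x (multiplication factor of contribution level).
def authorship_score(levels):
    return sum(weightages[c] * FACTORS[levels[c]] for c in weightages)

# Step 6: rank authors by total score in descending order; ties are
# broken at the PI's discretion, outside this sketch.
ranking = sorted(contributions,
                 key=lambda a: authorship_score(contributions[a]),
                 reverse=True)
for rank, author in enumerate(ranking, start=1):
    print(f"{rank}. {author}: {authorship_score(contributions[author]):.1f}/100")
```

Running this sketch ranks the hypothetical Author A first with 80.0/100 points and Author B second with 60.0/100, mirroring the descending-order ranking described in step 6.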

Table 2 shows the results of the authorship determination process using the CalculAuthor for the present article.

Table 2 Results of the authorship determination process using the CalculAuthor

Piloting experience

Our team successfully piloted the CalculAuthor on 10 different research papers (including the current article) with a total of 128 authors. Of these, 2 papers have been published in peer-reviewed medical journals [14, 15]. Amongst these 128 authors, 22 (17.2%) were Assistant Professors, 3 (2.3%) were Associate Professors, and 17 (13.3%) were Professors. Encouragingly, 61 (47.7%) of the authors were students/trainees/research associates; the remaining 25 (19.5%) were other clinical or research investigators. The first author position was occupied by a student/trainee/research associate in 9/10 (90%) of the research papers. Moreover, 24/30 (80%) of the top three authorship positions across the 10 research papers were occupied by students/trainees/research associates.

At the initiation of each project, the PI disclosed the future use of the CalculAuthor for authorship determination and explained how each component of the CalculAuthor methodology operated. This initial briefing took place over a virtual meeting. For each project, creditable criteria and their weightages were determined through consensus before the start of the project. All queries and concerns regarding the CalculAuthor methodology were clarified at the start of the project, as well as later during its course if any concerns arose. For the most part, the introduction of the CalculAuthor was met with general approval by project members and was viewed especially positively by junior members. The only objections that arose related to the assignment of relative weightages to the creditable criteria; these were resolved following group discussions, and the eventual consensuses were accepted by all authors.

We received two key suggestions during the preliminary stages of the development of the CalculAuthor. The first was that the attribution of creditable contributions for each author be performed by the PI or another project lead/supervisor, so as to minimize additional workload and avoid inflated self-reports of contributions. We chose not to incorporate this feedback and instead retained the self-reporting framework, as this allows authors to be more satisfied with their eventual placement. The transparent nature of self-reported contributions, together with the PI-mediated conflict resolution mechanism, was expected to limit and resolve issues related to inflated self-reporting. The second suggestion concerned the preliminary version of the CalculAuthor, in which the weightages for each creditable criterion were decided by group consensus at the start of the project. This approach resulted in frequent disagreements regarding the assigned weightages and was time-consuming, complicated, and frequently unproductive. It was suggested that the allocation of weightages be performed by the PI and project lead/supervisor themselves, with the agreement of other authors sought after the allocation. We incorporated this feedback and observed that it resulted in a much more time-efficient and streamlined workflow.

All authors were responsible for self-recording their contributions in an Excel workbook to which all team members had access, and the PIs were responsible for regularly checking the shared workbook for accuracy of reported contributions. In general, all authors were in agreement regarding the fairness of the rankings, the transparency and objectivity of the rubric in determining authorship positions, and the weightages assigned to each aspect of the research project. Unfortunately, we did not use any objective methods (e.g., a survey) to quantify satisfaction, agreement/disagreement, or other objections to the CalculAuthor. However, it was extremely heartening to see students, trainees, and research associates (i.e., the juniormost members of the research teams) occupying top positions in the authorship lists. The automated CalculAuthor tool (Microsoft Excel spreadsheet) used for the present article is shown in Additional File 1.

Outlook

The CalculAuthor promotes consistency and objectivity in the authorship ranking process by quantifying effort across component tasks. The transparency of the process renders it fair, with a right to appeal to the PI in cases of conflict or dissatisfaction. Its customizability in selecting criteria enables its application to all types of research. A noteworthy point is that the CalculAuthor operates under the premise that any degree of contribution towards any of the creditable criteria warrants recognition as an author. In addition, having predetermined definitions for the levels of contribution to each creditable criterion mitigates biases during authors’ self-reporting of their contributions (e.g., it is fairly common for authors to overestimate their contributions to a project [16]). Although efforts have been made previously to rank authors using a rubric [17], they have not considered quantifying the extent of contribution made by individuals for each criterion. This addition is particularly useful in large projects, where multiple authors may play a part in a single component task. We have outlined some existing authorship rubrics and highlighted how they differ from the CalculAuthor in Table 3. Of note, the majority of existing tools were developed prior to the introduction of the CRediT taxonomy in 2015. They thus lack the flexibility to account for the contributorship roles described in CRediT, which limits their relevance to modern-day authorship determination. A 2020 review by Whetstone et al. presented a critique of existing authorship rubrics, concluding that a major limitation was their restriction of creditable criteria to traditional roles in a project and their inflexibility to account for contributorship in more unconventional areas, such as programming and software design, which may be key aspects of contemporary projects [18].

Table 3 What makes the CalculAuthor different from existing authorship rubrics*?

The objective ranking method levels the playing field for all authors irrespective of seniority. Women have faced more disagreements in authorship naming and ranking than their male counterparts and have found it difficult to plead their case [19]. A transparent process accords the higher authorship positions to key contributors without dispute, sparing unnecessary and uncomfortable confrontation. This method also compels members to play an active role if they want a higher position in the author list. In addition, our method credits every effort made by authors under each individual criterion, which is reflected in the final scoring, instead of simply negating minor efforts completely. This finer differentiation also helps to decrease the chance of authors obtaining the same score. Furthermore, gift authorship can be discouraged to some extent owing to the transparency of the process.

Assessing coauthors’ contributions in collaborative scientific work is challenging due to its subjective nature. Biases may arise from self-perceptions, interpersonal interactions, and power dynamics, all of which may influence credit allocation and potentially impact fair recognition. Herz et al. highlighted the necessity of mitigating biases arising from overestimation of the amount and importance of one’s own contributions to a project by promoting transparency in credit allocation [16]. Furthermore, Eggert suggests fair authorship allocation by identifying all contributors, negotiating relative contributions, assigning authorship based on specified criteria, designating a principal investigator, and disclosing the complete list of contributors in the publication [13]. The CalculAuthor provides a tool whereby all of the above is possible in a practical, feasible, and relatively uncomplicated manner.

To make the process more streamlined, we encourage future research teams to introduce the authorship ranking process early in a research project and to incorporate the team’s feedback to tailor the CalculAuthor to the specific team or project. This should be followed by an active effort to keep track of the tasks being performed by individuals, for verification and calculation of the final ranking at the end. While the CalculAuthor was developed for use in medical research, its customizability enables it to be employed in any field of academia. We recommend that the CalculAuthor be piloted within institutions before its mainstream adoption, and that any institution-specific factors be considered to make the process more efficient and suitable.

Availability of data and materials

Not applicable.

Abbreviations

PI:

Primary investigator

ICMJE:

International Committee of Medical Journal Editors

CRediT:

Contributor roles taxonomy

CCBP:

Center for Clinical Best Practices

AKU:

Aga Khan University

References

  1. Fleming N. The authorship rows that sour scientific collaborations. Nature. 2021;594(7863):459–62.

  2. McNutt MK, Bradford M, Drazen JM, Hanson B, Howard B, Jamieson KH, et al. Transparency in authors’ contributions and responsibilities to promote integrity in scientific publication. Proc Natl Acad Sci. 2018;115(11):2557–60.

  3. Committee on Publication Ethics (COPE) Council. COPE Discussion Document: authorship. September 2019. 2020.

  4. Waltman L. An empirical analysis of the use of alphabetical authorship in scientific publishing. J Informet. 2012;6(4):700–11.

  5. Weber M. The effects of listing authors in alphabetical order: a review of the empirical evidence. Res Eval. 2018;27(3):238–45.

  6. Wohlrabe K, Bornmann L. Alphabetized co-authorship in economics reconsidered. Scientometrics. 2022;127(5):2173–93.

  7. Marušić A, Bošnjak L, Jerončić A. A systematic review of research on the meaning, ethics and practices of authorship across scholarly disciplines. PLoS ONE. 2011;6(9):e23477.

  8. International Committee of Medical Journal Editors. Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. 2023. https://www.icmje.org/recommendations/. Accessed 19 May 2023.

  9. Contributor Roles Taxonomy (CRediT). https://credit.niso.org/. Accessed 19 May 2023.

  10. Holcombe AO, Kovacs M, Aust F, Aczel B. Documenting contributions to scholarly articles using CRediT and tenzing. PLoS ONE. 2020;15(12):e0244611.

  11. Penders B. Letter to the editor: respecting the plurality of value and the messiness of scientific practice. Account Res. 2016;23(2):136–8.

  12. Moffatt B. Scientific authorship, pluralism, and practice. Account Res. 2018;25(4):199–211.

  13. Eggert L. Best practices for allocating appropriate credit and responsibility to authors of multi-authored articles. Front Psychol. 2011. https://doi.org/10.3389/fpsyg.2011.00196.

  14. Martins RS, Masood MQ, Mahmud O, Rizvi NA, Sheikh A, Islam N, et al. Adolopment of adult diabetes mellitus management guidelines for a Pakistani context: methodology and challenges. Front Endocrinol. 2022. https://doi.org/10.3389/fendo.2022.1081361.

  15. Martins RS, Hussain H, Chaudry M, Rizvi NA, Mustafa MA, Ayub B, et al. GRADE-ADOLOPMENT of clinical practice guidelines and creation of clinical pathways for the primary care management of chronic respiratory conditions in Pakistan. BMC Pulm Med. 2023;23(1):123.

  16. Herz N, Dan O, Censor N, Bar-Haim Y. Opinion: authors overestimate their contribution to scientific work, demonstrating a strong bias. Proc Natl Acad Sci USA. 2020;117(12):6282–5.

  17. Digiusto E. Equity in authorship: a strategy for assigning credit when publishing. Soc Sci Med. 1994;38(1):55–8.

  18. Whetstone D, Moulaison-Sandy H. Quantifying authorship: a comparison of authorship rubrics from five disciplines. Proc Assoc Inf Sci Technol. 2020;57(1):e277.

  19. Smith E, Williams-Jones B, Master Z, Larivière V, Sugimoto CR, Paul-Hus A, et al. Misconduct and misbehavior related to authorship disagreements in collaborative science. Sci Eng Ethics. 2020;26:1967–93.

  20. What does first authorship really mean in field X? Academia Stack Exchange. https://academia.stackexchange.com/questions/2467/what-does-first-authorship-really-mean-in-field-x. Accessed 19 June 2023.

  21. American Mathematical Society. The culture of research and scholarship in mathematics: joint research and its publication. 2004. http://www.ams.org/profession/leaders/CultureStatement04.pdf. Accessed 19 June 2023.

  22. Sheskin TJ. An analytic hierarchy process model to apportion co-author responsibility. Sci Eng Ethics. 2006;12:555–65.

  23. Belwalkar B, Toaddy S. Authorship determination scorecard. Washington: American Psychological Association; 2014.

  24. Belwalkar B, Toaddy S. Authorship tie-breaker scorecard. Washington: American Psychological Association; 2014.

  25. Winston RB Jr. A suggested procedure for determining order of authorship in research publications. J Couns Dev. 1985;63(8):515.

  26. Clement TP. Authorship matrix: a rational approach to quantify individual contributions and responsibilities in multi-author scientific articles. Sci Eng Ethics. 2014;20:345–61.

  27. Ahmed SM, Maurana CA, Engle JA, Uddin DE, Glaus KD. A method for assigning authorship in multiauthored publications. Fam Med. 1997;29(1):42–4.

  28. Kosslyn S. Criteria for authorship. Cambridge: Harvard University; 2002.

  29. Warrender JM. A simple framework for evaluating authorial contributions for scientific publications. Sci Eng Ethics. 2016;22(5):1419–30.

  30. Schmidt RH. A worksheet for authorship of scientific articles. Bull Ecol Soc Am. 1987;68(1):8–10.

  31. Marušić A, Hren D, Mansi B, Lineberry N, Bhattacharya A, Garrity M, et al. Five-step authorship framework to improve transparency in disclosing contributors to industry-sponsored clinical trial publications. BMC Med. 2014;12(1):197.

  32. Ing EB. A survey-weighted analytic hierarchy process to quantify authorship. Adv Med Educ Pract. 2021;12:1021–31.

Acknowledgements

Not applicable.

Funding

None.

Author information

Contributions

RSM and MAM initially conceptualized and designed the CalculAuthor with minor contributions from ASF and NN. The conceptualization of the manuscript was primarily done by RSM and MAM. SN supervised all aspects of the study. All authors contributed to writing and critically revising the manuscript and approving its final draft.

Corresponding author

Correspondence to Russell Seth Martins.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional File 1. Demonstration of the CalculAuthor tool to determine authorship for the present article.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Martins, R.S., Mustafa, M.A., Fatimi, A.S. et al. The CalculAuthor: determining authorship using a simple-to-use, fair, objective, and transparent process. BMC Res Notes 16, 329 (2023). https://doi.org/10.1186/s13104-023-06597-4
