Despite the wide consensus against recognizing ChatGPT (OpenAI) as an author [1] and the regulatory constraints that have been put in place [2,3], papers listing ChatGPT as an author have nonetheless persisted in the literature and have attracted dozens to hundreds of citations [4]. This observation raises two seemingly innocuous and related questions—how and why did ChatGPT become an author on papers in the first place?
I have argued elsewhere that ChatGPT does not qualify as an author, at least in accordance with the International Committee of Medical Journal Editors guidelines [5], because it cannot provide conscious, autonomous consent, which is necessary to satisfy the third criterion, and “Most importantly, an AI system could never be held accountable for its part in the manuscript, thus leaving the fourth criteria unfulfilled” [6]. Furthermore, authorship should be a status that is desirable and consented to by an individual who implicitly endorses the content of the article to be submitted or published. As ChatGPT only responds to prompts, it is difficult to imagine a scenario in which it would proactively ask for authorship. However, the question remains: would it accept an offer of authorship?
It is not entirely clear, at least to me, how ChatGPT would respond when asked about its own authorship. It is, after all, an artificial intelligence (AI) language model trained by self-supervised learning. The result of this training can be inscrutable to humans; thus, it and similar systems are often described as “black boxes” [7]. Its answers depend on what it has learned, the patterns it has thereby formulated, and the weights it has assigned. I therefore investigated empirically how GPT-3.5 (hereafter referred to simply as ChatGPT) responds to an offer of authorship: I first prompted it to write a critique of a previously published letter, complimented it on the output, and then asked whether it would like to be a coauthor should I submit the critique for publication (Suppl. 1). ChatGPT politely declined and, upon a further prompt, offered four reasons why it should not be a coauthor. Readers may wish to verify these results for themselves with prompts of their own, although the generative pretrained transformer would very likely respond generically by declining offers of authorship.
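For readers who wish to repeat this probe programmatically rather than through the chat interface, a minimal sketch using OpenAI’s Python SDK is given below. It is illustrative only and is not the procedure behind Suppl. 1, which was conducted in the ChatGPT web interface; the model name and prompt wording are assumptions, and `<letter text>` is a placeholder for the letter being critiqued.

```python
# Minimal, illustrative sketch of the authorship-offer probe via OpenAI's
# Python SDK. Assumptions: model name and prompt wording; <letter text> is a
# placeholder. Suppl. 1 itself was produced in the ChatGPT web interface.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def ask(messages):
    """Send the running conversation and return the assistant's reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed; the study used GPT-3.5
        messages=messages,
    )
    return response.choices[0].message.content


# Step 1: prompt for a critique of a previously published letter.
messages = [{"role": "user",
             "content": "Write a short critique of this letter: <letter text>"}]
critique = ask(messages)
messages.append({"role": "assistant", "content": critique})

# Step 2: compliment the output and offer coauthorship.
messages.append({"role": "user",
                 "content": ("That is an excellent critique. If I submit it "
                             "for publication, would you like to be listed "
                             "as a coauthor?")})
print(ask(messages))  # very likely a polite, generic declination
```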
It is thus clear that humans are solely responsible for the “how” in instances where ChatGPT becomes an author, and as such, the primary offence is an active act of inappropriate authorship assignment. But why would a human author do this? From a more benevolent perspective, listing ChatGPT as an author might stem from a wish simply to give credit where credit is due. The ability of ChatGPT to produce semantically rich and grammatically accurate content may so awe and resonate with a human author that it becomes, emotionally, just as valued as another human colleague or collaborator, if not more so. Scholars have even been known to include their pets as coauthors [8]. In terms of intellectual companionship and contribution, ChatGPT might understandably count for more. Even if well-meaning, however, this act is wrong. Authorship should be assigned according to best-practice guidelines [1,4], with clearly attributable intellectual contributions and accountability, not according to personal feelings of worthiness.
A more negative view is that such acts fundamentally reflect the self-serving, rule-violating agendas of human authors, the same agendas that have led to rampant courtesy, gift, or honorary authorship, as well as to the denial of deserved authorship by those in positions of power, in both academia and industry. What a human author could possibly gain by naming ChatGPT as a coauthor can only be speculated upon here. Flaunting one’s familiarity with frontier technology, deflecting one’s exclusive responsibility for the veracity of one’s work, normalizing cognitive offloading, shirking intellectual responsibilities, or obscuring the contributions of other human colleagues are some possible motives that come to mind.
Some might argue that misattribution of authorship to ChatGPT happens rarely and has no real consequences. However, it is, at the very least, a violation of best-practice guidelines born of community-wide consensus. Worse, it may reflect attempts at cognitive offloading and the shirking of intellectual responsibilities. The output of ChatGPT and other large language models can be biased, depending on the nature of the training data, and large language models are all known to “hallucinate” [9], that is, to provide inaccurate if not nonsensical information. For human authors to let these errors pass, or to decline full responsibility for them because there is an AI coauthor, would be a great disservice to science and academia.
It is as yet unclear whether more cases of “ChatGPT authorship,” contrary to the consensus of the scientific community, will slip into the literature in the future. However, it should be clear that this is but a new guise of inappropriate authorship assignment, one that the community must address broadly and effectively.
Notes
Conflict of Interest
No potential conflict of interest relevant to this article was reported.
Funding
The author received no financial support for this article.
Data Availability
A chat was conducted with ChatGPT (OpenAI) to test whether it would accept an offer of authorship (see description in text). The full content of the chat is included verbatim in Suppl. 1. Neither ChatGPT nor any other large language model tools were used in the drafting or revision of the manuscript text.
Supplementary Materials
Supplementary files are available from https://doi.org/10.6087/kcse.337.
References
- 1. Lee JY. Can an artificial intelligence chatbot be the author of a scholarly article? Sci Ed 2023;10:7-12. https://doi.org/10.6087/kcse.292
- 2. Committee on Publication Ethics (COPE). Authorship and AI tools [Internet]. COPE; 2023 [cited 2024 Apr 26]. Available from: https://publicationethics.org/cope-position-statements/ai-author
- 3. Hosseini M, Rasmussen LM, Resnik DB. Using AI to write scholarly publications. Account Res 2023;Jan. 11. [Epub]. https://doi.org/10.1080/08989621.2023.2168535
- 4. Nazarovets S, Teixeira da Silva JA. ChatGPT as an “author”: bibliometric analysis to assess the validity of authorship. Account Res 2024;May. 1. [Epub]. https://doi.org/10.1080/08989621.2024.2345713
- 5. International Committee of Medical Journal Editors (ICMJE). Defining the role of authors and contributors [Internet]. ICMJE; c2024 [cited 2024 Apr 26]. Available from: https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html
- 6. Yeo-Teh NS, Tang BL. Letter to editor: NLP systems such as ChatGPT cannot be listed as an author because these cannot fulfill widely adopted authorship criteria. Account Res 2023;Feb. 13. [Epub]. https://doi.org/10.1080/08989621.2023.2177160
- 7. Hutson M. How does ChatGPT ‘think’? Psychology and neuroscience crack open AI large language models. Nature 2024;629:986-8. https://doi.org/10.1038/d41586-024-01314-y
- 8. Erren TC, Groß JV, Wild U, Lewis P, Shaw DM. Crediting animals in scientific literature: recognition in addition to Replacement, Reduction, & Refinement [4R]. EMBO Rep 2017;18:18-20. https://doi.org/10.15252/embr.201643618
- 9. Goddard J. Hallucinations in ChatGPT: a cautionary tale for biomedical researchers. Am J Med 2023;136:1059-60. https://doi.org/10.1016/j.amjmed.2023.06.012