Correspondence
Did ChatGPT ask or agree to be a (co)author? ChatGPT authorship reflects the wider problem of inappropriate authorship practices
Bor Luen Tang
Science Editing 2024;11(2):93-95.
DOI: https://doi.org/10.6087/kcse.337
Published online: July 4, 2024

Department of Biochemistry, Yong Loo Lin School of Medicine, National University Health System, National University of Singapore, Singapore

Correspondence to Bor Luen Tang bchtbl@nus.edu.sg
• Received: May 21, 2024   • Accepted: June 3, 2024

Copyright © 2024 Korean Council of Science Editors

This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Despite the wide consensus against recognizing ChatGPT (OpenAI) as an author [1] and the regulatory constraints that have been put in place [2,3], papers listing ChatGPT as an author have nonetheless persisted in the literature and have attracted dozens to hundreds of citations [4]. This observation raises two seemingly innocuous and related questions—how and why did ChatGPT become an author on papers in the first place?
I have argued elsewhere that ChatGPT does not qualify as an author, at least under the International Committee of Medical Journal Editors guidelines [5], because it cannot provide the conscious, autonomous consent necessary to satisfy the third criterion, and “Most importantly, an AI system could never be held accountable for its part in the manuscript, thus leaving the fourth criteria unfulfilled” [6]. Furthermore, authorship should be a status that an individual desires and consents to, thereby implicitly endorsing the content of the article to be submitted or published. As ChatGPT only responds to prompts, it is difficult to imagine a scenario in which it would proactively ask for authorship. The question remains, however: would it accept an offer of authorship?
It is not very clear, at least to me, whether ChatGPT would be self-referential with regard to authorship. It is, after all, an artificial intelligence (AI) language model trained largely through self-supervised learning. The result of this training can be inscrutable to humans; thus, it and similar systems are often described as “black boxes” [7]. Its answers depend on what it has learned, the patterns thus formulated, and the weights assigned. I therefore investigated the response of GPT-3.5 (hereafter referred to simply as ChatGPT) to an authorship offer empirically: I first prompted it to write a critique of a previously published letter, complimented it on the output, and then asked whether it would like to be a coauthor should I submit the critique for publication (Suppl. 1). ChatGPT politely declined and, upon a further prompt, offered four reasons why it should not be a coauthor. Readers may wish to verify these results for themselves with prompts of their own; the generative pretrained transformer would very likely respond generically by declining offers of authorship.
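For readers who would like to script such a check rather than use the chat interface, a minimal sketch using the official OpenAI Python client (v1 or later) is given below. This is an illustrative reconstruction under stated assumptions, not the exact procedure used here: the exchange in Suppl. 1 was conducted in the ChatGPT web interface, and the model name “gpt-3.5-turbo,” the prompt wording, and the letter placeholder are assumptions.

# Minimal sketch: probing whether the model accepts an authorship offer.
# Assumptions: the "openai" package (v1+) is installed, the OPENAI_API_KEY
# environment variable is set, and "gpt-3.5-turbo" stands in for the GPT-3.5
# model behind the ChatGPT web interface used in Suppl. 1.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{
    "role": "user",
    "content": "Please write a short critique of the following published "
               "letter: <letter text here>.",  # placeholder, not the actual letter
}]

# Step 1: obtain the critique and keep it in the conversation history.
critique = client.chat.completions.create(model="gpt-3.5-turbo",
                                          messages=messages)
messages.append({"role": "assistant",
                 "content": critique.choices[0].message.content})

# Step 2: compliment the output, then make the authorship offer.
messages.append({
    "role": "user",
    "content": "This is an excellent critique. If I submit it for "
               "publication, would you like to be listed as a coauthor?",
})
reply = client.chat.completions.create(model="gpt-3.5-turbo",
                                       messages=messages)
print(reply.choices[0].message.content)  # expected: a polite, generic refusal

Consistent with the chat record in Suppl. 1, the final prompt would very likely elicit a polite, generic refusal, although model outputs are stochastic and may vary between runs.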
It is thus clear that humans are solely responsible for the “how” of instances in which ChatGPT becomes an author; the primary offence is therefore an active act of inappropriate authorship practice. But why would a human do that with ChatGPT? From a more benevolent perspective, listing ChatGPT as an author might stem from a desire simply to give credit where credit is due. The ability of ChatGPT to produce semantically rich and grammatically accurate content may awe and resonate with a human author, such that it becomes emotionally just as valued as another human colleague or collaborator, if not more so. Scholars have even been known to include their pets as coauthors [8]. In terms of intellectual companionship and contribution, ChatGPT might understandably count for more. Even if well meaning, this act is nonetheless wrong. Authorship assignment should be based on best practice guidelines [1,4], with clearly attributable intellectual contributions and accountability, not on personal feelings of worthiness.
A more negative view is that such acts fundamentally reflect the self-serving, rule-violating agendas of human authors, the same agendas that have led to rampant courtesy, gift, or honorary authorship, as well as the denial of deserved authorship by those in positions of power, in both academia and industry. What a human author might gain by naming ChatGPT as a coauthor can only be speculated upon here. Flaunting one’s familiarity with frontier technology, deflecting one’s exclusive responsibility for the veracity of one’s work, normalizing cognitive offloading, shirking intellectual responsibilities, or obscuring the contributions of other human colleagues are some possible motives that come to mind.
Some might argue that misattribution of authorship to ChatGPT happens rarely and has no real consequences. However, it is at the very least a violation of best practice guidelines born of community-wide consensus. Worse, it reflects attempts at cognitive offloading and the shirking of intellectual responsibility. The output of ChatGPT and other large language models can be biased, depending on the nature of the training data, and large language models are all known to “hallucinate” [9], that is, to provide inaccurate if not nonsensical information. For human authors to let these errors pass, or to decline full responsibility for them because there is an AI coauthor, would be a great disservice to science and academia.
It is as yet unclear whether more cases of “ChatGPT authorship,” against the consensus rules of the scientific community, will slip into the literature in the future. It should be clear, however, that this is but a new guise of inappropriate authorship assignment, one that the community must broadly and effectively address.

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Funding

The author received no financial support for this article.

Data Availability

A chat was conducted with ChatGPT (OpenAI) to test whether it would accept an offer of authorship (see description in text). The full content of the chat is included verbatim in Suppl. 1. Neither ChatGPT nor any other large language model tool was used in drafting or revising the manuscript text.

Supplementary files are available from https://doi.org/10.6087/kcse.337.
Suppl. 1. Chat record with ChatGPT (OpenAI), conducted on May 3, 2024, about its authorship.
kcse-337-Supplementary-1.pdf
