Science Editing

Correspondence
Will widespread use of artificial intelligence tools in manuscript writing mark the end of human scholarship as we know it?
Bor Luen Tang

DOI: https://doi.org/10.6087/kcse.366
Published online: April 10, 2025

Department of Biochemistry, Yong Loo Lin School of Medicine, National University Health System, National University of Singapore, Singapore

Correspondence to Bor Luen Tang bchtbl@nus.edu.sg
• Received: February 23, 2025   • Accepted: March 10, 2025

Copyright © 2025 Korean Council of Science Editors

This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The launch of OpenAI’s Deep Research, notable for its ability to “synthesize information from dozens or hundreds of websites into a cited report” [1], has sparked both excitement and wonder in the scholarly community. In describing his initial experiences with Deep Research, Maynard [2] questions whether it will “signal the end of human-only scholarship.” Haman and Školník [3], reversing their previous negative views of ChatGPT, now strongly recommend using ChatGPT’s Deep Research. Impressed by Deep Research’s performance compared to the older version of ChatGPT, they asked: “Are we witnessing not just a shift, but the end of scholarship as we know it?” Indeed, artificial intelligence (AI) chatbots based on large language models (LLMs) have profoundly impacted all fields of scholarly pursuit and education, prompting many of us to ask similar questions.
Two broad concerns, among others, are raised. Firstly, will we eventually become part of a new knowledge enterprise in which scholarly works and the generation of knowledge are largely driven by generative AI with a significantly diminished human component, similar to a modern factory whose production line is entirely operated by automata? Secondly, if everyone uses LLM-based chatbots to write papers, how will we attribute scholarship and merit to individual scholars? Will those who do not use AI, for whatever reason, be distinctly disadvantaged or simply rendered obsolete? Such concerns may be more pressing in certain fields, particularly in the sciences where data collection and analysis are intensive, than in others.
Some perspectives from the philosophy and science of the mind and cognition may help alleviate these concerns. The enhancement of scholarship through cognitive artifacts has been a recurring theme in the advancement of human knowledge. In this regard, the extended mind thesis by Clark and Chalmers [4] proposes that the mind does not reside solely in the brain (or even the body) but extends into the physical environment. According to Clark [5], extended cognition is “a process centered in and managed by the biological brain” that incorporates external loops into the environment as essential functional components. These loops create a network of problem-solving activities involving the human brain, body, and inanimate elements of the physical world. As such, tools like pencil and paper that enhance calculation speed and capacity, followed by the abacus, electronic calculators, and now sophisticated calculator apps on laptops and cell phones, represent successive extensions of cognition. Similarly, successive advancements in word processors, grammar checkers, reference managers, and now LLMs have enhanced scholarly writing. Although LLMs might appear to be a quantum leap in capability, they remain part of human cognitive extension.
LLMs also differ from human cognition in at least two fundamental ways. First, as some argue, they are not embodied [6], at least not in a biological sense. Embodiment endows organisms with sophistication and complexity through evolutionary and developmental capacities, as well as through intricate processes of integrative learning and memory that current machine learning methods cannot emulate. Second, while human cognition primarily relies on theory-based causal reasoning, LLMs operate through information processing and data-driven predictions [7]. This reliance imposes significant limitations on originality and innovation, as it is constrained by the nature and availability of training data. Given that the human mind has successfully integrated previous tools into its knowledge acquisition process, there is no reason to expect that LLM-driven AI chatbots will lead to a fundamentally different outcome.
Therefore, when generating scholarly work, LLMs should, at best, serve as helpful aids rather than preponderant contributors to the work. They can function as upgraded bibliographic collectors, integrative and rapid analytical tools, refined language polishers, or even meticulous and efficient content organizers. However, true scholarship is not merely about these functions. It involves the interrogation of theories and hypotheses, as well as the creation of new ideas and knowledge. It requires the intellectually challenging and painstaking processes of falsification, critique, and synthesis—not simply the generation of (albeit accurate) laundry lists. Thus, true scholarship can only be centered in and managed by the biological brain with its array of cognitive extensions, including LLMs, but not the other way around.
Thus far, LLM-based chatbots remain examples of narrow AI, operating through human prompt-dependent probabilistic predictions of patterns and tokens. While some in the AI industry speculate that we are nearing the creation of artificial general intelligence—advanced AI with human-like cognition and agency—others argue that natural or biological intelligence and AI are fundamentally different, limiting the possibility of human-like advanced AI [8]. Regardless, the ease of using AI has led to a growing number of manuscripts that are primarily crafted by LLM-driven chatbots with minimal intellectual input from human authors. This development poses a significant challenge. In this new era of generative AI, it is prudent for all stakeholders—editors, reviewers, and readers—to differentiate between LLM-enabled assistance and genuine scholarship, thereby ensuring that appropriate merit is attributed to the authors.
To address these issues, one approach is to implement stricter controls on the use of AI in academic writing, with a stronger emphasis on ethical use and accountability. Specifically, beyond a mandatory declaration of AI use, a detailed log or report outlining how the AI was prompted to generate content should be required. However, enforcing and monitoring such controls could trigger an escalating arms race between tools for AI detection and methods for erasing AI traces. Alternatively, we might develop methods for assessing genuine scholarship that extend beyond prompt engineering. The effectiveness of such methods would naturally depend on the specifics of each field of study. Scholars within a given field should collaboratively define what constitutes true creativity, novelty, and innovation, as opposed to a mundane rehash or rumination of known facts.

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Funding

The author received no financial support for this article.

Data Availability

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

