Science Editing
Search results for "Research": 24 articles
Original Article
Adherence to the International Committee of Medical Journal Editors–recommended gender equity policy in nursing journals listed in MEDLINE or PubMed Central: a descriptive study
Eun Jeong Ko, Geum Hee Jeong
Sci Ed. 2024;11(1):33-37.   Published online February 20, 2024
DOI: https://doi.org/10.6087/kcse.328
  • 1,109 View
  • 39 Download
Abstract
Purpose: The evolving landscape of nursing research emphasizes inclusive representation. The International Committee of Medical Journal Editors (ICMJE) has established guidelines to ensure the fair representation of various demographic variables, including age, sex, and ethnicity. This study aimed to evaluate the adherence of nursing journals indexed in MEDLINE or PubMed Central to the ICMJE’s directives on gender equity, given that journals indexed in MEDLINE and PubMed Central typically adhere to the ICMJE’s guidelines.
Methods
A descriptive literature review methodology was employed to analyze 160 nursing journals listed in two databases as of July 28, 2023. The website of each journal was searched, and the most recent original article from each was selected. These articles were then evaluated for their alignment with the ICMJE guidelines on gender equity. Descriptive statistics were applied to categorize and enumerate the cases.
Results
Of the articles reviewed from 160 journals, 115 dealt with human populations. Of these, 93 required a description of gender equity. Within this subset, 83 articles distinguished between the genders of human subjects. Gender-based interpretations were provided in 15 articles, while another 68 did not offer an interpretation of differences by gender. Among the 10 articles that did not delineate gender, only two provided a rationale for this omission.
Conclusion
Among recent articles published in the nursing journals indexed in MEDLINE and PubMed Central, only 16.1% (15 of the 93 articles requiring a description of gender equity) presented clear gender analyses. These findings highlight the need for editors to strengthen their commitment to gender equity within their editorial policies.
Review
Influence of artificial intelligence and chatbots on research integrity and publication ethics
Payam Hosseinzadeh Kasani, Kee Hyun Cho, Jae-Won Jang, Cheol-Heui Yun
Sci Ed. 2024;11(1):12-25.   Published online January 25, 2024
DOI: https://doi.org/10.6087/kcse.323
  • 1,345 View
  • 79 Download
Abstract
Artificial intelligence (AI)-powered chatbots are rapidly supplanting human-derived scholarly work in the fast-paced digital age. This necessitates a re-evaluation of our traditional research and publication ethics, which is the focus of this article. We explore the ethical issues that arise when AI chatbots are employed in research and publication. We critically examine the attribution of academic work, strategies for preventing plagiarism, the trustworthiness of AI-generated content, and the integration of empathy into these systems. Current approaches to ethical education, in our opinion, fall short of appropriately addressing these problems. We propose comprehensive initiatives to tackle these emerging ethical concerns. This review also examines the limitations of current chatbot detectors, underscoring the necessity for more sophisticated technology to safeguard academic integrity. The incorporation of AI and chatbots into the research environment is set to transform the way we approach scholarly inquiries. However, our study emphasizes the importance of employing these tools ethically within research and academia. As we move forward, it is of the utmost importance to concentrate on creating robust, flexible strategies and establishing comprehensive regulations that effectively align these potential technological developments with stringent ethical standards. We believe that this is an essential measure to ensure that the advancement of AI chatbots significantly augments the value of scholarly research activities, including publications, rather than introducing potential ethical quandaries.
Training Material
Get Full Text Research (GetFTR): can it be a good tool for researchers?
Kwangil Oh
Sci Ed. 2023;10(2):186-189.   Published online August 17, 2023
DOI: https://doi.org/10.6087/kcse.311
  • 1,486 View
  • 196 Download
Abstract
Technological advances have been an integral part of discussions related to journal publishing in recent years. This article presents Get Full Text Research (GetFTR), a discovery solution launched by five major publishers: the American Chemical Society, Elsevier, Springer Nature, Taylor & Francis Group, and Wiley. These founding publishers announced the development of this new solution in 2019, and its pilot service was launched just 4 months later. The GetFTR solution streamlines access not only to open access resources but also to subscription-based resources. The publishers have assured that this solution will be beneficial for all relevant stakeholders involved in the journal publication process, including publishers, researchers, integrators, and libraries. They highlighted that researchers will be able to access published articles with minimal effort or steps, benefitting from existing single sign-on access technologies and ideally reaching the article PDF with a single click. While GetFTR is free for integrators and researchers, publishers are required to pay an annual subscription fee. To lower the barrier to participation, GetFTR supports smaller publishers by offering them a discount based on the number of digital object identifiers (DOIs) they have registered, as recorded in Crossref data. While this project appears promising, some initial concerns were raised, particularly regarding user data control, which the project has addressed by engaging the librarian community more closely and by providing further information on how GetFTR supports user privacy.
The research nexus and Principles of Open Scholarly Infrastructure (POSI): sharing our goal of an open, connected ecosystem of research objects
Rachael Lammey
Sci Ed. 2023;10(2):190-194.   Published online July 31, 2023
DOI: https://doi.org/10.6087/kcse.315
  • 1,347 View
  • 203 Download
Abstract
As a community, it is impossible to ignore the fact that sharing research and information related to research is a much broader proposition than sharing an article, book, or conference paper. In supporting an evolving scholarly record, making connections between research organizations, contributors, actions, and objects helps give a more complete picture of the scholarly record, which open infrastructure organizations like Crossref call the research nexus. Crossref is working to support this evolution and is thinking about the metadata it collects via its members and that it supplements and curates, to make it broader than the rigid structures traditionally provided by content types. Furthermore, because of Crossref’s commitment to the Principles of Open Scholarly Infrastructure (POSI), this network of information will be global and openly available for anyone in the community to access and reuse. The present article describes this vision in more detail, including why it is increasingly important to support the links between research and elements that contribute or are related to that research; how Crossref, its members, and the wider community can support it; and the work and planning Crossref is doing to make it easier to achieve this.
Original Article
Research information service development plan based on an analysis of the digital scholarship lifecycle experience of humanities scholars in Korea: a qualitative study
Jungyeoun Lee, Eunkyung Chung
Sci Ed. 2023;10(2):127-134.   Published online July 28, 2023
DOI: https://doi.org/10.6087/kcse.309
  • 1,987 View
  • 263 Download
Abstract
Purpose: Given the impact of information technologies, the research environment for humanities scholars is transforming into digital scholarship (DS). This study presents a foundational investigation for developing DS research support services. It also proposes a plan for sustainable information services by examining the current status of DS in Korea, including how interdisciplinary digital data are accessed, processed, implemented, disseminated, and preserved.
Methods
Qualitative interview data were collected from September 7 to 11, 2020. The interviews were conducted with scholars at the research director level who had participated in the DS research project in Korea. Data were coded using NVivo 14, and cross-analysis was performed among researchers to extract central nodes and derive service elements.
Results
This study divided DS into five stages: research plan, research implementation, publishing results, dissemination of research results, and preservation and reuse. This paper also presents the library DS information services required for each stage. The characteristic features of the DS research cycle are the importance of collaboration, converting analog resources to data, data modeling and technical support for the analysis process, humanities data curation, drafting a research data management plan, and international collaboration.
Conclusion
Libraries should develop services based on open science and data management plan policies. Examples include a DS project liaison service, data management, datafication, digital publication repositories, a digital preservation plan, and a web archiving service. Data sharing for humanities research resources made possible through international collaboration will contribute to the expansion of new digital culture research.
Review
How to review and assess a systematic review and meta-analysis article
Seung-Kwon Myung
Sci Ed. 2023;10(2):119-126.   Published online April 28, 2023
DOI: https://doi.org/10.6087/kcse.306
  • 2,780 View
  • 315 Download
Abstract
Systematic reviews and meta-analyses have become central in many research fields, particularly medicine. They offer the highest level of evidence in evidence-based medicine and support the development and revision of clinical practice guidelines, which offer recommendations for clinicians caring for patients with specific diseases and conditions. This review summarizes the concepts of systematic reviews and meta-analyses and provides guidance on reviewing and assessing such papers. A systematic review refers to a review of a research question that uses explicit and systematic methods to identify, select, and critically appraise relevant research. In contrast, a meta-analysis is a quantitative statistical analysis that combines individual results on the same research question to estimate the common or mean effect. Conducting a meta-analysis involves defining a research topic, selecting a study design, searching the literature in electronic databases, selecting relevant studies, and conducting the analysis. One can assess the findings of a meta-analysis by interpreting a forest plot and a funnel plot and by examining heterogeneity. When reviewing systematic reviews and meta-analyses, several essential points must be considered, including the originality and significance of the work, the comprehensiveness of the database search, the selection of studies based on inclusion and exclusion criteria, subgroup analyses by various factors, and the interpretation of the results based on the levels of evidence. This review provides guidance to help readers read, understand, and evaluate these articles.
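The quantities this review asks a reviewer to examine, a pooled estimate and a heterogeneity statistic, can be illustrated with a short calculation. Below is a minimal sketch, not taken from the article, of fixed-effect inverse-variance pooling with Cochran's Q and the I² statistic; the effect sizes and standard errors are hypothetical.

```python
# Minimal fixed-effect meta-analysis sketch (hypothetical data, not from the
# article): inverse-variance pooling plus Cochran's Q and the I^2 statistic.
import numpy as np

effects = np.array([0.10, 0.55, 0.02, 0.70])  # per-study effects, e.g., log odds ratios
se = np.array([0.10, 0.15, 0.12, 0.20])       # their standard errors

w = 1.0 / se**2                               # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)      # fixed-effect pooled estimate
pooled_se = np.sqrt(1.0 / np.sum(w))

q = np.sum(w * (effects - pooled) ** 2)       # Cochran's Q
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100             # share of variability beyond chance

print(f"pooled = {pooled:.3f} (SE {pooled_se:.3f}), Q = {q:.2f}, I^2 = {i2:.1f}%")
```

With these hypothetical inputs, I² comes out near 80%, which a reviewer would conventionally read as substantial heterogeneity and a cue to check subgroup analyses or to prefer a random-effects model.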
Training Material
What to tell and never tell a reviewer
Jean Iwaz
Sci Ed. 2023;10(2):181-185.   Published online April 28, 2023
DOI: https://doi.org/10.6087/kcse.305
  • 3,625 View
  • 222 Download
Abstract
The specialized literature abounds in recommendations about the most desirable technical ways of answering reviewers’ comments on a submitted manuscript. However, not all publications mention authors’ and/or reviewers’ feelings or reactions about what they may read or write in their respective reports, and even fewer publications tackle openly what may or may not be said in a set of answers to a reviewer’s comments. In answering reviewers’ comments, authors are often attentive to the technical or rational aspects of the task but might forget some of its relational aspects. In their answers, authors are expected to make every effort to abide by reviewers’ suggestions, including discussing major criticisms, editing the illustrations, or implementing minor corrections; abstain from questioning a reviewer’s competence or willingness to write a good review, including full and attentive reading and drafting useful comments; clearly separate their answers to each reviewer; avoid skipping, merging, or reordering reviewers’ comments; and, finally, specify the changes made. Authors are advised to call on facts, logic, and some diplomacy, but never on artifice, concealment, or flattery. Failing to do so erodes the trust between authors and reviewers, whereas integrity is expected and highly valued. The guiding principle should always be honesty.
Original Articles
Data sharing attitudes and practices of researchers in Korean government research institutes: a survey-based descriptive study
Jihyun Kim, Hyekyong Hwang, Youngim Jung, Sung-Nam Cho, Tae-Sul Seo
Sci Ed. 2023;10(1):71-77.   Published online February 16, 2023
DOI: https://doi.org/10.6087/kcse.299
  • 1,915 View
  • 233 Download
  • 3 Web of Science
  • 3 Crossref
Abstract
Purpose: This study explored to what extent and how researchers in five Korean government research institutes that implement research data management practices share their research data and investigated the challenges they perceive regarding data sharing.
Methods
The study collected survey data from 224 respondents by posting a link to a SurveyMonkey questionnaire on the homepage of each of the five research institutes from June 15 to 29, 2022. Descriptive statistical analyses were conducted.
Results
Among 148 respondents with data sharing experience, the majority had shared some or most of their data. Restricted data sharing within a project was more common than sharing data with outside researchers on request or making data publicly available. Sharing data directly with researchers who asked was the most common method of data sharing, while sharing data via institutional repositories was the second most common method. The most frequently cited factors impeding data sharing included the time and effort required to organize data, concerns about copyright or ownership of data, lack of recognition and reward, and concerns about data containing sensitive information.
Conclusion
Researchers need ongoing training and support on making decisions about access to data, which are nuanced rather than binary. Research institutes’ commitment to developing and maintaining institutional data repositories is also important to facilitate data sharing. To address barriers to data sharing, it is necessary to implement research data management services that help reduce effort and mitigate concerns about legal issues. Possible incentives for researchers who share data should also continue to be explored.

Citations to this article as recorded by:
  • Korean scholarly journal editors’ and publishers’ attitudes towards journal data sharing policies and data papers (2023): a survey-based descriptive study
    Hyun Jun Yi, Youngim Jung, Hyekyong Hwang, Sung-Nam Cho
    Science Editing.2023; 10(2): 141.     CrossRef
  • Data sharing and data governance in sub-Saharan Africa: Perspectives from researchers and scientists engaged in data-intensive research
    Siti M. Kabanda, Nezerith Cengiz, Kanshukan Rajaratnam, Bruce W. Watson, Qunita Brown, Tonya M. Esterhuizen, Keymanthri Moodley
    South African Journal of Science.2023;[Epub]     CrossRef
  • Identifying key factors and actions: Initial steps in the Open Science Policy Design and Implementation Process
    Hanna Shmagun, Jangsup Shim, Jaesoo Kim, Kwang-Nam Choi, Charles Oppenheim
    Journal of Information Science.2023;[Epub]     CrossRef
Current status and demand for educational activities on publication ethics by academic organizations in Korea: a descriptive study
Yera Hur, Cheol-Heui Yun
Sci Ed. 2023;10(1):64-70.   Published online February 16, 2023
DOI: https://doi.org/10.6087/kcse.298
  • 1,631 View
  • 220 Download
  • 2 Web of Science
  • 1 Crossref
Abstract
Purpose: This study aimed to examine two overarching issues: the current status of research and publication ethics training conducted in Korean academic organizations, and what needs to be done to reinforce such training.
Methods
A 12-item questionnaire was tested in a pilot survey, and the main survey was then distributed to 2,487 academic organizations. A second survey, containing six additional questions, was dispatched to the same subjects. The results of each survey were analyzed using descriptive statistics, content analysis, and comparative analysis.
Results
More than half of the academic organizations provided research and publication ethics training programs, with humanities and social sciences organizations offering more training than the others (χ²=11.190, df=2, P=0.004; a worked sketch of this type of test follows this abstract). The training was mostly held once a year, for less than an hour, and mainly in a lecture format. No significant difference was found in the training content among academic fields. The academic organizations preferred case-based discussion training methods and wanted expert instructors who could provide tailored training with examples.
Conclusion
A systematic training program that can develop ethics instructors tailored to specific academic fields, together with financial support from academic organizations, can help scholarly editors close the apparent gap between the real and the ideal in ethics training and ultimately achieve the competency needed to train their own experts.
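The group comparison reported above (χ²=11.190, df=2, P=0.004) is a standard chi-square test of independence on a field-by-training contingency table. Here is a minimal sketch of how such a test is computed; the three fields and all counts are hypothetical stand-ins, since the article's table is not reproduced in the abstract.

```python
# Chi-square test of independence on a hypothetical 3x2 table:
# rows = academic fields, columns = offers / does not offer ethics training.
# All counts are illustrative, not the study's data.
from scipy.stats import chi2_contingency

observed = [
    [120, 80],   # humanities and social sciences
    [90, 110],   # natural sciences and engineering
    [70, 100],   # medicine and pharmacy
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, df = {dof}, P = {p:.4f}")
```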

Citations to this article as recorded by:
  • Influence of artificial intelligence and chatbots on research integrity and publication ethics
    Payam Hosseinzadeh Kasani, Kee Hyun Cho, Jae-Won Jang, Cheol-Heui Yun
    Science Editing.2024; 11(1): 12.     CrossRef
Reviews
Can an artificial intelligence chatbot be the author of a scholarly article?
Ju Yoen Lee
Sci Ed. 2023;10(1):7-12.   Published online February 16, 2023
DOI: https://doi.org/10.6087/kcse.292
  • 5,037 View
  • 430 Download
  • 4 Web of Science
  • 8 Crossref
Abstract
At the end of 2022, the appearance of ChatGPT, an artificial intelligence (AI) chatbot with amazing writing ability, caused a great sensation in academia. The chatbot turned out to be very capable, but also capable of deception, and the news broke that several researchers had listed the chatbot (including its earlier version) as co-authors of their academic papers. In response, Nature and Science expressed their position that this chatbot cannot be listed as an author in the papers they publish. Since an AI chatbot is not a human being, in the current legal system, the text automatically generated by an AI chatbot cannot be a copyrighted work; thus, an AI chatbot cannot be an author of a copyrighted work. Current AI chatbots such as ChatGPT are much more advanced than search engines in that they produce original text, but they still remain at the level of a search engine in that they cannot take responsibility for their writing. For this reason, they also cannot be authors from the perspective of research ethics.

Citations to this article as recorded by:
  • The ethics of ChatGPT – Exploring the ethical issues of an emerging technology
    Bernd Carsten Stahl, Damian Eke
    International Journal of Information Management.2024; 74: 102700.     CrossRef
  • ChatGPT in healthcare: A taxonomy and systematic review
    Jianning Li, Amin Dada, Behrus Puladi, Jens Kleesiek, Jan Egger
    Computer Methods and Programs in Biomedicine.2024; 245: 108013.     CrossRef
  • “Brave New World” or not?: A mixed-methods study of the relationship between second language writing learners’ perceptions of ChatGPT, behaviors of using ChatGPT, and writing proficiency
    Li Dong
    Current Psychology.2024;[Epub]     CrossRef
  • Emergence of the metaverse and ChatGPT in journal publishing after the COVID-19 pandemic
    Sun Huh
    Science Editing.2023; 10(1): 1.     CrossRef
  • ChatGPT: Systematic Review, Applications, and Agenda for Multidisciplinary Research
    Harjit Singh, Avneet Singh
    Journal of Chinese Economic and Business Studies.2023; 21(2): 193.     CrossRef
  • ChatGPT: More Than a “Weapon of Mass Deception” Ethical Challenges and Responses from the Human-Centered Artificial Intelligence (HCAI) Perspective
    Alejo José G. Sison, Marco Tulio Daza, Roberto Gozalo-Brizuela, Eduardo C. Garrido-Merchán
    International Journal of Human–Computer Interaction.2023; : 1.     CrossRef
  • Universal skepticism of ChatGPT: a review of early literature on chat generative pre-trained transformer
    Casey Watters, Michal K. Lemanski
    Frontiers in Big Data.2023;[Epub]     CrossRef
  • ChatGPT, yabancı dil öğrencisinin güvenilir yapay zekâ sohbet arkadaşı mıdır? [Is ChatGPT a reliable artificial intelligence chat companion for foreign language learners?]
    Şule Çınar Yağcı, Tugba Aydın Yıldız
    RumeliDE Dil ve Edebiyat Araştırmaları Dergisi.2023; (37): 1315.     CrossRef
An algorithm for the selection of reporting guidelines
Soo Young Kim
Sci Ed. 2023;10(1):13-18.   Published online November 14, 2022
DOI: https://doi.org/10.6087/kcse.287
  • 2,283 View
  • 271 Download
  • 2 Web of Science
  • 2 Crossref
Abstract
A reporting guideline can be defined as “a checklist, flow diagram, or structured text to guide authors in reporting a specific type of research, developed using explicit methodology.” A reporting guideline outlines the bare minimum of information that must be presented in a research report in order to provide a transparent and understandable explanation of what was done and what was discovered. Many reporting guidelines have been developed, and it has become important to select the most appropriate reporting guideline for a manuscript. Herein, I propose an algorithm for the selection of reporting guidelines. This algorithm was developed based on the research design classification system and the content presented for major reporting guidelines through the EQUATOR (Enhancing the Quality and Transparency of Health Research) network. This algorithm asks 10 questions: “is it a protocol,” “is it secondary research,” “is it an in vivo animal study,” “is it qualitative research,” “is it economic evaluation research,” “is it a diagnostic accuracy study or prognostic research,” “is it quality improvement research,” “is it a non-comparative study,” “is it a comparative study between groups,” and “is it an experimental study?” According to the responses, 16 appropriate reporting guidelines are suggested. Using this algorithm will make it possible to select reporting guidelines rationally and transparently.
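Because the algorithm is a fixed sequence of yes/no questions, it translates directly into code. The sketch below follows the question order quoted in the abstract; the guideline attached to each branch is an illustrative assumption based on widely used EQUATOR network guidelines, not the article's exact 16-guideline mapping.

```python
# Question-driven selection flow, per the abstract's 10 questions.
# The guideline on each branch is an assumed, illustrative mapping.
QUESTIONS = [
    ("Is it a protocol?", "SPIRIT / PRISMA-P"),
    ("Is it secondary research?", "PRISMA"),
    ("Is it an in vivo animal study?", "ARRIVE"),
    ("Is it qualitative research?", "SRQR / COREQ"),
    ("Is it economic evaluation research?", "CHEERS"),
    ("Is it a diagnostic accuracy study or prognostic research?", "STARD / TRIPOD"),
    ("Is it quality improvement research?", "SQUIRE"),
    ("Is it a non-comparative study?", "CARE"),
    ("Is it a comparative study between groups?", "STROBE"),
    ("Is it an experimental study?", "CONSORT"),
]

def select_guideline(answers):
    """Walk the questions in order; the first 'yes' decides the guideline."""
    for (question, guideline), yes in zip(QUESTIONS, answers):
        if yes:
            return guideline
    return "Consult the EQUATOR network for a more specific guideline"

# Example: a randomized trial answers 'no' to the first nine questions.
print(select_guideline([False] * 9 + [True]))  # -> CONSORT
```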

Citations to this article as recorded by:
  • Journal of Educational Evaluation for Health Professions received the Journal Impact Factor, 4.4 for the first time on June 28, 2023
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2023; 20: 21.     CrossRef
  • Why do editors of local nursing society journals strive to have their journals included in MEDLINE? A case study of the Korean Journal of Women Health Nursing
    Sun Huh
    Korean Journal of Women Health Nursing.2023; 29(3): 147.     CrossRef
Types, limitations, and possible alternatives of peer review based on the literature and surgeons’ opinions via Twitter: a narrative review
Sameh Hany Emile, Hytham K. S. Hamid, Semra Demirli Atici, Doga Nur Kosker, Mario Virgilio Papa, Hossam Elfeki, Chee Yang Tan, Alaa El-Hussuna, Steven D. Wexner
Sci Ed. 2022;9(1):3-14.   Published online February 20, 2022
DOI: https://doi.org/10.6087/kcse.257
  • 5,659 View
  • 308 Download
Abstract
This review aimed to illustrate the types, limitations, and possible alternatives of peer review (PR) based on a literature review together with the opinions of a social media audience via Twitter. This study was conducted via the #OpenSourceResearch collaborative platform and combined a comprehensive literature search on the current PR system with the opinions of a social media audience of surgeons who are actively engaged in the current PR system. Six independent researchers conducted a literature search of electronic databases in addition to Google Scholar. Electronic polls were organized via Twitter to assess surgeons’ opinions on the current PR system and potential alternative approaches. PR can be classified into single-blind, double-blind, triple-blind, and open PR. Newer PR systems include interactive platforms, prepublication and postpublication commenting or review, transparent review, and collaborative review. The main limitations of the current PR system are its allegedly time-consuming nature and inconsistent, biased, and non-transparent results. Suggestions to improve the PR process include employing an interactive, double-blind PR system, using artificial intelligence to recruit reviewers, providing incentives for reviewers, and using PR templates. The above results offer several concepts for possible alternative approaches and modifications to this critically important process.
Training Material
Participation Reports help Crossref members drive research further
Anna Tolwinska
Sci Ed. 2021;8(2):180-185.   Published online August 20, 2021
DOI: https://doi.org/10.6087/kcse.253
  • 3,748 View
  • 107 Download
  • 1 Web of Science
  • 1 Crossref
Abstract
This article aims to explain the key metadata elements listed in Participation Reports, why it's important to check them regularly, and how Crossref members can improve their scores. Crossref members register a lot of metadata in Crossref. That metadata is machine-readable, standardized, and then shared across discovery services and author tools. This is important because richer metadata makes content more discoverable and useful to the scholarly community. It's not always easy to know what metadata Crossref members register in Crossref. This is why Crossref created an easy-to-use tool called Participation Reports to show editors and researchers the key metadata elements Crossref members register to make their content more useful. The key metadata elements include references and whether they are set to open, ORCID iDs, funding information, Crossmark metadata, licenses, full-text URLs for text mining, and Similarity Check indexing, as well as abstracts. ROR IDs (Research Organization Registry identifiers), which identify institutions, will be added in the future. This data was always available through Crossref's REST API (representational state transfer application programming interface) but is now visualized in Participation Reports. To improve scores, editors should encourage authors to submit ORCID iDs in their manuscripts, and publishers should register as much metadata as possible to help drive research further.
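Because Participation Reports visualize data from Crossref's public REST API, the same coverage figures can also be fetched programmatically. A minimal sketch, assuming the Python requests library; the member ID is a placeholder, and the exact set of coverage keys returned may vary:

```python
# Fetch a Crossref member record and print its metadata coverage fractions.
# Member ID 98 is a placeholder; replace it with your own member ID, and use
# a descriptive User-Agent with a mailto, as Crossref API etiquette suggests.
import requests

resp = requests.get(
    "https://api.crossref.org/members/98",
    headers={"User-Agent": "coverage-check/0.1 (mailto:editor@example.org)"},
    timeout=30,
)
resp.raise_for_status()
coverage = resp.json()["message"].get("coverage", {})

# Each value is the fraction of the member's records carrying that element
# (references, ORCID iDs, funding information, licenses, abstracts, ...).
for element, fraction in sorted(coverage.items()):
    print(f"{element}: {fraction:.0%}")
```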

Citations to this article as recorded by:
  • Reflections on 4 years in the role of a Crossref ambassador in Korea
    Jae Hwa Chang
    Science Editing.2022; 9(1): 69.     CrossRef
Case Study
Korean court cases regarding research and publication ethics from 2009 to 2020
Ju Yoen Lee
Sci Ed. 2021;8(1):98-103.   Published online February 20, 2021
DOI: https://doi.org/10.6087/kcse.236
  • 4,616 View
  • 137 Download
  • 1 Crossref
Abstract
Research and publication misconduct may occur in various forms, including author misrepresentation, plagiarism, and data fabrication. Research and publication ethics are essentially not legal duties, but ethical obligations. In reality, however, legal disputes arise over whether research and publication ethics have been violated. Thus, in many cases, misconduct in research and publication is determined in the courts. This article presents noteworthy legal cases in Korea regarding research and publication ethics to help editors and authors prevent ethical misconduct. Legal cases from 2009 to 2020 were collected from the database of the Supreme Court of Korea in December 2020. These court cases represent three case types: 1) civil cases, such as affirmation of nullity of dismissal and damages; 2) criminal cases, such as fraud, interference with business, and violations of copyright law; and 3) administrative cases related to disciplinary measures against professors affiliated with a university. These cases show that although research and publication ethics are ethical norms that are autonomously established by the relevant academic societies, they become a criterion for case resolution in legal disputes where research and publication misconduct is at issue.

Citations to this article as recorded by:
  • Congratulations on Child Health Nursing Research becoming a PubMed Central journal and reflections on its significance
    Sun Huh
    Child Health Nursing Research.2022; 28(1): 1.     CrossRef
Review
Ethical challenges regarding artificial intelligence in medicine from the perspective of scientific editing and peer review
Seong Ho Park, Young-Hak Kim, Jun Young Lee, Soyoung Yoo, Chong Jai Kim
Sci Ed. 2019;6(2):91-98.   Published online June 19, 2019
DOI: https://doi.org/10.6087/kcse.164
  • 14,773 View
  • 407 Download
  • 15 Web of Science
  • 17 Crossref
Abstract
This review article aims to highlight several areas in research studies on artificial intelligence (AI) in medicine that currently require additional transparency and to explain why additional transparency is needed. Transparency regarding training data, test data and results, interpretation of study results, and the sharing of algorithms and data are major areas for guaranteeing ethical standards in AI research. For transparency in training data, clarifying the biases and errors in training data and in the AI algorithms based on these training data prior to their implementation is critical. Furthermore, biases across institutions and socioeconomic groups should be considered. For transparency in test data and test results, authors should first state whether the test data were collected externally or internally, and whether prospectively or retrospectively. It is necessary to distinguish whether datasets were convenience samples consisting of some positive and some negative cases or clinical cohorts. When datasets from multiple institutions are used, authors should report results from each individual institution. Full publication of the results of AI research is also important. For transparency in interpreting study results, authors should interpret the results explicitly and avoid over-interpretation. For transparency in sharing algorithms and data, sharing is required for replication and reproducibility of the research by other researchers. All of the above-mentioned high standards regarding the transparency of AI research in healthcare should be considered to facilitate the ethical conduct of AI research.

Citations to this article as recorded by:
  • Towards Integration of Artificial Intelligence into Medical Devices as a Real-Time Recommender System for Personalised Healthcare: State-of-the-Art and Future Prospects
    Talha Iqbal, Mehedi Masud, Bilal Amin, Conor Feely, Mary Faherty, Tim Jones, Michelle Tierney, Atif Shahzad, Patricia Vazquez
    Health Sciences Review.2024; : 100150.     CrossRef
  • The Knowledge of Students at Bursa Faculty of Medicine towards Artificial Intelligence: A Survey Study
    Deniz Güven, Elif Güler Kazancı, Ayşe Ören, Livanur Sever, Pelin Ünlü
    Journal of Bursa Faculty of Medicine.2024; 2(1): 20.     CrossRef
  • New institutional theory and AI: toward rethinking of artificial intelligence in organizations
    Ihor Rudko, Aysan Bashirpour Bonab, Maria Fedele, Anna Vittoria Formisano
    Journal of Management History.2024;[Epub]     CrossRef
  • Artificial intelligence technology in MR neuroimaging. A radiologist’s perspective
    G. E. Trufanov, A. Yu. Efimtsev
    Russian Journal for Personalized Medicine.2023; 3(1): 6.     CrossRef
  • The minefield of indeterminate thyroid nodules: could artificial intelligence be a suitable diagnostic tool?
    Vincenzo Fiorentino, Cristina Pizzimenti, Mariausilia Franchina, Marina Gloria Micali, Fernanda Russotto, Ludovica Pepe, Gaetano Basilio Militi, Pietro Tralongo, Francesco Pierconti, Antonio Ieni, Maurizio Martini, Giovanni Tuccari, Esther Diana Rossi, Gu
    Diagnostic Histopathology.2023; 29(8): 396.     CrossRef
  • Ethical, legal, and social considerations of AI-based medical decision-support tools: A scoping review
    Anto Čartolovni, Ana Tomičić, Elvira Lazić Mosler
    International Journal of Medical Informatics.2022; 161: 104738.     CrossRef
  • Transparency of Artificial Intelligence in Healthcare: Insights from Professionals in Computing and Healthcare Worldwide
    Jose Bernal, Claudia Mazo
    Applied Sciences.2022; 12(20): 10228.     CrossRef
  • Artificial intelligence in the water domain: Opportunities for responsible use
    Neelke Doorn
    Science of The Total Environment.2021; 755: 142561.     CrossRef
  • Artificial intelligence for ultrasonography: unique opportunities and challenges
    Seong Ho Park
    Ultrasonography.2021; 40(1): 3.     CrossRef
  • Key Principles of Clinical Validation, Device Approval, and Insurance Coverage Decisions of Artificial Intelligence
    Seong Ho Park, Jaesoon Choi, Jeong-Sik Byeon
    Korean Journal of Radiology.2021; 22(3): 442.     CrossRef
  • Is it alright to use artificial intelligence in digital health? A systematic literature review on ethical considerations
    Nicholas RJ Möllmann, Milad Mirbabaie, Stefan Stieglitz
    Health Informatics Journal.2021; 27(4): 146045822110523.     CrossRef
  • Presenting machine learning model information to clinical end users with model facts labels
    Mark P. Sendak, Michael Gao, Nathan Brajer, Suresh Balu
    npj Digital Medicine.2020;[Epub]     CrossRef
  • Artificial intelligence with multi-functional machine learning platform development for better healthcare and precision medicine
    Zeeshan Ahmed, Khalid Mohamed, Saman Zeeshan, XinQi Dong
    Database.2020;[Epub]     CrossRef
  • The ethics of machine learning in medical sciences: Where do we stand today?
    Treena Basu, Sebastian Engel-Wolf, Olaf Menzer
    Indian Journal of Dermatology.2020; 65(5): 358.     CrossRef
  • Key principles of clinical validation, device approval, and insurance coverage decisions of artificial intelligence
    Seong Ho Park, Jaesoon Choi, Jeong-Sik Byeon
    Journal of the Korean Medical Association.2020; 63(11): 696.     CrossRef
  • Reflections as 2020 comes to an end: the editing and educational environment during the COVID-19 pandemic, the power of Scopus and Web of Science in scholarly publishing, journal statistics, and appreciation to reviewers and volunteers
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2020; 17: 44.     CrossRef
  • What should medical students know about artificial intelligence in medicine?
    Seong Ho Park, Kyung-Hyun Do, Sungwon Kim, Joo Hyun Park, Young-Suk Lim
    Journal of Educational Evaluation for Health Professions.2019; 16: 18.     CrossRef
