Science Editing > Epub ahead of print
Kim: An algorithm for the selection of reporting guidelines

Abstract

A reporting guideline can be defined as “a checklist, flow diagram, or structured text to guide authors in reporting a specific type of research, developed using explicit methodology.” A reporting guideline outlines the bare minimum of information that must be presented in a research report in order to provide a transparent and understandable explanation of what was done and what was discovered. Many reporting guidelines have been developed, and it has become important to select the most appropriate reporting guideline for a manuscript. Herein, I propose an algorithm for the selection of reporting guidelines. This algorithm was developed based on the research design classification system and the content presented for major reporting guidelines through the EQUATOR (Enhancing the Quality and Transparency of Health Research) network. This algorithm asks 10 questions: “is it a protocol,” “is it secondary research,” “is it an in vivo animal study,” “is it qualitative research,” “is it economic evaluation research,” “is it a diagnostic accuracy study or prognostic research,” “is it quality improvement research,” “is it a non-comparative study,” “is it a comparative study between groups,” and “is it an experimental study?” According to the responses, 16 appropriate reporting guidelines are suggested. Using this algorithm will make it possible to select reporting guidelines rationally and transparently.

Introduction

The IMRAD (introduction, methods, results, and discussion) format is the most commonly used document structure for scientific articles. In the introduction, the rationale and purpose of the study are usually reported. In the methods section, the time, place, process, materials, and participants of the study are described. The answer to the research question and the meaning and impact of the results are reported in the results and discussion sections. In other words, scientific papers should include an appropriate report of the purpose, as well as information about the validity, usefulness, and meaning of the research [1]. However, improper reporting (underreporting, misreporting, and selective reporting) occurs in many actual papers, lowering the validity of the research [2].
The purpose of reporting guidelines is to reduce these problems, and a reporting guideline can be defined as “a checklist, flow diagram, or structured text to guide authors in reporting a specific type of research, developed using explicit methodology” [3]. Reporting guidelines were actively developed after the publication of CONSORT (Consolidated Standards of Reporting Trials), a reporting guideline for randomized controlled trials (RCTs), and there are now more than 500 reporting guidelines that can be used by research authors in the medical field. Almost all reporting guidelines are searchable and available through the EQUATOR (Enhancing the Quality and Transparency of Health Research; http://www.equator-network.org) network.
Researchers are the main users of the reporting guidelines, which can be utilized when writing manuscripts and protocols. Numerous reporting guidelines have been developed, and it has become important to select the most appropriate reporting guideline for a manuscript to be reviewed. However, a system that recommends appropriate reporting guidelines through tools such as algorithms is not yet available. The purpose of this study is to suggest an algorithm for selecting reporting guidelines.

Background for an Algorithm for the Selection of a Reporting Guideline

The currently developed reporting guidelines do not apply to all scientific studies. The EQUATOR network specifies the scope of reporting guidelines as health research. However, a classification of the developed reporting guidelines indicates that the actual scope comprises human subject research and in vivo animal experiments. These reporting guidelines can also be applied to studies without human subjects, as long as they are conducted using the same methodology as human subject research. For example, a study of the data-sharing policies of academic journals could be reported using the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) guideline for observational research, even though it is not a human subject study, since it can be viewed as a cross-sectional study. In the future, the scope of reporting guidelines may be expanded to other scientific fields.
Since the reporting guidelines developed to date deal with human subject research and in vivo animal experiments, an algorithm for selecting appropriate reporting guidelines can be suggested through a few questions.

Questions in the Algorithm for the Selection of Reporting Guidelines

Preliminary consideration

As mentioned above, since the reporting guidelines are limited to human studies and in vivo animal studies, it is necessary to review whether the research design of the manuscript under consideration corresponds to a human study or an in vivo animal study. If the answer to this question is “no,” then no reporting guidelines have been developed to date. If the answer to the preliminary consideration is “yes,” then an appropriate reporting guideline can be selected through the questions below.
  • Is it a protocol?
  • Is it secondary research?
  • Is it an in vivo animal study?
  • Is it qualitative research?
  • Is it economic evaluation research?
  • Is it a diagnostic accuracy study or prognostic research?
  • Is it quality improvement research?
  • Is it a non-comparative study?
  • Is it a comparative study between groups?
  • Is it an experimental study?

Is it a protocol?

In health research, a protocol is a written research plan. In the medical field, protocols are mainly prepared when conducting clinical trials or systematic reviews. The main reporting guideline for clinical trials is SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) [4], and the main reporting guideline for systematic literature reviews is PRISMA-P (Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols) [5].

Is it secondary research?

Research can be divided into primary and secondary research. Primary research is a research approach that directly collects data, whereas secondary research relies on existing data when conducting systematic investigations [6]. The main types of secondary research conducted in the medical field are systematic literature reviews and clinical practice guidelines. The reporting guidelines suitable for systematic literature reviews are based on the PRISMA guideline [7]. Various extensions exist for PRISMA, including PRISMA-DTA (PRISMA for Diagnostic Test Accuracy) [8], PRISMA-ScR (PRISMA Extension for Scoping Reviews) [9], and PRISMA-S (PRISMA Extension for Reporting Literature Searches in Systematic Reviews) [10]. The reporting guidelines suitable for clinical practice guidelines are AGREE (Appraisal of Guidelines for Research and Evaluation) [11] and RIGHT (Reporting Tool for Practice Guidelines in Health Care) [12].

Is it an in vivo animal study?

Animal studies include in vitro studies and in vivo studies. The term in vitro, which means "in glass" in Latin, describes diagnostic procedures, scientific tests, and experiments that researchers carry out outside a living organism. An in vitro experiment takes place in a controlled setting, such as a test tube or Petri dish. The Latin term in vivo means "within the living." It describes procedures, tests, and examinations that scientists carry out in or on a complete living organism, such as a human or laboratory animal [13]. In general, there are no appropriate reporting guidelines for in vitro studies, while the ARRIVE (Animal Research: Reporting of In Vivo Experiments) reporting guideline exists for in vivo studies [14].

Is it qualitative research?

Quantitative research deals with numbers and statistics when gathering and analyzing data, whereas qualitative research deals with words and meanings. The results of qualitative research are written to aid in understanding ideas, experiences, or concepts. A researcher can gain comprehensive knowledge on poorly understood subjects through this type of research. Common qualitative techniques include open-ended questions in interviews, written descriptions of observations, and literature reviews that examine concepts and theories [15]. Two major reporting guidelines for qualitative research are SRQR (Standards for Reporting Qualitative Research) [16] and COREQ (Consolidated Criteria for Reporting Qualitative Research) [17].

Is it economic evaluation research?

Economic evaluation research can be defined as “the process of systematic identification, measurement and valuation of the inputs and outcomes of two alternative activities, and the subsequent comparative analysis of these” [18]. For economic evaluation studies, the most appropriate reporting guideline is the CHEERS 2022 (Consolidated Health Economic Evaluation Reporting Standards 2022) [19].

Is it a diagnostic accuracy study or prognostic research?

A diagnostic test accuracy study provides evidence on how well a test correctly identifies or rules out disease, informing subsequent treatment decisions for clinicians, their patients, and the healthcare providers who interpret diagnostic accuracy studies for patient care [20]. The reporting guideline used to report research on diagnostic accuracy is STARD (Standards for Reporting Diagnostic Accuracy Studies) [21].
In general, prognosis-related papers should be reported according to the observational study reporting guideline (STROBE), but in the case of a prognostic prediction model, it should be reported according to TRIPOD (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis) [22].

Is it quality improvement research?

Quality improvement is the framework used to systematically improve care. To reduce variation, achieve predictable results, and improve outcomes for patients, healthcare systems, and organizations, quality improvement aims to standardize processes and structure [23]. For quality improvement studies, the most appropriate reporting guideline is SQUIRE (Standards for Quality Improvement Reporting Excellence) [24].
If the answer to question 7 is “no,” the study design is an interventional study. An interventional research design is classified according to the questions of DAMI (Design Algorithm for Medical Literature on Intervention) [25].

Is it a non-comparative study?

According to the DAMI tool, the first question to ask is whether a study is analytical or descriptive. DAMI asks this question: “Were the primary outcomes compared according to intervention/exposure or the existence of a disease?” [25]. If the answer is “no,” the study is descriptive, and the corresponding research designs are case reports and case series. Case reports and case series are generally classified according to the number of reported cases, and studies reporting three or more cases are classified as case series [26].
For case reports, the representative reporting guideline is the CARE (Case Report Guidelines) [27]. There are no leading reporting guidelines for case series. However, since case series are mainly published in surgical journals, it is possible for them to use reporting guidelines developed for various surgical fields, including general surgery (the PROCESS [Preferred Reporting of Case Series in Surgery] guideline) [28], as well as case group study reporting guidelines in the field of plastic surgery [29].

Is it a comparative study between groups?

Comparative studies can be divided into within-group and between-group comparative studies. A within-group comparison refers to repeated measurements of the primary outcome among the same individuals or group at different time points [25]. DAMI’s question on this is, “Were the primary outcomes of different groups compared?” Research designs that correspond to a “no” response to this question include before-after studies and interrupted time series research. Currently, there are no clear reporting guidelines for within-group comparative studies.

Is it an experimental study?

If the investigators determined study participants’ exposure to interventions, then the study is classified as an experimental study. In such studies, investigators directly control the intervention time, process, and administration. If the study participants are exposed to specific interventions without the direct control of investigators, then the study is classified as observational [25]. DAMI’s question on this is, “Did the investigators allocate study participants to each group?” If the answer to this question is “yes,” the study is classified as experimental, and if the assignment is randomized (“Was the group randomized?”), it is classified as an RCT. For RCTs, the most commonly used reporting guideline is the CONSORT 2010 Statement [30]. There are 33 extensions of CONSORT. Widely known and used examples include the reporting guidelines for clinical trials related to COVID-19 (the CONSERVE 2021 Statement) [31], RCTs related to artificial intelligence (the CONSORT-AI Extension) [32], and RCTs conducted using cohorts and routinely collected data (CONSORT-ROUTINE) [33].
There is no reporting guideline for nonrandomized clinical trials in general, although the TREND (Transparent Reporting of Evaluations with Nonrandomized Designs) statement exists for use in behavioral and public health intervention clinical trials [34]. For observational studies, the STROBE statement should be used as a reporting guideline [35]. When using STROBE, an appropriate sub-checklist should be used in accordance with the representative observational study design (cohort studies, case-control studies, and cross-sectional studies). The DAMI question corresponding to cross-sectional studies is, “Were the data for exposure to the intervention and for primary outcomes collected concurrently?” The question that distinguishes cohort studies from case-control studies is, “Was each group organized on the basis of exposure to the intervention?”

Algorithm for Selecting Reporting Guidelines

Based on the answers to the above questions, an algorithm was constructed, as shown in Fig. 1. This algorithm should only be used for studies of human subjects or in vivo animal studies.
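The sequential decision flow described above can be sketched in code. The following is an illustrative sketch only, not part of the published algorithm or any software tool: the function name, dictionary keys, and return strings are hypothetical labels chosen to mirror the ten questions and the guidelines suggested in the text.

```python
# Illustrative sketch of the reporting-guideline selection algorithm
# described in the text. All identifiers are hypothetical; the question
# order and suggested guidelines follow the sections above.

def select_guideline(answers):
    """Suggest reporting guideline(s) given yes/no answers.

    `answers` maps question labels to booleans; missing keys default
    to False (i.e., "no").
    """
    # Preliminary consideration: human subject or in vivo animal study?
    if not answers.get("human_or_in_vivo_animal", True):
        return "No reporting guideline developed to date"
    if answers.get("protocol"):
        return "SPIRIT (clinical trial) or PRISMA-P (systematic review)"
    if answers.get("secondary_research"):
        return "PRISMA and extensions (systematic review) or AGREE/RIGHT (CPG)"
    if answers.get("in_vivo_animal_study"):
        return "ARRIVE"
    if answers.get("qualitative_research"):
        return "SRQR or COREQ"
    if answers.get("economic_evaluation"):
        return "CHEERS 2022"
    if answers.get("diagnostic_or_prognostic"):
        return "STARD (diagnostic accuracy) or TRIPOD (prediction model)"
    if answers.get("quality_improvement"):
        return "SQUIRE"
    # Remaining designs are classified with the DAMI questions.
    if answers.get("non_comparative"):
        return "CARE (case report) or PROCESS (case series)"
    if not answers.get("between_group_comparison"):
        return "No clear guideline (within-group designs)"
    if answers.get("experimental"):
        return "CONSORT 2010 (randomized) or TREND (nonrandomized)"
    return "STROBE (cohort, case-control, or cross-sectional)"
```

For example, a manuscript that is neither a protocol nor secondary research but reports a quality improvement project would reach the SQUIRE branch, while a between-group observational comparison falls through to STROBE.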

Conclusion

Peer reviewers, authors, and journals frequently use reporting guidelines. Reporting guidelines raise the standard of research that is published in biomedical journals. To make the best possible use of reporting guidelines, it is necessary to select the appropriate reporting guideline for a given study. The algorithm for selecting reporting guidelines presented in this paper will be helpful for this purpose. If the research designs and scope of research to which reporting guidelines are applied are expanded, the algorithm will also need to be updated.
Users must also take care to ensure that the numerous newly developed reporting guidelines are created with the same level of scrutiny and rigor as more established guidelines, and that their use leads to meaningful improvements in reporting.

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Funding

The author received no financial support for this article.

Fig. 1. Algorithm for the selection of reporting guidelines. RCT, randomized controlled trial; SPIRIT, Standard Protocol Items: Recommendations for Interventional Trials; PRISMA-P, Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols; PRISMA, Preferred Reporting Items for Systematic Review and Meta-Analysis; CPG, clinical practice guideline; AGREE, Appraisal of Guidelines for Research and Evaluation; RIGHT, Reporting Tool for Practice Guidelines in Health Care; ARRIVE, Animal Research: Reporting of In Vivo Experiments; SRQR, Standards for Reporting Qualitative Research; COREQ, Consolidated Criteria for Reporting Qualitative Research; CHEERS, Consolidated Health Economic Evaluation Reporting Standards; DTA, diagnostic test accuracy; STARD, Standards for Reporting Diagnostic Accuracy Studies; TRIPOD, Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis; SQUIRE, Standards for Quality Improvement Reporting Excellence; CARE, Case Report Guidelines; CONSORT, Consolidated Standards of Reporting Trials; NRT, nonrandomized trial; TREND, Transparent Reporting of Evaluations with Nonrandomized Designs; STROBE, Strengthening the Reporting of Observational Studies in Epidemiology.

References

1. Mateu Arrom L, Huguet J, Errando C, Breda A, Palou J. How to write an original article. Actas Urol Esp (Engl Ed) 2018;42:545-50. https://doi.org/10.1016/j.acuro.2018.02.011

2. O’Leary JD, Crawford MW. Review article: reporting guidelines in the biomedical literature. Can J Anaesth 2013;60:813-21. https://doi.org/10.1007/s12630-013-9973-z

3. Kim SY. Reporting guidelines. Korean J Fam Med 2009;30:62. https://doi.org/10.4082/kjfm.2009.30.1.62

4. Chan AW, Tetzlaff JM, Altman DG, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials. Ann Intern Med 2013;158:200-7. https://doi.org/10.7326/0003-4819-158-3-201302050-00583

5. Moher D, Shamseer L, Clarke M, et al. Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 statement. Syst Rev 2015;4:1. https://doi.org/10.1186/2046-4053-4-1

6. Formplus Blog. Primary vs secondary research methods: 15 key differences [Internet]. Formplus Blog. 2022 [cited 2022 Sep 30]. Available from: https://www.formpl.us/blog/primary-secondary-research


7. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. PLoS Med 2021;18:e1003583. https://doi.org/10.1371/journal.pmed.1003583

8. McInnes MD, Moher D, Thombs BD, et al. Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies: the PRISMA-DTA statement. JAMA 2018;319:388-96. https://doi.org/10.1001/jama.2017.19163

9. Tricco AC, Lillie E, Zarin W, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 2018;169:467-73. https://doi.org/10.7326/M18-0850

10. Rethlefsen ML, Kirtley S, Waffenschmidt S, et al. PRISMA-S: an extension to the PRISMA Statement for Reporting Literature Searches in Systematic Reviews. Syst Rev 2021;10:39. https://doi.org/10.1186/s13643-020-01542-z

11. Brouwers MC, Kerkvliet K, Spithoff K; AGREE Next Steps Consortium. The AGREE Reporting Checklist: a tool to improve reporting of clinical practice guidelines. BMJ 2016;352:i1152. https://doi.org/10.1136/bmj.i1152

12. Chen Y, Yang K, Marusic A, et al. A reporting tool for practice guidelines in health care: the RIGHT statement. Ann Intern Med 2017;166:128-32. https://doi.org/10.7326/M16-1565

13. Medical News Today. What is the difference between in vivo and in vitro? [Internet]. Brighton, UK: Healthline Media UK Ltd; 2022 [cited 2022 Oct 3]. Available from: https://www.medicalnewstoday.com/articles/in-vivo-vsin-vitro


14. Percie du Sert N, Hurst V, Ahluwalia A, et al. The ARRIVE guidelines 2.0: updated guidelines for reporting animal research. PLoS Biol 2020;18:e3000410. https://doi.org/10.1371/journal.pbio.3000410

15. Streefkerk R. Qualitative vs. quantitative research: differences, examples & methods [Internet]. Scribbr; 2022 [cited 2022 Oct 1]. Available from: https://www.scribbr.com/methodology/qualitative-quantitative-research/


16. O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med 2014;89:1245-51. https://doi.org/10.1097/ACM.0000000000000388

17. Tong A, Sainsbury P, Craig J. Consolidated Criteria for Reporting Qualitative Research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care 2007;19:349-57. https://doi.org/10.1093/intqhc/mzm042

18. Wikipedia. Economic evaluation [Internet]. Wikipedia. 2017 [cited 2022 Oct 1]. Available from: https://en.wikipedia.org/wiki/Economic_evaluation


19. Husereau D, Drummond M, Augustovski F, et al. Consolidated Health Economic Evaluation Reporting Standards 2022 (CHEERS 2022) statement: updated reporting guidance for health economic evaluations. Value Health 2022;25:3-9. https://doi.org/10.1016/j.jval.2021.11.1351

20. Mallett S, Halligan S, Thompson M, Collins GS, Altman DG. Interpreting diagnostic accuracy studies for patient care. BMJ 2012;345:e3999. https://doi.org/10.1136/bmj.e3999

21. Cohen JF, Korevaar DA, Altman DG, et al. STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration. BMJ Open 2016;6:e012799. https://doi.org/10.1136/bmjopen-2016-012799

22. Collins GS, Reitsma JB, Altman DG, Moons KG. Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): the TRIPOD statement. Ann Intern Med 2015;162:55-63. https://doi.org/10.7326/M14-0697

23. Centers for Medicare & Medicaid Services (CMS). Quality measurement and quality improvement [Internet]. Baltimore, MD: CMS; 2021 [cited 2022 Oct 1]. Available from: https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/MMS/Quality-Measureand-Quality-Improvement-


24. Ogrinc G, Davies L, Goodman D, Batalden P, Davidoff F, Stevens D. SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf 2016;25:986-92. https://doi.org/10.1136/bmjqs-2015-004411

25. Seo HJ, Kim SY, Lee YJ, et al. A newly developed tool for classifying study designs in systematic reviews of interventions and exposures showed substantial reliability and validity. J Clin Epidemiol 2016;70:200-5. https://doi.org/10.1016/j.jclinepi.2015.09.013

26. Boston University Medical Campus and Boston Medical Center: Institutional Review Board. Case reports and case series [Internet]. Boston, MA: Boston University; [cited 2022 Oct 30]. Available from: https://www.bumc.bu.edu/irb/submission-requirements/special-submission-requirements/case-reports-and-case-series/


27. Gagnier JJ, Kienle G, Altman DG, et al. The CARE guidelines: consensus-based clinical case report guideline development. J Clin Epidemiol 2014;67:46-51. https://doi.org/10.1016/j.jclinepi.2013.08.003

28. Agha RA, Sohrabi C, Mathew G, et al. The PROCESS 2020 Guideline: updating consensus Preferred Reporting of CasE Series in Surgery (PROCESS) Guidelines. Int J Surg 2020;84:231-5. https://doi.org/10.1016/j.ijsu.2020.11.005

29. Coroneos CJ, Ignacy TA, Thoma A. Designing and reporting case series in plastic surgery. Plast Reconstr Surg 2011;128:361e-8e. https://doi.org/10.1097/PRS.0b013e318221f2ec

30. Schulz KF, Altman DG, Moher D; CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomized trials. Ann Intern Med 2010;152:726-32. https://doi.org/10.7326/0003-4819-152-11-201006010-00232

31. Orkin AM, Gill PJ, Ghersi D, et al. Guidelines for reporting trial protocols and completed trials modified due to the COVID-19 pandemic and other extenuating circumstances: the CONSERVE 2021 statement. JAMA 2021;326:257-65. https://doi.org/10.1001/jama.2021.9941

32. Liu X, Rivera SC, Moher D, Calvert MJ, Denniston AK; SPIRIT-AI and CONSORT-AI Working Group. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI Extension. BMJ 2020;370:m3164. https://doi.org/10.1136/bmj.m3164

33. Kwakkenbos L, Imran M, McCall SJ, et al. CONSORT extension for the reporting of randomised controlled trials conducted using cohorts and routinely collected data (CONSORT-ROUTINE): checklist with explanation and elaboration. BMJ 2021;373:n857. https://doi.org/10.1136/bmj.n857

34. Des Jarlais DC, Lyles C, Crepaz N; TREND Group. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. Am J Public Health 2004;94:361-6. https://doi.org/10.2105/ajph.94.3.361

35. von Elm E, Altman DG, Egger M, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Ann Intern Med 2007;147:573-7. https://doi.org/10.7326/0003-4819-147-8-200710160-00010
