The Peer Review Congress was first held in Chicago in September 1989 and has been held every 4 years since. This year’s venue was again Chicago; the congress is usually held there because Chicago is home to the American Medical Association, which oversees the meeting. This year’s congress was co-hosted by JAMA, The BMJ, and the Meta-Research Innovation Center at Stanford (METRICS), and the theme was “Enhancing the quality and credibility of science.” Forty-five plenary session reports, 84 poster session reports, and 5 invited talks related to the theme were presented over the course of 3 days.
The plenary reports were distributed among 12 sessions, with 2 morning sessions and 2 afternoon sessions on each of the 3 days of the meeting. The moderator of each session introduced the presenters, and time for questions and answers was allocated after each presentation. The topics of the sessions were as follows: bias associated with conflicts of interest and peer review, bias in reporting and publication of research, integrity and misconduct, data sharing, quality of reporting, quality of the scientific literature, trial registration, funding/grant review, peer review innovations, editorial and peer review process innovations, prepublication and postpublication issues, and postpublication issues.
Each session’s title corresponded to the topics of the 3 or 4 plenary reports that it included, and each participant received a copy of the program book with abstracts. All abstracts are freely available and can be downloaded from the conference website (https://peerreviewcongress.org/program-information).
The title of the highly anticipated first session was “Bias associated with conflicts of interest and peer review.” Topics such as conflicts of interest, the gender of reviewers, and financial influences on the results of systematic reviews were covered in the session. One report sampled 1,650 papers indexed in MEDLINE in 2016 to determine whether they had properly reported conflicts of interest according to the recommendations of the International Committee of Medical Journal Editors; it turned out that only 1 in 5 papers had properly followed those recommendations. It was also found that 130 unstandardized phrases were used for disclosure, which underscores the need for an easier method of verifying whether the recommendations have been implemented. Another report examined 128,454 papers reviewed by Nature; the author separated the papers into single-blind and double-blind submissions and investigated the gender of the corresponding authors, the distribution of their countries, and the acceptance status. Yet another report looked into the gender of reviewers at 20 journals in the field of earth and environmental sciences, which annually review 24,000 papers and publish 6,000 of them. The percentage of female lead authors, corresponding authors, and editors was investigated, and it was concluded that females participated less than males. The focus on these issues was novel.
In the “Bias in reporting and publication of research” session, I encountered the term “spin.” “Spin” is synonymous with “overinterpretation” and refers to the misinterpretation of data in literature-based papers such as systematic reviews. Systematic reviews play an important role in developing guidelines and policies based on the available evidence, and if either the data or their interpretation is flawed, the resulting recommendation is likely to have a negative impact on patients. One report indicated that spin was commonly found in the abstracts, results, and conclusions of clinical research studies evaluating the efficacy of biological markers of ovarian cancer. It follows that a strategy is needed to control spin and curb the influence of exaggerated reports. The report “Bias associated with publication of interim results of randomized trials: a systematic review” examined papers published over the course of 10 years whose titles or abstracts contained terms such as ‘interim,’ ‘not mature,’ and ‘immature.’ One of the study’s suggestions for reducing bias was to publish only the final results and to refrain from publishing interim results before the research is complete.
Especially interesting to me was the session on “Integrity and misconduct.” One report investigated cases where papers included in a meta-analysis or systematic review were retracted after the meta-analysis was published. The retracted papers should not have been included and could have affected the results of the meta-analysis or systematic review. The author found 3,834 retracted papers on Web of Science and investigated the meta-analyses and systematic reviews that cited those papers. It turned out that 17 meta-analyses included retracted papers in their analyses, and the author remarked that he was planning a follow-up study on how the exclusion of retracted papers would influence the effect estimates. Additionally, the American Physiological Society presented its quality control efforts regarding the digital images in papers, describing how unclear or improper images are edited out during the editing process. The society evaluated the images of photographs, western blots, and DNA/RNA gels in papers published from 2009 to 2016, and emphasized that its integrity-oriented quality control processes effectively decreased the number of corrigenda.
In the “Data sharing” session, there was a report on authors’ intentions and experiences regarding data sharing. The report analyzed 3 journals: PLOS Medicine required all authors who submitted papers reporting clinical research to share their data, while The BMJ and Annals of Internal Medicine had a policy requiring authors to state whether they were willing to share their data. When authors affirmed their willingness to share, The BMJ and Annals of Internal Medicine contacted them 1 year after publication to inquire about their plans for sharing, their receipt of sharing requests, and their efforts to respond to those requests. It was found that when authors received a request indicating that the requester intended to publish a similar analysis, the authors’ willingness to share decreased noticeably.
As its name suggests, the Peer Review Congress featured discussions of various issues regarding peer review. In the “Peer review innovations” session, a report examined 1 year of papers submitted to the journals published by the Institute of Physics and surveyed the authors’ satisfaction and opinions regarding double-blind peer review. The survey also explored differences among scientific fields, which made me wonder whether expanding this idea could lead to the adoption of more efficient or preferable methods in each field. Although double-blind peer review has hitherto been the default form, the report’s investigation of open review, and of which form of peer review (double-blind, single-blind, or open) would most increase the likelihood of reviewers agreeing to review a paper, hinted at the growing interest in and necessity of open review. Another report concerned ‘ALiEM,’ an educational blog for emergency medicine that peer-reviews its posts. On the assumption that each post’s peer review status and text arrangement would affect usage patterns, the report analyzed metrics such as visitors’ reading time and bounce rate through Google Analytics. Although the results were not statistically significant, this approach to the open peer review process and content presentation was quite novel.
Lunch at the Peer Review Congress was not a buffet, enabling the participants at each table to have casual conversations. Although 1 hour was allotted to the poster sessions in the afternoon, the poster room was just in front of the dining hall, so people could also browse in the meantime. People were more interested in the posters than at any other conference I have attended (Fig. 1). A total of 84 posters were displayed over the course of 2 days, and their sheer number made it hard to take a close look or to have in-depth conversations with the presenters, despite the separate time provided for viewing them. On September 12, I presented a poster entitled “Post-retraction citations in Korean medical journals,” a summary of a paper that I had co-authored with Sun Huh, Soo Young Kim, and Hye-Min Cho. Many people stopped by to express their interest, most notably participants from the National Library of Medicine. Although I had not prepared A4-size handouts, many participants later requested the content by email and expressed their interest in the topic. Their strong interest was probably because no other posters at this year’s Peer Review Congress dealt with retraction; I could see that many participants were taking the issue of retraction seriously (Fig. 2).
The conference featured exceptionally diverse topics and presentations. It was easy to recognize the necessity and purpose of each presentation, but it was astonishing how the presenters managed to think of such a range of viewpoints. This underscored to me, once again, that they are ahead of us in the field of academic publishing. Above all, the size of the conference, which more than 500 people attended, was surprising. Participants were asked to indicate whether they had attended any of the previous 7 congresses, and it turned out that many were first-time participants. Four people, including myself, attended from Korea; it was delightful to see Prof. Sung-Tae Hong (Seoul National University College of Medicine, editor-in-chief of Journal of Korean Medical Science), Prof. Se-Jeong Oh (The Catholic University of Korea College of Medicine, former editor-in-chief of Journal of Breast Cancer), and Prof. Pan Dong Ryu (Seoul National University College of Medicine, editor-in-chief of Journal of Veterinary Science) at the conference.
Many presenters at the plenary sessions were professors, but there were also many editors from publishing companies and some relatively young graduate students. Every presentation was directly followed by a question-and-answer session; whereas only a few questions are usually asked at Korean conferences, at the Peer Review Congress many participants could frequently be seen lining up behind microphones to ask questions. It was refreshing to see so many people at each presentation freely asking questions and expressing their ideas in such a forthright and natural atmosphere.
In the closing session, a slideshow was played in dedication to Dr. Drummond Rennie, the chair of the Peer Review Congress, who has served JAMA for decades. All the participants stood to express their gratitude for his hard work, which reminded me that an editor’s service and devotion are essential to an academic journal’s development. With the belief that all these efforts lead to the further development of academic publishing, I resolve to pay continued attention to issues related to peer review and academic publishing.
Notes
No potential conflict of interest relevant to this article was reported.
Acknowledgements
This work was supported by a travel grant from the Korean Council of Science Editors (2017).
Fig. 1. Participants browsing the posters presented at the Eighth International Congress on Peer Review and Scientific Publication.
Fig. 2. Participants taking a close look at my poster.