
Peer review at the beginning of the 21st century

Article information

Sci Ed. 2014;1(1):4-8
Publication date (electronic) : 2014 February 13
doi : https://doi.org/10.6087/kcse.2014.1.4
Editorial and Publishing Consultant, York, United Kingdom
Correspondence to Irene Hames   E-mail: irene.hames@gmail.com
This is a republication of the author’s book chapter, which was originally published as: Hames I. Peer review at the beginning of the 21st century. In: Smart P, Maisonneuve H, Polderman A, editors. EASE science editors’ handbook. 2nd ed. Cornwall: European Association of Science Editors; 2013. p. 133-6, with the author’s permission. The abstract has been added by the author for the republication.
Received 2013 October 3; Accepted 2013 November 11.

Abstract

Vigorous debate currently surrounds peer review, and polarized views are often expressed. Despite criticisms of the process, studies have found that it is still valued by researchers, with rigorous peer review rated by authors as the most important service they expect to receive when paying to have their papers published open access. The expectations of peer review and what it can achieve need, however, to be realistic. Peer review is also only as good and effective as the people managing the process, and the large variation in standards is one of the reasons some of the research and related communities have become critical of, and disillusioned with, the traditional model of peer review. The role of the editor is critical. All editors must act as proper editors, not just moving manuscripts automatically through the various stages but making critical judgements throughout the process to reach sound and unbiased editorial decisions. New models and innovations in peer review are appearing. Many issues, however, remain the same: rigorous procedures and high ethical standards should be in place; those responsible for making decisions and managing the process need to be trained for their roles and responsibilities; and systems need to adapt to new challenges, such as the increasing amounts of data that must be taken into account when assessing the validity and soundness of work and the conclusions drawn from it.

Introduction

Peer review is currently a topic of vigorous debate, probably more so than at any time since its origins, in the form we know it today, nearly 300 years ago. Views are often polarized: some people consider peer review to be ‘broken’ and want to abolish it completely, while others still view it as an important part of the scholarly publishing process. Two large international surveys [1-3] have found that although researchers value peer review considerably, there is a level of dissatisfaction; they generally want to see peer review improved, however, not replaced. Interestingly, the Taylor & Francis Open Access Survey (March 2013, with more than 14,000 respondents) [4] found that, of the services authors expect to receive when paying to have their papers published open access, rigorous peer review was rated the most important, ahead of both rapid publication and rapid peer review.

What do we mean by ‘peer review’? Put simply, ‘peer review in scholarly publishing is the process by which research output is subjected to scrutiny and critical assessment by individuals who are experts in those areas’ [5], traditionally taking place before publication. This can be achieved in a number of ways, but the basis of all of them is ‘scrutiny and critical assessment by experts’. The scale of the scholarly publishing enterprise is enormous, with around 28,100 active peer-reviewed journals publishing around 1.7 to 1.8 million articles annually [6]. For those published articles alone, around 4 million reviews have probably been carried out. But as a certain proportion of manuscripts will have been submitted to, and rejected from, one or more other journals before being accepted for publication, the true number is likely to be considerably greater. It has been estimated that, considering just 12,000 of the active peer-reviewed journals, around 15 million hours are spent annually reviewing manuscripts that are rejected [7].

Scholarly publishing is going through a period of dramatic change and facing considerable challenges. New publication models are being introduced, and new players are entering the field. Peer review is, in parallel, experiencing similar issues, undergoing both disruption and innovation. Support for open access journal publishing is growing, with many considering that it is no longer a question of ‘if’, but ‘when and how’. Despite its entry into the mainstream and its adoption as a requirement by some research funders for the publication of work they have funded (e.g., the Wellcome Trust and the Research Councils UK), a number of misconceptions remain, particularly that peer review in open access journals is in some way inferior to that in traditional subscription journals. Generalizations about peer-review quality can’t, however, be made on the basis of a journal’s access or business model. Publishing models with article processing charges have nevertheless presented opportunities for exploitation for profit by questionable journals and publishers who offer very little, if anything, in the way of peer review [8]. The widespread introduction of an indicator such as the Journal Transparency Index suggested by Marcus and Oransky [9] would help bring much-needed transparency and aid authors looking for reputable journals in which to publish. All journals should describe their editorial structure and peer-review processes, even if they don’t include all the elements suggested by Marcus and Oransky [9].

Criticisms of peer review have been around for a long time. These range from grumbles by individual researchers when they have bad experiences (and all will, inevitably, at some stage of their careers) to more widespread general complaints, for example that peer review is inconsistent and prone to bias, slow and expensive, open to abuse, and largely a lottery [10,11]. Peer review can at times ‘fail’ or become mired in a series of escalating problems even at the best-run journals, but it should be a prime aim of journals to keep their communities basically happy with the services they provide.

Sometimes the criticism is made that reviewers are ‘working for free’. Peer review is, however, a reciprocal process: authors and reviewers are largely the same community, so researchers benefit from expert reviews as well as providing them. Fairness in the system does, however, rely on everyone doing their fair share of reviewing. Editors can’t do much about this in the wider journal ecosystem, but they can ensure that, at their own journals, there is a good balance between the manuscripts individuals submit and those they review.

Realistic Expectations of Peer Review

Being labelled as ‘peer reviewed’ doesn’t mean that the work reported can be considered the absolute ‘truth’ and free of all errors. It means that the report has been looked at and critically assessed by appropriate experts, i.e., people with the relevant expertise and without any conflicting interests that might bias their assessment, hopefully to the best of their ability, and considered suitable for publication. Before publication, authors have usually been asked to address deficiencies, explain discrepancies and clarify any ambiguities, so papers (and the work behind them) get improved as a result. Peer review is, however, only as good and effective as the people managing the process.

Experienced and knowledgeable editors and editorial staff bring subtlety and sophistication to the endeavour, coupled with impartiality and common sense. Bad or inexperienced editors and staff can cause distress and anger, and bring the system into disrepute.

What, realistically, can we expect of peer review? Ideally, it should do the following [5,12]:

1. Prevent the publication of bad work, i.e., filter out studies that have been poorly conceived, designed or executed;

2. Check (as far as possible from the submitted material) that the research reported has been carried out well and that there are no flaws in the design or methodology;

3. Ensure that the work is reported correctly and unambiguously, complying with reporting guidelines where appropriate, with acknowledgement of the existing body of work and due credit given to the findings and ideas of others;

4. Ensure that the results presented have been interpreted correctly and all possible interpretations considered;

5. Ensure that the results are not too preliminary or too speculative, but at the same time not block innovative new research and theories;

6. Provide editors with evidence to make judgements as to whether articles meet the selection criteria for their particular publications, for example on the level of general interest, novelty or potential impact;

7. Provide authors with good-quality and constructive feedback;

8. Generally improve the quality and readability of articles;

9. Help maintain the integrity of the scholarly record.

It is not the role of journals to police research integrity or determine if misconduct has occurred, but editors do have a duty to look into all allegations or suspicions of misconduct. If they find grounds for these, they should refer cases to the individuals’ institutions for investigation. The Committee on Publication Ethics (COPE; http://www.publicationethics.org) provides guidance and resources for handling cases of suspected misconduct, including a set of flowcharts that cover many of the common situations editors come across.

Critical Role of the Editor

Editors play a critical role in the peer-review process and in the level of community satisfaction with that process. When they fall short of what is expected of a good editor, dissatisfaction results and complaints start to come in. Dissatisfaction may also be voiced in blog posts and on social media, along with specific details, perhaps even the reviewers’ reports and editorial correspondence. The scale and extent of the criticism can grow quickly as people find they are not alone in their criticisms and negative experiences. One of the most common criticisms is that some editors are not making the critical judgements needed on reviewers’ reports, leading to authors being asked to carry out unrealistic numbers of additional experiments as a condition of acceptance. This has been referred to as the ‘tyranny of reviewer experiments’ [13]. The following comment from a senior and well-respected researcher [14] summarizes well how some researchers feel:

‘Unfortunately, all too often editors relinquish their responsibilities and treat the peer review process as a vote, but this is a distortion of the real function of peer review, which should be to offer advice to the editor and the author... I do think the real problem is editors... Increasingly, one sees editors who don’t use any judgement at all, but just keep going back to reviewers until there is agreement.’

Being a good editor means doing more than just moving manuscripts automatically through peer review, and more than just ‘counting votes’. It also means not passing on to reviewers responsibilities that are the editor’s. Good editors and their editorial boards and staff screen submissions to make sure they are actually within the scope of the journal and that the standard of language is of sufficient quality for the work to be understood. Reviewers are right to complain and get frustrated when pre-review screening seems inadequate and they feel they are doing the editor’s job. Editors have to act as editors, making critical judgements based on the reviews and recommendations of the reviewers chosen to help them reach decisions on manuscripts (reviewers advise, editors make the decisions), and they must always be able to put forward the reasons behind their decisions.

Editors are, in the main, active researchers, applying for grants, submitting their own work for publication, and competing for jobs with others in their communities. They may also have ties with industry and other commercial bodies. It is therefore inevitable that they will sometimes find themselves in situations that may conflict with the responsibilities of their editorial role. It is essential that these are recognized, disclosed, and handled appropriately. Editors should not be involved in the handling of, or decision making on, any manuscripts where they have, or may be perceived to have, potentially conflicting interests. All manuscripts that editors submit to their own journals should be handled by another member of the editorial board, and all details of their handling and review should be kept confidential from them. The COPE code of conduct for journal editors [15] provides guidance on the minimum standards to which all COPE members are expected to adhere, and these are a useful summary of good practice for all editors.

Why Problems Arise

One of the reasons there are criticisms of peer review is that standards are very variable. The processes at some journals leave a lot to be desired, others have problems achieving consistency in decision making, and some have questionable practices. There are not only good editors and bad editors, but also inexperienced ones and those who may have been in position for a considerable time but who still don’t know what good and ethical editorial practice is. Surprisingly, especially considering the importance of publication records for researchers’ careers, many editors don’t receive any training before taking up their roles. They are thrust into them without being equipped for the responsibilities.

Peer review relies on trust and operates under the assumption that everyone is behaving honestly. Problems arise when questionable or unethical behaviour occurs and there is a breakdown in that trust. All the parties involved in peer review (authors, reviewers and editors) are open to misbehaviour along the whole spectrum, from questionable actions and bias through to what can be classified as misconduct. New practices come along that can surprise even the most editorially experienced individuals, for example the cases of ‘fake’ reviewers and ‘fake’ reviews that surfaced in 2012 [16] (and see the ‘faked emails’ category on the Retraction Watch blog, http://retractionwatch.wordpress.com/). The authors concerned suggested reviewers for their manuscripts who either didn’t exist (they were false identities, supplied with email addresses that were actually the authors’ own accounts or those of colleagues) or were real people whose names were paired with email accounts the authors had created and which had nothing to do with those people. They then returned reviews of their own manuscripts via these accounts. These cases were eventually found out and the papers retracted, accompanied by notes stating that the peer-review process for the articles had been compromised and inappropriately influenced by the corresponding authors, and that the findings and conclusions could therefore not be relied upon. What is of concern is that it became apparent over 2012 that a large number of cases were involved (at different journals, in different disciplines, and from different publishers), with 28 articles having to be retracted from one author alone [16].

How could this happen, and to this extent? One has to question whether the basic checks were done to confirm identity, contact details and reviewer suitability before reviewers were sent manuscripts to review. There were also suggestions that some journals had used only author-suggested reviewers, which shouldn’t generally happen. When cases like this are exposed, questions inevitably arise about the value and rigour of peer review, and confidence in it is dented. Partly in response to this sort of behaviour, COPE has produced the Ethical guidelines for peer reviewers [17,18], setting out ethical standards and guidance for reviewers. Besides providing guidance for reviewers, journals and editors, it is hoped the guidelines will be used as an educational resource in the training of researchers, who often come to the reviewer role without any guidance on peer review.

Conclusion

Despite criticisms about its failings, many feel that peer review (the opinion of experts) will always be important in assessing the outputs of research. The UK House of Commons Science and Technology Committee inquiry into peer review (paragraph 277 [19]) concluded that progress in science relies on being able to build on robust and accurate previous work, and that: ‘Peer review in scholarly publishing, in one form or another, is crucial to the reputation and reliability of scientific research’ (my italics). There are different ways to obtain expert opinion, and new models are exploring possibilities and bringing innovation to peer review [20] and to the dissemination and publication of research output (moving beyond just journal articles). Whatever the model, there is a need to ensure that standards are high, that editors (or those responsible for making decisions and managing the process) are trained and supported, and that researchers are educated in research integrity and publication ethics. Peer review is also facing new challenges as large amounts of data are generated and need to be reviewed, or at least viewed, alongside research reports. New standards and workflows are needed. Where to put data during review for confidential access is an issue, but there are organizations such as Dryad (http://datadryad.org/) where data can be made securely and confidentially available for peer review.

Notes

No potential conflict of interest relevant to this article was reported.

References

1. Ware M, Monkman M. Peer review in scholarly journals: perspective of the scholarly community: an international study [Internet]. Cambridge: Publishing Research Consortium; 2008 [cited 2013 May 18]. Available from: http://publishingresearch.net/index.php?view=download&alias=8-peer-review-full-prc-report&option=com_docman&Itemid=815.
2. Sense about Science. Peer review survey 2009: full report [Internet]. London: Sense about Science; 2009 [cited 2013 May 18]. Available from: http://www.senseaboutscience.org/data/files/Peer_Review/Peer_Review_Survey_Final_3.pdf.
3. Mulligan A, Hall L, Raphael E. Peer review in a changing world: an international study measuring the attitudes of researchers. J Am Soc Inf Sci Technol 2013;64:132–61. http://dx.doi.org/10.1002/asi.22798.
4. Taylor & Francis Group. Open Access Survey: exploring the views of Taylor & Francis and Routledge authors [Internet]. London: Taylor & Francis; 2013 [cited 2013 May 18]. Available from: http://www.tandf.co.uk/journals/pdf/open-access-survey-march2013.pdf.
5. Hames I. Peer review in a rapidly evolving publishing landscape. In: Campbell R, Pentz E, Borthwick I, eds. Academic and professional publishing. Cambridge: Chandos Publishing; 2012. p. 15–52.
6. Ware M, Mabe M. The STM report: an overview of scientific and scholarly journal publishing [Internet]. The Hague: International Association of Scientific, Technical and Medical Publishers; 2012 [cited 2013 May 18]. Available from: http://www.stm-assoc.org/2012_12_11_STM_Report_2012.pdf.
7. Rubriq Blog. How we found 15 million hours of lost time [Internet]. Durham: Rubriq Blog; 2013. [cited 2013 Jul 18]. Available from: http://blog.rubriq.com/2013/06/03/how-we-found-15-million-hours-of-lost-time/.
8. Beall J. Beall’s list: potential, possible, or probable predatory scholarly open-access publishers [Internet]. [place unknown]: Scholarly Open Access; 2012. [cited 2013 May 16]. Available from: http://scholarlyoa.com/publishers/.
9. Marcus A, Oransky I. Bring on the transparency index [Internet]. Midland, ON: The Scientist; 2012. [cited 2013 May 16]. Available from: http://www.the-scientist.com/?articles.view/articleNo/32427/title/Bring-On-the-Transparency-Index/.
10. Smith R. Peer review: a flawed process at the heart of science and journals. J R Soc Med 2006;99:178–82.
11. Smith R. Classical peer review: an empty gun. Breast Cancer Res 2010;12(Suppl 4):S13. http://dx.doi.org/10.1186/bcr2742.
12. Hames I. Peer review and manuscript management in scientific journals: guidelines for good practice. Malden, MA: Wiley-Blackwell; 2007.
13. Ploegh H. End the wasteful tyranny of reviewer experiments. Nature 2011;472:391. http://dx.doi.org/10.1038/472391a.
14. Bishop D. In defence of peer review [comment on: Smith R. Classical peer review: an empty gun. Breast Cancer Res 2010;12(Suppl 4):S13] [Internet]. Breast Cancer Research; 2011 [cited 2013 Nov 5]. Available from: http://breast-cancer-research.com/content/12/S4/S13/comments#455683.
15. Committee on Publication Ethics. Code of conduct and best practice guidelines for journal editors [Internet]. [place unknown]: Committee on Publication Ethics; 2011. [cited 2013 Jun 4]. Available from: http://publicationethics.org/files/Code_of_conduct_for_journal_editors_0.pdf.
16. Oransky I. Retraction count grows to 35 for scientist who faked emails to do his own peer review [Internet]. [place unknown]: Retraction Watch; 2012 [cited 2013 May 18]. Available from: http://retractionwatch.wordpress.com/2012/09/17/retraction-count-for-scientist-who-faked-emails-to-do-his-own-peer-review-grows-to-35/#more-9761.
17. Hames I; Committee on Publication Ethics. Ethical guidelines for peer reviewers [Internet]. [place unknown]: Committee on Publication Ethics; 2013 [cited 2013 May 18]. Available from: http://publicationethics.org/files/Ethical_guidelines_for_peer_reviewers_0.pdf.
18. Hames I. COPE’s new ethical guidelines for peer reviewers: background, issues, and evolution [Internet]. Mantua, NJ: International Society of Managing and Technical Editors; 2013. [cited 2013 May 17]. Available from: http://www.ismte.org/Shared_Articles-COPEs_new_Ethical_Guidelines_for_Peer_Reviewers_background_issues_and_evolution.
19. House of Commons, Science and Technology Committee. Peer review in scientific publications: eighth report of session 2010-12 [Internet]. London: The Stationery Office Limited; 2011 [cited 2013 May 18]. Available from: http://www.publications.parliament.uk/pa/cm201012/cmselect/cmsctech/856/856.pdf and http://www.publications.parliament.uk/pa/cm201012/cmselect/cmsctech/856/85602.htm.
20. Hames I. The changing face of peer review. In: Smart P, Maisonneuve H, Polderman A, eds. Science editors’ handbook. 2nd ed. Cornwall: European Association of Science Editors; 2013. p. 149–51.
