
Science Editing



Peer review at the beginning of the 21st century
Irene Hames
Science Editing 2014;1(1):4-8.
Published online: February 13, 2014

Editorial and Publishing Consultant, York, United Kingdom

Correspondence to Irene Hames   E-mail:
This is a republication of author’s book chapter which was originally published as ‘Hames I. Peer review at the beginning of the 21st century. In: Smart P, Maisonneuve H, Polderman A, editors. EASE science editors’ handbook. 2nd ed. Cornwall: European Association of Science Editors; 2013. p. 133-6, with the author's permission. The abstract has been added by the author for the republication.
• Received: October 3, 2013   • Accepted: November 11, 2013

Copyright © the Author

This is an open access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Vigorous debate currently surrounds peer review, and polarized views are often expressed. Despite criticisms of the process, studies have found that it is still valued by researchers, with rigorous peer review being rated by authors as the most important service they expect to receive when paying to have their papers published open access. The expectations of peer review, and of what it can achieve, need, however, to be realistic. Peer review is also only as good and effective as the people managing the process, and the large variation in standards that exists is one of the reasons some of the research and related communities have become critical of, and disillusioned with, the traditional model of peer review. The role of the editor is critical. All editors must act as proper editors, not just moving manuscripts automatically through the various stages, but making critical judgements throughout the process to reach sound and unbiased editorial decisions. New models and innovations in peer review are appearing. Many issues, however, remain the same: rigorous procedures and high ethical standards should be in place; those responsible for making decisions and managing the process need to be trained for their roles and responsibilities; and systems need to be adapted to deal with new challenges, such as the increasing amounts of data being generated, which must be taken into account when assessing the validity and soundness of work and the conclusions being drawn.
Peer review is currently a topic of vigorous debate, probably more so than at any time since its origins, in the form we know it today, nearly 300 years ago. Views are often polarized, with some people considering peer review to be ‘broken’ and wanting to abolish it completely, while others still view it as an important part of the scholarly publishing process. Two large international surveys [1-3] have found that although researchers value peer review considerably, there is a level of dissatisfaction; however, they generally want to see peer review improved, not replaced. Interestingly, the Taylor & Francis Open Access Survey (March 2013, with more than 14,000 respondents) [4] found that rigorous peer review was the service authors rated most important when asked about the services they expect to receive when paying to have their papers published open access. It was rated as more important than either rapid publication or rapid peer review.
What do we mean by ‘peer review’? Put simply, ‘peer review in scholarly publishing is the process by which research output is subjected to scrutiny and critical assessment by individuals who are experts in those areas’ [5], traditionally taking place before publication. This can be achieved in a number of ways, but the basis of all is ‘scrutiny and critical assessment by experts’. The scale of the scholarly publishing enterprise is enormous, with around 28,100 active peer-reviewed journals publishing around 1.7 to 1.8 million articles annually [6]. For those published articles alone, there have probably been around 4 million reviews done. But as a certain proportion will have been submitted to and rejected from one or more other journals before being accepted for publication, the true number is likely to be considerably greater. It has been estimated that, considering just 12,000 of the active peer-reviewed journals, around 15 million hours annually are spent reviewing manuscripts that are rejected [7].
Scholarly publishing is going through a period of dramatic change and facing considerable challenges. New publication models are being introduced, and new players are entering the field. Peer review is, in parallel, experiencing similar issues, undergoing both disruption and innovation. Support for open access journal publishing is growing, with many considering that it is no longer a question of ‘if’, but ‘when and how’. Despite its entry into the mainstream and adoption as a requirement by some research funders for the publication of work they have funded (e.g., the Wellcome Trust and the Research Councils UK), a number of misconceptions remain, particularly that peer review in open access journals is in some way inferior to that in traditional subscription journals. Generalizations about peer-review quality and access/business models can’t be made. Publishing models with article processing charges have, however, presented opportunities for exploitation for profit by questionable journals and publishers who offer very little, if anything, in the way of peer review [8]. The widespread introduction of an indicator such as the Journal Transparency Index suggested by Marcus and Oransky [9] would help bring much-needed transparency, and aid authors who are looking for reputable journals in which to publish. All journals should describe their editorial structure and peer-review processes, even if they don’t include all the things suggested by Marcus and Oransky [9].
Criticisms of peer review have been around for a long time. These range from grumbles by individual researchers when they have bad experiences (and all will, inevitably, at some stage of their careers) to more widespread general complaints, for example that peer review is inconsistent and prone to bias, slow and expensive, open to abuse, and largely a lottery [10,11]. Peer review can at times ‘fail’ or get mired in a series of escalating problems in even the best-run journals, but it should be a prime aim of journals to have their communities basically happy with the services they provide.
Sometimes the criticism is made that reviewers are ‘working for free’. Peer review is, however, a reciprocal process, as authors and reviewers are mostly the same community, and so researchers benefit from expert reviews as well as provide them. Fairness in the system does, however, rely on everyone doing their fair share of reviewing. Editors can’t do much about this in the wider journal ecosystem, but they can ensure that at their own journals there is a good balance between submitting and reviewing manuscripts.
Being labelled as ‘peer reviewed’ doesn’t mean that the work reported can be considered the absolute ‘truth’ and free of all errors. It means that the report has been looked at and critically assessed by appropriate experts, i.e., people with the relevant expertise and without any conflicting interests that might bias their assessment, hopefully to the best of their ability, and considered suitable for publication. Before publication, authors have usually been asked to address deficiencies, explain discrepancies and clarify any ambiguities, so papers (and the work behind them) get improved as a result. Peer review is, however, only as good and effective as the people managing the process.
Experienced and knowledgeable editors and editorial staff bring subtlety and sophistication to the endeavour, coupled with impartiality and common sense. Bad or inexperienced editors and staff can cause distress and anger, and bring the system into disrepute.
What, realistically, can we expect of peer review? Ideally, it should provide the following [5,12]:
  • 1. Prevent the publication of bad work: filter out studies that have been poorly conceived, designed or executed;

  • 2. Check (as far as possible from the submitted material) that the research reported has been carried out well and there are no flaws in the design or methodology;

  • 3. Ensure that the work is reported correctly and unambiguously, complying with reporting guidelines where appropriate, with acknowledgement to the existing body of work and due credit given to the findings and ideas of others;

  • 4. Ensure that the results presented have been interpreted correctly and all possible interpretations considered;

  • 5. Ensure that the results are not too preliminary or too speculative, but at the same time not block innovative new research and theories;

  • 6. Provide editors with evidence to make judgements as to whether articles meet the selection criteria for their particular publications, for example on the level of general interest, novelty or potential impact;

  • 7. Provide authors with high-quality, constructive feedback;

  • 8. Generally improve the quality and readability of articles;

  • 9. Help maintain the integrity of the scholarly record.

It is not the role of journals to police research integrity or determine whether misconduct has occurred, but editors do have a duty to look into all allegations or suspicions of misconduct. If they find grounds for these, they should refer cases to the individuals’ institutions for investigation. The Committee on Publication Ethics (COPE) provides guidance and resources for handling cases of suspected misconduct, including a set of flowcharts that cover many of the common situations editors come across.
Editors play a critical role in the peer-review process and in the level of community satisfaction with that process. When they fall short of what is expected of a good editor, dissatisfaction results and complaints start to come in. Dissatisfaction may also be voiced in blog posts and on social media, along with specific details, perhaps even the reviewers’ reports and editorial correspondence. The scale and extent of the criticism can grow quickly as people find they are not alone in their criticisms and negative experiences. One of the most common criticisms is that some editors are not making the critical judgements that are needed on reviewers’ reports, leading to authors being asked to carry out unrealistic numbers of additional experiments as a condition of acceptance. This has been referred to as the ‘tyranny of reviewer experiments’ [13]. The following comment from a senior and well-respected researcher [14] summarizes well how some researchers feel:
‘Unfortunately, all too often editors relinquish their responsibilities and treat the peer review process as a vote, but this is a distortion of the real function of peer review, which should be to offer advice to the editor and the author... I do think the real problem is editors... Increasingly, one sees editors who don’t use any judgement at all, but just keep going back to reviewers until there is agreement.’
Being a good editor means doing more than just moving manuscripts automatically through peer review, and more than just ‘counting votes’. It also means not passing responsibilities on to reviewers that are the editor’s. Good editors and their editorial boards and staff screen submissions to make sure they are actually within the scope of the journal and that the standard of language is of sufficient quality for the work to be understood. Reviewers are right to complain and get frustrated when pre-review screening seems inadequate, and they feel they are doing the editor’s job. Editors have to act as editors, making critical judgements based on the reviews and recommendations of reviewers chosen to help them make decisions on manuscripts (reviewers advise, editors make the decisions), and always able to put forward the reasons behind their decisions.
Editors are, in the main, active researchers, applying for grants, submitting their own work for publication, and competing for jobs with others in their communities. They may also have ties with industry and other commercial bodies. It is therefore inevitable that they will sometimes find themselves in situations that may conflict with the responsibilities of their editor role. It is essential these are recognized, disclosed, and handled appropriately. Editors should not be involved in the handling and decision making of any manuscripts where they have, or may be perceived to have, potentially conflicting interests. All manuscripts that editors submit to their own journals should be handled by another member of the editorial board, and all details of their handling and review should be kept confidential from them. The COPE code of conduct for journal editors [15] provides guidance on the minimum standards to which all COPE members are expected to adhere, and these are a useful summary of good practice for all editors.
One of the reasons there are criticisms of peer review is that standards are very variable. The processes at some journals leave a lot to be desired, others have problems achieving consistency in decision making, and some have questionable practices. There are not only good editors and bad editors, but also inexperienced ones, and those who may have been in position for a considerable time but who still don’t know what good and ethical editorial practice is. Surprisingly, especially considering the importance of publication records for researchers’ careers, many editors don’t receive any training before taking up their roles. They are thrust into them without being equipped for the responsibilities.
Peer review relies on trust and operates under the assumption that everyone is behaving honestly. Problems arise when questionable or unethical behaviour occurs and there is a breakdown in that trust. All the parties involved in peer review - authors, reviewers and editors - are open to misbehaviour along the whole spectrum, from questionable actions and bias through to what can be classified as misconduct. New practices come along that can surprise even the most editorially experienced individuals, for example the cases of ‘fake’ reviewers and ‘fake’ reviews that surfaced in 2012 [16] (and see the ‘faked emails’ category on the Retraction Watch blog). The authors provided journals with suggested reviewers for their manuscripts who either didn’t exist (they were false identities, accompanied by email addresses for the authors’ own accounts or those of colleagues) or were real people whose suggested email addresses led to accounts the authors had created, and which had nothing to do with those people. The authors then returned reviews of their own manuscripts via these accounts. These cases were eventually found out and the papers retracted, accompanied by notes that the peer-review process for the articles had been found to have been compromised and inappropriately influenced by the corresponding authors, and that the findings and conclusions could not therefore be relied upon. What is of concern is that it became apparent over 2012 that a large number of cases were involved (at different journals, in different disciplines, and from different publishers), with 28 articles having to be retracted from one author alone [16].
How could this happen, and to this extent? One has to question whether the basic checks were done to confirm identity, contact details and reviewer suitability before reviewers were sent manuscripts to review. There were also suggestions that some journals had used only author-suggested reviewers, which shouldn’t generally happen. When cases like this are exposed, questions inevitably arise about the value and rigour of peer review, and confidence in it is dented. Partly as a response to this sort of behaviour, COPE has produced the Ethical guidelines for peer reviewers [17,18] to help set out ethical standards and guidelines for reviewers. Besides providing guidance for reviewers, journals and editors, it is hoped they will be used as an educational resource in the training of researchers, who often come to the reviewer role without guidance on peer review.
Despite criticisms about its failings, many feel that peer review, the opinion of experts, will always be important in assessing the outputs of research. The UK House of Commons Science and Technology Committee inquiry into peer review (paragraph 277 [19]) concluded that progress in science relies on being able to build on robust and accurate previous work, and that: ‘Peer review in scholarly publishing, in one form or another, is crucial to the reputation and reliability of scientific research’ (my italics). There are different ways to get expert opinion, and new models are exploring possibilities and bringing innovation to peer review [20] and to the dissemination and publication of research output (moving beyond just journal articles). Whatever the model, there is a need to ensure that standards are high, that editors (or those responsible for making decisions and managing the process) are trained and supported, and that researchers are educated in research integrity and publication ethics. Peer review is also facing new challenges as large amounts of data are generated that need to be reviewed, or at least viewed, alongside research reports. New standards and workflows are needed. Where to put data during review for confidential access is an issue, but there are organizations such as Dryad where data can be made securely and confidentially available for peer review.

No potential conflict of interest relevant to this article was reported.
