Before the recent news about fraudulent scientific publications, a group of researchers carried out an “experiment” consisting of publishing “fake” studies in prestigious scientific journals. In this way, they sought to challenge the classic system of “peer review” (1).
The results of the “experiment” did not invalidate the “peer review” procedure, which continues to offer guarantees, but they revealed inherent weaknesses in the system that have often been overlooked.
The first problem is work overload: according to some estimates, about two and a half million papers are reviewed and published around the world every year, and a far greater number are reviewed but subsequently rejected for publication.
The second problem concerns the suitability of the reviewers. Although they are considered experts in their scientific fields, they are often not specialists in the particular subject of the works they review, and so are unqualified to carry out this task. In addition, this work is unpaid, so diligence and excellence are often secondary considerations.
The third drawback is the inconsistency of the method: an article (2), originally published in 1982 and made available online in 2010, describes a study in which two researchers selected 12 articles that had been accepted for publication in “high impact” journals. They falsified the names and academic affiliations of the authors and resubmitted the papers to the same journals that had previously accepted the originals. These anomalies (in this case, fraud) were detected in only 3 of the 12 papers (25%). Of the 9 articles that went on to be evaluated (via peer review), 8 were rejected (89%), even though they had been accepted in a previous review.
The fourth problem with “peer review” is its inhibiting effect on innovation. The reviewer’s approval is usually necessary for a text to be accepted for publication, a procedure that is especially damaging when the content of the work contradicts accepted theories. The scientific community is, on many occasions, excessively conservative and reluctant to accept ideas that break with established beliefs. In 2015, a study published in P.N.A.S. (3) examined more than 1,000 manuscripts that had been submitted to three prestigious medical journals. Of the 808 articles eventually published, many had previously been rejected by the editors of those same journals.
The fifth problem is perhaps one of the most important: reviewers’ judgments show a definite bias depending on the names of the authors, their academic origin, or their gender. Regarding gender, a study published in eLife in 2017 (4), illustrating the so-called “Matilda effect”, created a database of more than 9,000 editors, around 43,000 reviewers, and some 126,000 authors of approximately 41,000 articles published in 142 journals from various scientific fields. It revealed that only 26% of the editors, 28% of the reviewers, and 37% of the authors were women.
Focusing on the specific field of Earth and space science, only a quarter of the reviewers and a fifth of the authors were women, although the acceptance rate of their work was higher than that of their male colleagues.
According to the journal Nature (5), in 2011 women accounted for only 14% of the 5,500 reviewers who worked for the journal, 18% of the 34 researchers profiled, and 19% of the main authors of articles published in the Comment and World View sections. Perhaps for sociological reasons, many women are reluctant to act as reviewers. However, various analyses have documented (6) that many male editors tend to favour male reviewers. During 2018, Nature increased the participation of women in the Comment and World View sections to 34%, while the proportion of female reviewers rose only to 16%.
However, there are also arguments in favour of the “peer review” system. In 1994, a paper comparing the quality of manuscripts before and after peer review was published in the Annals of Internal Medicine (7). Using an assessment tool, the authors showed that, of the 34 works evaluated, peer review improved the quality of all except one. The improvements included fuller discussion of study limitations, fewer unjustified generalizations, the use of confidence intervals, and greater care in the writing of conclusions.
As mentioned before, reviewers should perhaps be compensated financially, or at least be able to credit this work on their CVs. However, some consider that such compensation could distort the objectivity of the evaluation. The authors of this article do not hold a firm position on these issues.
Seeking to raise ethical standards, some journals have begun to carry out “blind” reviews, in the manner of a literary contest. However, in many cases this is not possible because the texts contain too many clues to the identity of the authors.
Another way to detect problems in the quality of research is to allow online publication before the printed edition. This system is rarely used in the biomedical field, unlike other sciences such as physics. Moreover, many scientific journals could face serious financial difficulties in maintaining a print edition.
Improving the processes before and after publication requires a paradigm shift. It is commonly assumed that an article accepted after “peer review” is irrefutably true. It may be so in the light of current knowledge, but it may not remain so in the future, and we must always keep that possibility in mind. Each new discovery can consolidate prior knowledge or call it into question.
“Peer review”, despite the limitations mentioned at the beginning of this article, continues to be the best way to evaluate the quality of scientific publications. However, it should be regarded as a stage in scientific progress, not as an unquestionable seal of truth and quality.
1. Uncovering new peer review problems. In: Healthnewsreview.org. Consulted: November 2018.
2. Peters D.P., Ceci S.J. Peer-review practices of psychological journals: the fate of published articles, submitted again. Behavioral and Brain Sciences 1982; 5(2): 187-195. Published online 1 February 2010.
3. Siler K., et al. Measuring the effectiveness of scientific gatekeeping. Proceedings of the National Academy of Sciences (P.N.A.S.) 2015; 112(2): 360-365.
4. Knobloch-Westerwick S., Glynn C.J., Huge M. The Matilda effect in science communication: an experiment on gender bias in publication quality perceptions and collaboration interest. Science Communication 2013; 35: 603-625.
5. Nature’s sexism (Editorial). Nature 2012; 491: 495 (21 November 2012).
6. Gilbert J.R., et al. Is there gender bias in JAMA’s peer-review process? JAMA 1994; 272(2): 139-142.
7. Goodman S.N., et al. Manuscript quality before and after peer review and editing at Annals of Internal Medicine. Ann Intern Med 1994; 121(1): 11-21.
Zaragoza (Spain), November 2018
López-Tricas, JM MD