How to fix the academic peer review system
And the crux of the solution is that a hundred thousand journals need to die
Peer review has long been a sacred cow in the academic publication process. Before new research is published, academic journals send the work to other experts in the field (peers), who may advise against publication, or (usually) demand revisions before recommending publication. The idea of peer review is to hold academics' feet to the proverbial fire, ensuring that we publish only work of reasonable quality.
But in fact, what the global academy has been clinging to for more than a century is largely anonymous, off-the-record, pre-publication peer review. Every academic knows the frustration of trying to satisfy or rebut reviewers who contradict each other or who demand revisions that miss the point and dilute the results.
The original intention and lifeblood of peer review, opening the doors of scholarly journals beyond the old boys' clubs, has been squeezed out by the forces of over-commitment, financial gain, careerism and raw jealousy.
Some might argue that in an era of fake news and dubious "scientific findings", a vibrant pre-publication editorial process is crucial. Won't it provide some assurance that what we read is likely to be scientifically and ethically sound, and that it draws valid conclusions?
Indeed. But the pretence that peer review ensures this has led to a once unimaginable proliferation of "scholarly" journals. Creating such a journal requires little more than an online content management system. "Peer-reviewed" journals are no longer meaningful filters.
Most academics don't seriously "read" journals to keep abreast of developments in their field. At most, one might read a few wide-ranging journals, hoping to stumble across interesting ideas outside one's own narrow speciality. Instead, search engines are used to find relevant articles; academics then use their own sifting processes to decide which ones to take seriously.
There is an expanding field of bibliometrics, which uses statistical indicators, such as how often and where a work is cited, to give clues as to whose work is making an impact. In any broad field (physics, mathematics, biomedical research, social science) or niche area (semiconductor materials, HIV surveillance, etc.) there is probably room for a handful of serious journals, such as Nature and Science, that people would actually read.
Beyond that, we may as well skip ahead to the inevitable outcome: a world in which the default route for academic work is self-publication on research repositories. Physicists and mathematicians are already routinely posting their work on arXiv, one of the more prominent scholarly repositories, before submitting it to peer-reviewed journals. Around these repositories vibrant communities emerge, characterised by serious engagement with new ideas and results, the pointing out of flaws and the identification of valuable work.
These repositories conduct no prior peer review. Editorial intervention is limited to checking that the work appears to report research findings and that it has been correctly categorised by field and subfield. After being posted, the papers are subjected to the usual critical reading and commentary by other scientists in the field. We can think of this as on-the-record, open and robust post-publication peer review, and it is much better than commentary from unaccountable and anonymous reviewers, free to ride their hobby horses or shoot down the work of their competitors.
Nothing would stop the few journals that survive from embracing the self-publication model, and haggling with the authors whose work they wish to include. They can do so on the strength of post-publication peer review, or on any ad-hoc basis they wish, to create curated collections of important work. We should quietly allow the other hundred thousand or so journals to die.
In the present system, having committed time and energy to journal-specific formatting (including arbitrary length constraints, even though online space costs nothing) and to the long wait for initial reviews, we feel obliged to tough it out and see the process through by investing even more time and energy. We "gratefully" take on board the reviewers' comments to the limit of our pain threshold, and politely rebut only the truly intolerable, rather than cut our losses and walk away.
In many fields with high stakes, big budgets and serious impact, there are very few articles written by one or two lone-gun authors. If groups of authors aren't capable of critical self-appraisal before putting out their work, we probably need not bother to read it. Small groups and single authors can surely solicit feedback from colleagues, or revise their work following comments received after posting the article online.
The primary archives all allow formal revisions without undue complications. Once a work is "dropped" into the public domain, you have to be savvy about how you circulate notice of its existence, but eventually the cream will rise to the top.
Funders and academic employers, groaning under the weight of the modern knowledge edifice, place great emphasis on such metrics as how many "peer reviewed" papers you have published. But increasingly they also consider more nuanced evaluations of where you have published, how many citations you receive and from whom. Well-designed self-publication forums can (and do) provide metrics that are just as useful for judging quality and impact.
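To make this concrete: the best-known such metric is probably the h-index. A researcher has index h if h of their papers have each been cited at least h times. A minimal sketch in Python (purely illustrative; the citation counts below are invented) shows how little machinery it takes to compute:

    def h_index(citation_counts):
        # The h-index is the largest h such that at least h papers
        # have each been cited at least h times.
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, citations in enumerate(counts, start=1):
            if citations >= rank:
                h = rank
            else:
                break
        return h

    # Invented citation counts for one author's papers.
    print(h_index([52, 18, 9, 6, 4, 4, 1, 0]))  # prints 4

Nothing here depends on a journal: indicators like this can be derived from any public record of who cites whom, which repositories and citation databases already expose.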
Deep down we all know where this is headed. Once an article comes up on Google Scholar, PubMed, or the arXiv alerts, we consider who wrote it, what it is about, and so on, in order to decide whether to read any further. Where it was published is still a consideration, but only one of many.
And so we come back to the conclusion that only a tiny number of serious journals deserve to survive the coming collapse.
Pre-publication peer review, then, is a relic from another era, when it served to expand the pool of authors. Today it is a needless detour on the road to producing meaningful research and creatively disseminating our findings, free from the shackles of journal styles and our competitors' whims.
Thanks to technological advancement, we all have access to the tools we need to create beautiful expositions, demonstrations and presentations. And we have all the tools to make post-publication peer review, the heart and soul of scientific conversation, richer and more efficient.
Views expressed are not necessarily GroundUp's.
Letters
Dear Editor
Absolutely agree: the proliferation of journals of no repute and probably no integrity means that reference to pre-publication review increasingly means little. The fact that many of these journals (along with some more established journals) require authors to pay for publication raises further questions about what peer review means, since publication is ultimately dependent not on the quality of the article, but on whether the author can afford to pay.
My own experience has been instructive. Since having some articles published in international journals over a period of years, I have regularly received (spam) requests, often from newly established journals with impressive titles but no track record. Many of these requests relate to fields in which I have no expertise and no publication record: I'm a psychologist and have been asked, for example, by journals dealing with surgery or health administration to contribute articles, become a peer reviewer or join an editorial board. Enough said!
Dear Editor
The incentive structure now militates against cooperation with peers. This is most unfortunate for people working in urgent areas like the sustainable development of cities. We don't want to share contacts or information about valuable role models that ought to be disseminated as rapidly as possible. Instead, the information is held back because it makes our paper more distinctive and perhaps gives us an edge in competing for scarce research grants.
Dear Editor
The authors clearly don't like peer review. Why? Because peers can be "jealous old boys" hiding behind anonymity; and the process is "frustrating", "contradictory", "misses-the-point" and "result-diluting" and no longer ensures that published work is "of reasonable quality".
They conclude (without evidence) that:
1. "'Peer-reviewed' journals are no longer meaningful filters." and
2. "Most academics don't seriously 'read' journals to keep abreast of developments in their field."
Peers can be nasty. But most researchers welcome comment-debate-reviews by/with peers when/wherever possible. This is because they help to sharpen thinking. When reviewers misbehave, editors ignore/replace them. If reviewers/editors don't do their jobs, journals lose reputation; become repositories for incompetent researchers; and don't get read.
For my papers, I choose the "toughest" journal. Publishing in Nature/Science is the "golden ring", with top discipline-related journals being "silver" and local ones "bronze". That's how one rises in the research hierarchy.
Also, I read 20+ discipline-based journals. Without this, researchers become mired in the mundane past and interact only with "frustration-contradiction-free" "old boys" with whom they concur.
What's the authors' alternative? "Self-publish" with academically complementary (complimentary?) co-authors "capable of critical self-appraisal" and deposit manuscripts in "research repositories", allowing "serious engagement" (with peers?) to discover flaws etc.
In this internet era, it's wiser for researchers to circulate their findings to respected "real" peers to sort such things out before trying to publish. That's what a paper's Acknowledgements section is for. The authors' alternative simply side-steps editors and valid challenges/contradiction from reviewers whom they "fear".
Also, it creates the need to search a massive proliferation of "repositories" potentially packed with ill-conceived manuscripts full of "fake news and dubious scientific findings" and needing more work.
Dear Editor
I have great sympathy for the views expressed by Welte and Grebe (full disclosure: they are both friends and colleagues). What worries me is that rewards and jobs now depend on the number of papers one publishes in high-ranking journals, creating an artificial 'currency' of scientific value.
The hope was, and to some degree still is, that journals such as Science, Nature, PNAS, The New England Journal of Medicine and The Lancet would be rigorously reviewed and could be relied on to reflect sound science. And yet The Lancet took 12 years to retract the Wakefield paper on vaccines and autism, during which time damage was done. So I am now of an age where I can afford to publish mainly on arXiv and bioRxiv, but my younger colleagues will have to go on playing the game and getting their papers into the top journals.
© 2017 GroundUp.
This article is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
You may republish this article, so long as you credit the authors and GroundUp, and do not change the text. Please include a link back to the original article.