It’s Time to Improve the Scientific Paper Review Process — But How?

Synced
4 min read · Apr 10, 2020

Head image courtesy Getty Images

The level-headed evaluation of submitted research by other experts in the field is what grants scientific journals and academic conferences their respected positions. Peer review determines which papers get published, and that in turn can determine which academic theories are promoted, which projects are funded, and which awards are won.

In recent years, however, peer review processes have come under fire, especially from the machine learning community, with complaints of long delays, inconsistent standards, and unqualified reviewers.

A new paper proposes replacing peer review with a novel State-Of-the-Art Review (SOAR) system, “a neoteric reviewing pipeline that serves as a ‘plug-and-play’ replacement for peer review.”

SOAR improves scaling, consistency and efficiency, and can be easily implemented as a plugin to score papers and offer a direct read/don't-read recommendation. The team explains that SOAR evaluates a paper's efficacy and novelty by calculating the total occurrences in the manuscript of the terms "state-of-the-art" and "novel."
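The scoring rule is simple enough to sketch in a few lines. A minimal illustration might look like the following, where the function names and the read/don't-read threshold are assumptions for demonstration rather than details from the paper:

```python
import re

def soar_score(manuscript: str) -> int:
    """Count total occurrences of 'state-of-the-art' and 'novel'."""
    text = manuscript.lower()
    sota = len(re.findall(r"state[-\s]of[-\s]the[-\s]art", text))
    novel = len(re.findall(r"\bnovel\b", text))
    return sota + novel

def recommend(manuscript: str, threshold: int = 5) -> str:
    """Turn the score into a direct read/don't-read recommendation.

    The threshold of 5 is an invented assumption, not from the paper.
    """
    return "read" if soar_score(manuscript) >= threshold else "don't read"

# A paper with one "novel" and one "state-of-the-art" scores 2: don't read.
print(recommend("We propose a novel, state-of-the-art approach."))
```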

If only a solution were that simple. But yes, SOAR was an April Fools' prank.

The paper was a product of SIGBOVIK 2020, an annual satirical conference of the "Association for Computational Heresy" at Carnegie Mellon University that presents humorous fake computer science research. Previous papers have included Denotational Semantics of Pidgin and Creole, Artificial Stupidity, Elbow Macaroni, Rasterized Love Triangles, and Operational Semantics of Chevy Tahoes.

Seriously though, since 1998 the volume of AI papers in peer-reviewed journals has grown by more than 300 percent, according to the AI Index 2019 Report. Meanwhile major AI conferences like NeurIPS, AAAI and CVPR are setting new paper submission records every year.

This has inevitably led to a shortage of qualified peer reviewers in the machine learning community. In a previous Synced story, CVPR 2019 and ICCV 2019 Area Chair Jia-Bin Huang introduced research that used deep learning to predict whether a paper should be accepted based solely on its visual appearance. He told Synced the idea of training a classifier to recognize good/bad papers has been around since 2010.
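Huang's code is not reproduced in the story, but the general recipe he describes, fine-tuning an off-the-shelf image classifier on rendered paper pages labeled accept or reject, can be roughly sketched as follows. The dataset path, directory layout, model choice and hyperparameters here are placeholder assumptions, not details from his work:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Rendered paper pages sorted into accept/ and reject/ subfolders.
# The path and layout are placeholders, not from Huang's research.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("paper_pages/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Fine-tune a pretrained ResNet-18 as a binary accept/reject classifier.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```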

Huang acknowledges that although his model achieves decent classification performance, it is unlikely ever to be used at an actual conference. Such analysis might, however, help junior authors decide how to prepare their paper submissions.

Turing Award winner Yoshua Bengio, meanwhile, believes the fundamental problem with today's peer review process lies in a "publish or perish" paradigm that can sacrifice paper depth and quality in favour of speedy publication.

Bengio blogged on the topic earlier this year, proposing a rethink of the overall publication process in the field of machine learning, “with reviewing being a crucial element” to safeguard research culture amid the field’s exponential growth in size.

Machine learning has almost completely switched to a conference publication model, Bengio wrote, and "we go from one deadline to the next every two months." In the lead-up to conference submission deadlines, many papers are rushed and things are not checked properly. The race to get more papers out, especially as first or co-first author, can also be crushing and counterproductive. Bengio strongly urges the community to take a step back, think more deeply, and verify things carefully.

Bengio says he has been considering a different publication model for ML in which papers are first submitted to a fast-turnaround journal such as the Journal of Machine Learning Research, and conference program committees then select the papers they like from the list of accepted, reviewed and scored papers.

Conferences have played a central role in ML, as they can speed up the research cycle, enable interactions between researchers, and generate a fast turnaround of ideas. And peer-reviewed journals have for decades been the backbone of the broader scientific research community. But with the growing popularity of preprint servers like arXiv and upcoming ML conferences going digital due to the COVID-19 pandemic, this may be the time to rethink, redesign and reboot the ML paper review and publication process.

Journalist: Yuan Yuan & Editor: Michael Sarazen

