Peer Review, Part 5: The Importance of Gatekeepers

The purpose of peer review is to separate the wheat from the chaff, the good from the bad, the brilliant from the clinically insane – you get the picture. But why? Why filter and not just let anybody publish whatever they want?


Why Gatekeepers? And Why Gates?

In the old days, there was the resource argument. A journal that’s published six times a year with so many pages per issue can only print some limited number of papers. The same is true for a conference: there are only so many sessions in which to present, and printing the proceedings also puts limits on the number of papers that can be included.

In the new era of electronic proceedings, online libraries, etc., none of these is true anymore. There are no limits to paper length because it doesn’t matter if your paper is 10 pages long or 100. It also doesn’t matter if a journal issue contains 10 papers or 100. So why insist on limiting the number of papers?

The answer is simple: time. It’s the only resource we can’t make more of, and it is the one that limits what we can possibly consume. Being able to find work that has been vetted, rather than having to vet it all yourself, is hugely valuable. You can now trust that the work has at least a minimum quality level, and is not just a video of somebody’s sleeping cat.

If you want to see what kinds of stuff you end up getting when there is no gatekeeping, just take a stroll around YouTube. Most of the content there is simply awful, pointless dreck. New stuff is also added at a rate that ensures that you could never possibly watch it all, even if you wanted to. And while there is no formalized system for it, most people (like me) don’t waste their time watching random videos, because they are mostly bad. Instead, we wait for the masses of users to find the ones that are worthwhile, and post those on Twitter and other social media. But the end result is the same: somebody has to wade through the flood of crap to find the few gold nuggets.

That is also what can make reviewing so frustrating. Your job as a reviewer is to weed out the bad 75–80% of papers, so the good 20–25% will be accepted. That means that for every one good paper, you will see three to four bad ones. But the result is that the papers in the journal or conference are of a much higher quality.

Alternatives

Today, anybody can publish whatever they want, at no cost. It’s called the web. It’s accessible and easy. So why bother with the gatekeepers and their walled gardens? The answer is, again, quality and time. If you just randomly search for things and don’t pick authoritative sources, you’re likely to end up with things that are not very good.

That is not to say that there isn't great work out there that never goes through peer review. Bret Victor has never published an academic paper, and his work is amazing. But he is an exception. Work does not have to be vetted to be good, but if you're looking for good work, you end up going to the places that rigorously select and filter.

The science world is looking for some sort of middle ground, too. There are places like arXiv.org, which provides a reasonably structured (though entirely non-reviewed) place to make papers available and discuss them. PLOS ONE, in which the Quilt Plots paper was published, has less stringent reviewing criteria and tends to err on the side of accepting more papers rather than fewer. And there are certainly more.

The issue is not that there aren’t alternatives, it’s that they mostly only work in addition to the established journals, not really as actual alternatives. A paper that is only published on arXiv will not get a lot of attention (unless somebody trustworthy spots it and can make a strong case for it).

Does the Peer Review System Work?

Yes, it does. You can always complain about this paper or that paper getting in or not getting in. But overall, it certainly works. The visualization conferences are competitive enough to produce new and interesting work every year, yet not so insanely competitive that acceptance comes down to luck. There is a lot of variety, and people who have been publishing for a long time don’t automatically get their work accepted.

Reviewers don’t always spot problems, as Stephen Few helpfully points out. But there are few published papers with egregious errors in them. Sure, there should be more rigor in reviewing, and there should be ways of retracting papers when things go wrong. We haven’t had a big plagiarism or other ethics scandal in visualization, but I doubt that we have the mechanisms in place to get such papers out of the respective digital libraries if and when they happen.

If anything, reviewers in visualization (and computer science in general) are some of the harshest and most finicky reviewers around. Maria Zemankova once said, “We are computer scientists, we’re trained to find bugs!” I also remember Robert Moorehead speaking at the opening or closing session of Vis/VisWeek a number of years ago and asking people not to be so “cut-throat” in their reviews. It sometimes seems like a miracle that anything gets published at all.

Wrapping Up

What I hope this little series has shown is that peer review is a complex process that has many things going for it. While the paper that got me to write over 4,000 words on the topic was clearly not one of the brighter spots in the history of peer review, it’s a good system overall. There are many ways a paper can be bad, but there are also good reasons why those get submitted. I’ve glossed over many issues and details in describing the process, and there are easily ten times as many words to be written about all the issues in academic publishing, especially outside of the mostly well-functioning technical sciences.

Ultimately, the quality of the papers that get accepted is the measure of whether the process works or not. And while I’m the first one to jump on a paper I don’t like, the visualization field as a whole is producing good work and moving forward at a steady pace. But it only does that thanks to a functioning peer review system.


This is the last part of a five-part series on peer review in visualization. One part was posted each day throughout this week. One or two further parts may follow in the coming weeks.

Teaser image by Paolo Del Signore, used under Creative Commons.

14 responses to “Peer Review, Part 5: The Importance of Gatekeepers”

  1. Björn Brembs

    You write: “Your job as a reviewer is to weed out the bad 75–80% of papers, so the good 20–25% will be accepted.”

    Do you have any evidence to back these numbers up, or is this just an N=1 from the manuscripts you authored yourself?

    1. Robert Kosara

      Visualization venues typically have around a 25% acceptance rate, which is considered a good number. That means that the remaining 75% are not up to snuff. I appreciate the personal attack, though.

      1. Björn Brembs

        Sorry for the personal attack, but I just couldn’t resist the temptation to reply with the same arrogance you displayed in the post. Will not happen again.

        It seems either the reviewers at “visualization venues” are as arrogant as you, or visualization attracts an inordinate amount of incompetence. In my many years as an academic editor handling peer review of neurobiology papers for a variety of journals, I see non-fixable papers in the range of below 10%; everything else just needs some rewriting or an additional control here or there. That being said, ‘accept without revision’ is probably below 5%.

        Are people in your field really so bad compared to ours?

        1. Lars Juhl Jensen

          In that case, I think people in neurobiology do an unusually good job before they submit. Based on what I see as editor and peer reviewer, the number of non-fixable manuscripts in bioinformatics is more in the range of 50%. By that I mean that about half of the manuscripts have such bad flaws that after fixing them, many of the analyses presented will have to be rerun, the conclusions will likely change, and large parts of the manuscript will consequently have to be rewritten.

          1. brembs

            I didn’t imply that the control experiments couldn’t possibly have changed the statements of the papers, or that the rephrasing entailed taking back some statements not backed up by data – but in the end, the huge majority of the papers I handle are published. The contribution each makes ranges from minor to breakthrough, but that’s how it is.

            The fraction of submissions I see that is so fundamentally flawed that I’d say “this is really bad” is negligible. In fact, what I encounter much more often than bad submissions is papers in “top journals” that either hype their findings by omitting previous work (out of malice or ignorance, it doesn’t matter), or only reach these findings because critical control experiments aren’t carried out (and the professional editor likely doesn’t have the competence to realize it).

            In other words, what seems to happen more often is professional editor failure than submission of ‘bad’ manuscripts.

            Taking the evidence on journal rank:

            http://www.frontiersin.org/Human_Neuroscience/10.3389/fnhum.2013.00291/full

            and assuming that there is a positive correlation between journal rank and the chance of professional editors handling the peer review, I’d say that if anything, the problem today is not so much scientists submitting 80% ‘bad’ science that needs to be separated from the wheat and discarded, but professional editors selecting according to criteria which are not scientific, but economic (for their journal).

        2. Robert Kosara

          The 25% number was per conference; obviously many of those papers will get fixed and accepted eventually. I can’t put a number on how many papers end up published overall, but I don’t think it’s 90% – probably more like 75% or so, but I’m just guessing here.

          I don’t see how my posting is arrogant. The point I was trying to make is that selection and filtering is an important process. It can be hard to appreciate just how much junk there is, because we almost always see some filtered version of what is out there. That’s true for academic papers just as for YouTube, etc.

          1. brembs

            “The 25% number was per conference, obviously many of those papers will get fixed and accepted eventually.”

            Well, so peer review isn’t there to discard this work as chaff, but to improve it and correct it. Big, major difference!

            “I don’t see how my posting is arrogant.”

            If you summarily dismiss 80% of the work of your colleagues as being so bad it’s not worth publishing (you just doubled down and made it even worse by calling it “junk”) and at the same time imply that of course your work is all within the 20% good work (your indignation at my comment above supports that assumption), this is not only arrogant, it is also condescending. In essence, you just repeated that out of ten randomly chosen colleagues of yours, you think 8 ought to be fired, according to your own, much higher standards. If that isn’t arrogant, what is?

  2. randomwalk

    Does peer-review work? Now this is a question that deserves more than a short paragraph… and the answer is more complicated than a simple yes.

    I have read MANY absolutely disposable papers in the visualization field that were published because of – presumably – the reputation of one of their authors. On the other hand, it is very difficult to get published as a newbie in the field because of barriers to entry, more precisely, not being part of the “visualization community” (as a matter of fact, this holds for the entire publication business). There of course is an explanation for this: peer review is volunteer work and, in the case of conferences, requires responding very quickly… which prevents in-depth reviews and discussions of hypotheses, models, algorithms, etc. This has two consequences: well-known authors are trusted and can therefore publish whatever irrelevant thing crosses their mind… while new topics / authors will have to prove the value of their work and – if the reviewing deadlines are too tight for this – will be systematically rejected. For very novel topics, I think this issue goes far beyond conferences: the history of science is plagued with scientific theories rejected (and even mocked) by the scientific community… ideas that turned out to be correct and changed science as a whole. Two examples are the laser and Watson & Crick’s DNA structure, but there is a longer list here: http://amasci.com/weird/vindac.html. Impressive, isn’t it?

    Another thing: who are the reviewers? It is volunteer work, right? How do you control the quality of the review and the qualifications of the reviewer?

    I have always been and will remain skeptical about peer review… and claiming it works is indeed a bit arrogant… Or maybe it does because visualization is *not* science? I don’t know really… ;-)

  3. Zen Faulkes (@DoctorZen)

    So the more “bad” papers, the more we need gatekeepers. Except we don’t know what percent of manuscripts are “good” versus “bad” without some objective standard for “publishable science”, and that doesn’t exist. If there were, we wouldn’t need to peer-review at all. It could be done by a robot, or a simple checklist.

  4. Bilal Alsallakh

    Björn, it looks like you do not know Robert well, and are rather unaware of the visualization community.

    Robert is very casual at conferences; even when he was a professor, anybody could hang out and chat with him, or he would even come up to you and just say hi!

    Just ask his former students about their very friendly experience with him.

    At the same time, Robert is frank. He will express his real opinion about some work without any sort of hypocrisy (many people would fear creating enemies by doing that).

    Also, he blogs and tweets about his own mistakes and rejections, so I do not see how you infer that he claims all his work is in the 20% category. Also, why should someone be fired for a rejected paper?

    IMO, making open, objective, and constructive criticism of published work is very healthy; we need a lot more of that. That is how scientific ideas evolve for the better in the end.

    1. brembs

      No, I don’t know Robert at all; this is the first of his writings that I have read, and I’m a neurobiologist. I came here via Twitter, where “the importance of gatekeepers” struck me as elitist and I wanted to see if my hunch was correct. So I’m completely unaware of the visualization community in general and of Robert in particular.

      Of course, in general, one can be very arrogant and still be nice and friendly with people whom one does not regard as producing junk or chaff worth keeping out of journals or conferences. Indeed, if you meet such a person at peer-reviewed conferences, or in their lab, by definition, you’re part of the wheat, the 20% that the gatekeepers let in. So of course the cream of the crop are nice to each other!

      This, of course, is all general conjecture from this one single blog post! I should perhaps clarify (I think I may not have been perfectly consistent in this regard): it is this blog post that conveys arrogance and condescension. This does not imply that Robert is arrogant and condescending all the time, let alone that he necessarily must be a horrible person overall. I just think it would be worthwhile if he re-thought his opinion about the work of his colleagues, if he really thinks it is 80% junk. If it really is 80% junk, then to me visualization must be a horrible community, compared to the communities where I work. My neurobiology community is so diverse that for much of the work there I wouldn’t even feel competent to judge subjective quality beyond “I find this boring, but that’s just me”.

      “I do not see how you infer he claims all his work is in the 20% category”

      For one, it’s unusual to claim that 80% of one’s own work is junk, so I asked him in my comment. That he felt that this was a personal attack and didn’t answer my question in this regard suggested to me that he does indeed not find that 80% of his own work is junk, but that 80% of his colleagues’ work is junk. For those two reasons only.

      “Also, why should someone be fired for a rejected paper?”

      Not for a single one of course! But if 80% of all papers are junk and there is some consistency (I assumed 100% consistency, probably not very realistic), this means that most of those junk papers come from junk labs. If 80% of visualization labs are junk, shouldn’t these labs be identified (perhaps by peer review?) and the people working there be fired? If you assume less consistency, the percentages change, obviously. The alternative is that everybody produces 80% junk, but I’ve already given you two reasons above why I don’t think this is what Robert intended to say here (and no, I did not read the other posts or the links in this post – at least not yet).

  5. Joel Cadwell

    “If you want to see what kinds of stuff you end up getting when there is no gatekeeping, just take a stroll around YouTube.” Or, why not use a search engine that makes individual recommendations based on personal history? Go ahead and make the movie. I don’t care how many bad movies are made as long as Netflix recommends only the ones I like. Let the user be the gatekeeper. Otherwise, humans will be humans, and gatekeepers will have their own interests and their personal reasons for letting some in and keeping others out. I prefer to replace the human gatekeeper with a statistical learning algorithm using my personal preferences and providing individualized recommendations.

    1. Antonio De Marinis

      “Let the user be the gatekeeper.” Well said!
      In the current digital age, Google’s recommendation systems (based also on the notion of citations = inbound links) are much better than closed-door, journal-owned review by a few peers. Therefore, the fact that we have a lot of content of varying depth and quality does not matter any more. Moreover, depth and quality are also very subjective and contextual.

      Collective intelligence emerging from techniques like “social bookmarking” or “crowdsourcing processes” may surface more significant and interesting content for my specific needs, not for the commercial needs and politics of journals.

      Journal papers are often also just sent as a “manuscript”, with very little way for the peer reviewers to verify and reproduce the findings presented. The data behind the research is often not publicly available, and the research is not verifiable and reproducible. There is also statistical evidence that “Most Published Research Findings Are False”; see the famous paper by John Ioannidis, an epidemiologist from Stanford University: http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0020124

      The Economist explains this really well in this animation:
      http://www.economist.com/blogs/graphicdetail/2013/10/daily-chart-2
      and in the source article: http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble

      So to my mind, as an active reader (http://worrydream.com/ExplorableExplanations/), the following questions come naturally: what is the point of peer review at all? To filter out the “less false” from the “more false” research? Shouldn’t this task be left, untouched and uncensored, to the research field itself? What if all research were published with all its data and were verifiable as soon as it was made? What if anybody could peer-review and comment on any paper publicly on the Internet?

      Furthermore, the classical peer review and traditional journal publishing process gives less space to “serendipitous exploration”. When you search on the Internet, or in any very large collection of unfiltered user-generated content, you can be surprised by content you would never have come across through a journal subscription.

      Having access to the full spectrum of human-produced knowledge, quickly and at any time, with no filter other than my search keywords, my curiosity, and a pinch of random salt, is the main ingredient for an ever more knowledgeable society.

  6. juliusbeezer

    This is a useful discussion, and I think Kosara is to be congratulated on posing what might be called the attentional argument for peer review so clearly, even if it does turn out to be ultimately false. My own view is that scholarly information-seeking habits have changed so radically over the past 20 years that it is hard to be confident that any element of the print-based system will necessarily remain valuable in the age of ready access to networked computers, but, you know, old habits die hard.
    We also know, as randomwalk nicely illustrates, that traditional peer review is not good at identifying truly radical scientific breakthroughs. We know that it is hardly effective at detecting scientific malfeasance. We know it consumes a tremendous amount of reviewer time, and most of the time the reports they produce are simply buried – a tremendous waste. We know that ‘glamour journals’ sustain their 90%+ rejection rates as much on a ‘sexiness’ criterion as on any criterion of scientific rectitude. We know that junior colleagues fear the career consequences of criticising their seniors. We know their seniors, on occasion, abuse peer review by nicking ideas from unpublished manuscripts.
    And as the function that Kosara claims for it here, of correctly apportioning scientific attention, is among the easiest to imagine replacing with modern tools such as overlay journals and microblogging platforms like Twitter, I suspect it is misguided of him to defend the status quo on these grounds.