Peer Review, Part 3: A Taxonomy of Bad Papers

Reviewing is great when you get a good paper where you can make some suggestions to make it even better, and everybody’s happy. Bad papers are much less fun, but they are also much more common. Here are some examples I’ve seen and that I keep seeing.


  • The completely insane. I once got a paper to review that was two pages long, with the second page not even used completely. There was a table on the second page, and the rest was completely unintelligible writing. Tamara Munzner likes to refer to such papers as “written in crayon” – a mental image I find particularly amusing. The good thing about these is that they don’t usually take much time to spot and reject.
  • The deeply flawed study. These are becoming more and more common as more people are doing studies (user studies, perceptual studies, etc.). While more studies are a good thing, they need to be done with care. I’ve seen quite a few papers lately where the study design is terrible, or where the study doesn’t actually prove the claim.
  • The marketing fluff piece. There are actually several variants of this. One is essentially a marketing piece about some piece of software that is disguised (often badly) as a systems or application paper. There’s nothing wrong with a good application paper (in fact, there should be more), but there needs to be content and a contribution. Also, the language needs to be a bit more academic and a bit less over the top.
  • The complete lack of details fluff piece. Another variation of this is the one where everything is written with such a complete lack of specific details that it’s impossible to know whether any of it is real. Like a wet bar of soap, the paper slips through your fingers and won’t let you get a clear grasp on what it’s trying to tell you. These can be maddening, because they make you feel like you’re too dumb to get them, when it’s really the paper’s fault. Authors sometimes don’t want to be too specific when they’re not sure, or when they’re afraid that they’ve made a bad decision – but trying to paper over these issues just makes them a bigger target for reviewers.
  • The bait-and-switch. Making promises in the title, abstract, and introduction that you then don’t keep: that’s a major no-no, and a guarantee of bad reviews. You’re setting expectations, and you’re getting people excited. If you then don’t deliver, you really only have yourself to blame for getting rejected. Things change as a paper gets written and evolves, but those changes need to be reflected throughout the paper.
  • The reinvention of the wheel. This happens more than you’d think. The Quilt Plots paper is just a particularly egregious example, but there are many more. Sometimes, people get carried away working on something without realizing that it has been done, or the project changes direction in a way that leads them down the same path somebody else has gone before. And sometimes, people think they have invented something new for a field they’re not familiar with, and want to publish that work there. But none of these reasons make the work acceptable.
  • The math graveyard. This is less common in information visualization, though it does happen: a paper filled with lots of gratuitous math that not only doesn’t help explain much, but makes the whole thing much harder to read. There is a lot more math in scientific visualization papers, and there it is also usually more justified. Sometimes, however, authors think they will look smart if everything is expressed in equations and mathematical symbols. That doesn’t work though, people see through that and will call you out on it.
  • The good idea. I find myself writing lots of reviews that say: good idea, bad paper! Many ideas start with good questions, but then veer off somewhere insane or useless, or the study doesn’t work out, etc. These are the most disappointing reviews to write, but they are also incredibly common. Take a break from paper writing, get some fresh air, and then look at your paper as if it were written by somebody else: does it do what you think it does?

Some of these are easier to avoid than others. I totally get the bait-and-switch, for example: you’re trying to do something big and awesome, but you just didn’t get it done in time, or your study didn’t give you the results you had hoped for. But then you need to go back and make sure the different parts of your paper fit together. Or do more work.

But the key is that bad papers are not usually the result of malice, but are often the result of a lack of judgment and quality control. There are also good reasons to submit bad papers, which I will get to in the next installment.

This is part of a five-part series on peer review in visualization. One posting a day will be posted throughout this week.

Teaser image by Troy Tolley, used under Creative Commons.

Published by

Robert Kosara

Robert Kosara is Senior Research Scientist at Tableau Software, and formerly Associate Professor of Computer Science. His research focus is the communication of data using visualization. In addition to blogging, Robert also runs and tweets.

4 thoughts on “Peer Review, Part 3: A Taxonomy of Bad Papers”

  1. Excellent post! This really sums up all the major failings of vendor-sponsored white papers too. The difference is that instead of having an academic paper rejected by a peer review, with a white paper you turn off all the prospects that you might have engaged. I’ve re-Tweeted and posted this post to my community of white paper writers. Great job!

  2. Good job at analyzing the types of bad paper!

    A similar category to “The complete lack of details fluff piece” is the lack of clear message / lessons learned.
    Also, another category I often encounter is the “Perfect paper” that has no discussion of limitations.

    With regard to visualization papers, I want to point the readers to an article on common pitfalls:
    “Process and Pitfalls in Writing Information Visualization Research Papers”
    Fully understanding the pitfalls requires some background on the five types of visualization paper. Here are short descriptions of those:
    From EuroVis guidelines: http://eurovis.swansea.ac.uk/call-for-participation-paper-types.htm
    From IEEE VIS guidelines: http://ieeevis.org/year/2014/info/call-participation/paper-submission-guidelines#PaperTypes

  3. Robert,

    There are resources out there, as you pointed out, for examples of Bad visualizations, and for Good. How about a resource, maybe a wiki, with a repository of visualization related papers?


    1. We mostly tend to be cautious about publicly pooh-poohing other people’s published papers. It would be interesting if people were willing to (anonymously) submit their trainwrecks somewhere, but I’m not sure that would work. A good way to get a sense of bad papers is to review for some conferences, though.
