

Peer Review, Part 3: A Taxonomy of Bad Papers

Reviewing is great when you get a good paper where you can make some suggestions to make it even better, and everybody’s happy. Bad papers are much less fun, but they are also much more common. Here are some examples I’ve seen and that I keep seeing.

  • The completely insane. I once got a paper to review that was two pages long, and the second page wasn’t even full. There was a table on the second page, and the rest was completely unintelligible writing. Tamara Munzner likes to refer to such papers as “written in crayon” – a mental image I find particularly amusing. The good thing about these is that they don’t usually take much time to spot and reject.
  • The deeply flawed study. These are becoming more and more common as more people run studies (user studies, perceptual studies, etc.). While more studies are a good thing, they need to be done with care. I’ve seen quite a few papers lately where the study design is terrible, or where the study doesn’t actually support the claim it is supposed to.
  • The marketing fluff piece. There are actually several variants of this. One is essentially a marketing piece about some piece of software that is disguised (often badly) as a systems or application paper. There’s nothing wrong with a good application paper (in fact, there should be more), but there needs to be content and a contribution. Also, the language needs to be a bit more academic and a bit less over the top.
  • The complete lack of details fluff piece. Another variation of this is the one where everything is written with such a complete lack of any specific details that it’s impossible to know if any of it is real or not. Like a wet bar of soap, the paper slips through your fingers and won’t let you get a clear grasp on what it’s trying to tell you. These can be maddening, because they make you feel like you’re too dumb to get them, when it’s really the paper’s fault. Authors sometimes don’t want to be too specific when they’re not sure or they’re afraid that they’ve made a bad decision – but trying to paper over these issues just makes them a bigger target for reviewers.
  • The bait-and-switch. Making promises in the title, abstract, and introduction that you then don’t keep is a major no-no, and a guarantee of bad reviews. You’re setting expectations and getting people excited. If you then don’t deliver, you really only have yourself to blame for getting rejected. Things change as a paper gets written and evolves, but those changes need to be reflected throughout the paper.
  • The reinvention of the wheel. This happens more than you’d think. The Quilt Plots paper is just a particularly egregious example, but there are many more. Sometimes, people get carried away working on something without realizing that it has been done before, or the project changes direction in a way that leads them down the same path somebody else has already gone. And sometimes, people think they have invented something new for a field they’re not familiar with, and want to publish that work there. But none of these reasons makes the work acceptable.
  • The math graveyard. This is less common in information visualization, though it does happen: a paper filled with lots of gratuitous math that not only doesn’t help explain much, but makes the whole thing much harder to read. There is a lot more math in scientific visualization papers, and there it is also usually more justified. Sometimes, however, authors think they will look smart if everything is expressed in equations and mathematical symbols. That doesn’t work, though: people see through it and will call you out on it.
  • The good idea. I find myself writing lots of reviews that say: good idea, bad paper! Many ideas start with good questions, but then veer off somewhere insane or useless, or the study doesn’t work out, etc. These are the most disappointing reviews to write, but they are also incredibly common. Take a break from paper writing, get some fresh air, and then look at your paper as if it were written by somebody else: does it do what you think it does?

Some of these are easier to avoid than others. I totally get the bait-and-switch, for example: you’re trying to do something big and awesome, but you just didn’t get it done in time, or your study didn’t give you the results you had hoped for. But then you need to go back and make sure the different parts of your paper fit together. Or do more work.

The key, though, is that bad papers are usually not the result of malice, but of a lack of judgment and quality control. There are also good reasons to submit bad papers, which I will get to in the next installment.


This is part of a five-part series on peer review in visualization. One posting will appear each day throughout this week.

Teaser image by Troy Tolley, used under Creative Commons.

Posted by Robert Kosara on January 21, 2014. Filed under peer-review.