The early part of IEEE VIS 2016 is already behind us. This includes many workshops, tutorials, as well as the Doctoral Colloquium. It has been an interesting three days (counting Saturday here as well). This posting is less a report than a collection of observations from several discussions and talks.
The reason I’m including Saturday is that I served as a panelist/mentor at the Doctoral Colloquium. Its goal is to give students an opportunity to present their work and get feedback from people other than their advisors. It is held the day before anything else starts to protect the students’ work from prying eyes – though I really doubt that there is a realistic danger of somebody seeing an idea and scooping a student.
But either way, the Doctoral Colloquium is a great invention and hugely helpful to the students. It's also a lot of fun as a mentor, especially for somebody like me who no longer teaches – interactions with students are always delightful.
C4PGV: What’s a Workshop?
I gotta say, I feel a bit guilty. I was roped into submitting a paper to the workshop by my former student Aritra Dasgupta, and he wanted me to give the talk because he had to be at a different workshop. So I prepared a few slides about the paper and went to the event, not really knowing what to expect.
Next thing I knew, everybody was arguing with me (“It was like Robert against the world” is how somebody phrased it, though I forget who), and they canceled the breakout sessions to continue the discussion. It was a lot of fun, and I think we really discussed some very interesting questions. Not sure if we necessarily talked about the topics the workshop was supposed to be about, but it doesn’t seem to have bothered people all that much. Several people thanked me afterwards, so I guess it worked out.
The workshop did start out a bit on the boring side. And these things can just turn into a set of talks, like a mini-conference, even when the organizers set aside plenty of time for Q&A (as they did for both C4PGV and BELIV).
There is more to say about a few of the topics (design patterns, taxonomies, etc.), but I will write about those in more detail separately.
BELIV Keynote: Enrico Bertini
BELIV was all of today (Monday). It started with Enrico Bertini giving a nice history of the workshop and its influence on how evaluation is perceived and done in visualization. There are now more and different kinds of evaluation than there were at the time of the first BELIV in 2006. There are more qualitative studies, more replication, a slow move away from relying on p-values alone, and a slow but steady uptake of Bayesian statistics and confidence intervals. Not all of that is due to BELIV of course, but it certainly has had an effect.
One particularly interesting paper he pointed to was from BELIV 2006: Geoffrey Ellis and Alan Dix’s An Explorative Analysis of User Evaluation Studies in Information Visualisation.
For future directions, Enrico argued that we need to bridge theory and practice: evaluations can be very theoretical and optimized for nice results, but they don't necessarily do much for practitioners. Asking practitioners is helpful, though, for finding directions and interesting research questions that can inform studies.
Enrico also pointed out the need for a more systematic probing of the visualization design space – one of the things I also argued for in my paper (though not in my talk).
An interesting discussion after the keynote revolved around the question of publishing negative results. Tamara Munzner suggested turning the study into a tech report and then writing a paper once you've figured out what went wrong. Bernice Rogowitz had some good points about study design: assume the study will fail to show what you expect and figure out how to make the experiment work either way. While those things don't address the underlying issue, they are good strategies for more publishable studies.
Future of BELIV Panel
The panel at the end of the workshop was meant as a look into the future of BELIV. Tamara Munzner dove straight into a huge problem: the replication crisis in psychology (her slides are available but not nearly the same without her talking). She described it at some length and then basically warned us to be prepared to deal with similar issues. Visualization studies run into many of the same problems, so we need to be more careful about the issues Andrew Gelman discusses on his blog: investigator degrees of freedom, the garden of forking paths, p-hacking, etc.
It was also interesting to see her point people to a number of blogs, including the blog with the best title ever, sometimes i'm wrong, and Data Colada. Both deal with research ethics in psychology. She acknowledged that we don't have enough of that going on in visualization, but that we need more of that kind of backchannel where we can discuss meta-topics like how to move to more meaningful statistics, etc.
Perhaps BELIV can play a role in this, though not the way it is set up right now. It's mostly just papers, and while some of them were quite interesting, very few were on the kind of meta-level we need. Daniel Weiskopf suggested that we needed a visualization for visualization category of paper, but it's really more of a general meta-category.
On Being a Contrarian
During the Q&A after my talk at BELIV, Marti Hearst said something about playing the contrarian. I don’t remember her exact point or question, but I think she was a bit unsure about it.
The funny thing is that I find being contrarian incredibly easy and rewarding in the visualization community. It’s much more fun to argue than to agree, and it leads to lots of hugely interesting discussions and new ideas. And I always find that many people agree with my positions (finally somebody says something!). I’ve also never seen an argument in person turn nasty.
As with the C4PGV discussion, it leads to much livelier interactions than the usual polite but boring questions. Mine was one of three talks in my BELIV session (though also the last one), and after my somewhat contrarian talk, all questions were directed at me (sorry Jessica and Michael…).
I’ve also gotten a good number of positive comments about the BELIV paper already, and it seems like people are reading it – even people I had not expected to, like folks mostly working in scientific visualization.
Being contrarian is really easy in visualization. The community is open to new ideas, and in fact welcomes them. And I’m not the only troublemaker. Pierre Dragicevic has been arguing for better statistics in studies for a few years, and his efforts are starting to really pay off now. Others include Matthew Kay and Jessica Hullman, just looking at the people pushing on the statistics side.
We can’t just let the field stagnate. To push it ahead, it’s not enough to just keep doing little bits of work – we need to question our assumptions and try out the opposite of many of our positions.
Being contrarian is the only way.