It’s the Reviews, Silly

Niklas Elmqvist
6 min read · Jun 22, 2017


It’s that time of the year again: Tim Dwyer, Steve Franconeri, and I (the InfoVis 2017 papers co-chairs) just sent out notifications for each of the 170 submissions to this year’s InfoVis. That’s a total of 40 (conditional) accepts and 130 (definitive) rejects, for a provisional acceptance rate of 23.5%. The other two sister conferences, SciVis and VAST, have acceptance rates comparable to ours. This is obviously a lot of bad news for a lot of people, and it has spawned critical discussion and controversy in the community, some of it public, but most of it no doubt private, playing out among individual author teams.

First of all, let’s acknowledge that a 22–25% acceptance rate (the approximate range these conferences have settled into over the last few years) is low. It is by no means as low as in some other research fields, where conference acceptance rates below 10% are common. Furthermore, accurate or not, a low acceptance rate is also taken as a sign of quality in many performance review processes, where a conference with an acceptance rate above 30–35% is often seen as second-tier or worse.

I definitely agree that there is an inherent danger in maintaining a low acceptance rate over a long period of time, particularly because the peer review process used to make these decisions is far from perfect. One possible outcome is that the field becomes closed, rewarding only specific kinds of submissions from specific kinds of researchers, which makes it hard for new contributors to join and eventually leads to the field stagnating and becoming divorced from practice. These problems are the topic of another blog post, or perhaps better left for someone wiser than me to ponder.

Having served as papers co-chair for InfoVis twice now, it’s clear to me that there really is no easy solution to low acceptance rates. In the case of the VIS conferences, the real culprits are (1) the limited physical size of the event, and (2) the limited page budget that IEEE TVCG allocates for the proceedings. The first factor is difficult to tackle: VIS can’t really grow much more because it already spans an entire week, and running multiple parallel tracks is consistently unpopular with attendees. The second factor is equally intractable. For several years now, all papers have been published as a special issue of the IEEE TVCG journal. This has several benefits, such as the archival nature and two-round journal review process of TVCG, but it has a significant drawback: there is a fixed page budget for the special issue. In other words, there is a hard limit on how many papers can be presented each year across all three conferences. Specific solutions that others have proposed include more specialized symposia co-located with the main conferences; conference papers, workshop papers, and posters for early or incomplete work; and the possibility for regular TVCG papers to be presented at the conference.

However, at the risk of sounding snobbish, I would argue that the low acceptance rate is not the main problem. There are now plenty of visualization venues where good work will eventually be published. One conference has to be the most selective one, and this role has fallen upon the IEEE VIS conferences by virtue of their long standing and increasingly high standards. This means that the 130 submissions rejected from InfoVis 2017 need not wait in limbo until next year’s conference; they can be resubmitted to any number of conferences or journals of good standing (I even wrote a blog post about this some time back). If the authors take the reviewer feedback into account, they will have an even better chance of their work being accepted.

No, a low acceptance rate is not a problem as long as VIS has fair, equitable, and constructive reviewing. In other words, my point here is that we should continually work to improve the quality of the peer review process itself, i.e., the mechanism we use to select which papers get accepted. There are two sides to this: (1) ensuring that the papers selected for acceptance truly represent the top portion of the work in the field, and (2) ensuring that the papers not selected receive useful feedback that will enable them to be accepted at a later date. If we can improve peer reviewing, we will ensure that VIS papers really are the best of the field, and we will raise the floor for everyone in the visualization community by also helping the submissions that were not accepted.

How do we improve the quality of peer review? That’s a tricky question, but I have a few suggestions for where to start (non-exhaustive; your ideas are welcome):

  • Providing reviewing guides — Critically reading papers and writing reviews is not trivial, and many people are forced to learn this on their own with no specific training. My own guide gives my viewpoint on what constitutes a good review; several other guides exist.
  • Highlighting common mistakes — Many common arguments that are routinely made in reviews to reject a paper are not necessarily valid. My guide on this topic highlights these mistakes and explains why such arguments should not be used.
  • Mentoring new reviewers — Many faculty members already farm out reviewing to their students, but this only works if the faculty member also tutors the students on how to review and then carefully checks the reviews for quality before they go out. (If not, this is really a form of slave labor in that the students do the work but receive none of the credit.) One model for how to formalize this is the ACM CSCW conference, which recently instituted a student reviewer mentorship program where new reviewers are paired up with seasoned mentors who help them produce high-quality reviews.
  • Sample review databases — A database of real reviews would be a good resource for new and experienced reviewers alike, but the idea is fraught with problems, particularly with regard to the confidentiality and integrity of the review process. Even doing this for accepted papers is difficult unless it is explicitly stated at the beginning of the review process; otherwise, it is potentially a violation of the reviewers’ copyright and of the stated intended use of the reviews.
  • Open peer review — An experimental and somewhat controversial practice that some journals and conferences in other fields have prototyped is to remove the anonymity from the review process. (For example, the alt.chi track at ACM CHI has experimented with this, although it is apparently being done away with for CHI 2018.) Preliminary findings from these experiments suggest that open peer review is feasible in that it leads to reviews of equal and often higher quality, with the downside that more potential reviewers turn down review invitations and the process on average takes longer to complete.
  • Creating reviewer scorecards — Reviewing is often done in a vacuum, and it can be difficult for a reviewer to calibrate themselves against the entire review community for a specific journal or conference. A reviewer scorecard shows personalized feedback to each reviewer in the context of the entire reviewer pool: their own average score, review length, and so on, in relation to the averages for the entire pool. This would allow them to reflect on their own reviewing practices and potentially change their behavior next time. We experimented with this for InfoVis 2016, and intend to continue the experiment for InfoVis 2017 (a minimal sketch of what such a scorecard computation might look like follows this list). I will write more on this topic in a future blog post.
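
To make the scorecard idea a little more concrete, here is a minimal sketch in Python of the kind of comparison such a report could contain. The data fields, names, and numbers are hypothetical illustrations, not the actual InfoVis 2016 implementation.

```python
from statistics import mean

# Hypothetical review records: (reviewer, overall score on a 1-5 scale, review length in words).
reviews = [
    ("reviewer_a", 4.0, 650),
    ("reviewer_a", 2.5, 410),
    ("reviewer_b", 1.5, 120),
    ("reviewer_b", 2.0, 95),
    ("reviewer_c", 3.5, 780),
]

# Pool-wide averages serve as the calibration baseline.
pool_score = mean(score for _, score, _ in reviews)
pool_length = mean(length for _, _, length in reviews)

def scorecard(reviewer):
    """Return one reviewer's averages alongside the pool averages."""
    own = [(score, length) for name, score, length in reviews if name == reviewer]
    own_score = mean(score for score, _ in own)
    own_length = mean(length for _, length in own)
    return {
        "avg score (you vs. pool)": (round(own_score, 2), round(pool_score, 2)),
        "avg length in words (you vs. pool)": (round(own_length, 1), round(pool_length, 1)),
    }

for name in ("reviewer_a", "reviewer_b", "reviewer_c"):
    print(name, scorecard(name))
```

The point of such a report is not to rank reviewers, but to let each reviewer see where they sit relative to the pool and adjust accordingly.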

To reiterate, I don’t think that the low acceptance rate of VIS is a significant problem as long as review quality is high. As I stated earlier, it is inevitable that conferences and venues will self-organize into rankings. In visualization, it just so happens that the IEEE VIS conferences hold the top spots. There is nothing wrong with this. My recommendation is to continue improving the quality of the review process so that the right papers get published, and so that even rejected work gets useful feedback for improvement.

Originally published at sites.umiacs.umd.edu on June 22, 2017.


Written by Niklas Elmqvist

Professor in visualization and human-computer interaction at Aarhus University in Aarhus, Denmark.