
“This Is Not Vis”

On the perils of gatekeeping in peer review

Jun 10, 2025


I wager every seasoned visualization researcher has heard it in their reviews: "This is not data visualization." This phrase is wielded by self-appointed guardians of disciplinary boundaries with alarming frequency. While scope considerations have their place in peer review, this particular argument often masks intellectual gatekeeping that threatens the vitality of our field.

As a researcher who has often published at the fringes of visualization research (including on topics such as post-WIMP interfaces, ubiquitous computing, data physicalization, tangible devices, and olfactory displays), this is unfortunately a refrain I have heard throughout my career. I also see it all the time as a papers chair, program committee member, and reviewer. I’m writing this post now to call it out as a potentially harmful practice.

Consider how this dynamic played out with accessibility research in visualization. In 2018, my colleagues and I submitted a paper on visualization for blind and low-vision users called "Visualizing for the Non-Visual" (a play on an InfoVis 1995 paper called "Visualizing the Non-Visual"). At the time, accessibility was still an almost unknown topic at visualization conferences. Accordingly, the anonymous reviewers argued that the paper fell outside the scope of the conference. They questioned whether people who are blind could be relevant to a field premised on visual media. We had to pointedly argue in our rebuttal that rejecting research for blind users was contrary to the field’s core mission of aiding human cognition.

Almost a decade later, accessibility research flourishes at IEEE VIS. Dozens of contributions appear annually, addressing diverse populations and interaction modalities. The field is now richer for embracing this expansion. But this change required reconceptualizing visualization itself. Rather than insisting on purely visual representation, the community adopted a more generous interpretation: data visualization fundamentally concerns representing information to aid human cognition.

We're witnessing similar resistance today with AI integration. Many reviewers approach AI-enabled visualization tools with skepticism bordering on hostility. Comments dismiss LLM applications as unsuitable for "real" visualization work, echoing the conservatism that initially rejected accessibility research. While I agree that generative AI deserves a healthy dose of skepticism, we should be careful that this skepticism does not go so far as to stifle innovation.

This resistance to change has consequences. The neighboring field of HCI has embraced human-centered AI and accordingly seen submission numbers surge. As a case in point, ACM UIST 2025 experienced a 50% growth in submissions from 2024, much of it from AI-related work. Meanwhile, IEEE VIS submissions saw an approximate 5% decrease from 2024 to 2025. One possible cause may be our field's skepticism toward AI. In choosing between submitting to ACM UIST and IEEE VIS (the deadlines are only a week apart), it seems clear that most researchers working on LLMs and human-centered AI will choose the former.

This gatekeeping continues today. We recently submitted work on using an LLM to help domain experts articulate the metadata needed for visualization design — a quintessentially visualization-centric problem. Yet the paper was rejected, with the predictable dismissal: “not vis.” This argument misses the point entirely. Metadata sits at visualization’s core; understanding data context, semantics, and user needs is what drives effective design. If developing novel tools to help experts articulate these very requirements doesn’t qualify as visualization research, what does?

Fields that resist change risk stagnation. Academic disciplines live or die by their ability to attract new ideas, methods, and contributors. When gatekeeping becomes reflexive, it creates a closed loop where only familiar approaches receive validation. Innovation requires space for unfamiliar work to prove its value.

This is not a call to abandon quality standards. Of course, some submissions genuinely fall outside the field’s scope. The danger arises when the “not vis” critique becomes a reflexive dismissal of novel methods rather than a thoughtful evaluation of a paper’s core contribution to understanding data. Effective peer review must balance this necessary discernment with openness. Reviewers should approach unfamiliar contributions with curiosity rather than suspicion, considering whether novel work addresses core visualization challenges in unexpected ways.

Every new breakthrough initially looks different from established work; this is basically the definition of a breakthrough. Accessibility research seemed peripheral until it proved central. Immersive analytics was a fringe topic until it became mainstream. And today, AI still appears threatening and outside the scope of visualization to many members of our community.

If you ask me, this reflects a larger pattern in the visualization community: we are conservative and slow to change. We now face a choice: evolve with emerging technologies and expanding populations, or calcify around increasingly narrow definitions of legitimate scholarship. The vitality of data visualization as a research area hangs in the balance.

Here is what I ask: the next time you encounter work that doesn't fit familiar patterns, pause before invoking the "this is not vis" dismissal. The litmus test should simply be whether it serves visualization's fundamental purpose: helping humans understand data. Everything else is peripheral.


Written by Niklas Elmqvist

Villum Investigator, Fellow of the ACM and IEEE, and Professor of Computer Science at Aarhus University.
