Veni, Vidi, Validus

Recalibrating the Role of Validation in Visualization and HCI Research

Niklas Elmqvist
Oct 10, 2024
Image by MidJourney (v6.1).

Validation is a cornerstone principle of the scientific method. All scientific work should be validated in a replicable and reproducible manner. However, a persistent misconception in the visualization and human-computer interaction (HCI) field is that the IEEE VIS and ACM CHI conferences require a user study for all papers. In other words, for many people, the words “validation” and “evaluation” are synonymous. This notion, while widespread, is misguided. The truth is more nuanced and logical: the validation should be tailored to the claimed contribution of the research. All papers require validation, true, but validation does not always mean controlled empirical evaluation with human participants.

The Golden Rule: Match Validation to Contribution

The fundamental principle governing validation is straightforward: your research paper should prove that your claims are true. In practice, this means that the nature of your contribution dictates the appropriate validation method.

Many VIS and CHI papers propose new and supposedly better methods for users to achieve a task. Logically, such improvements must be demonstrated through a user study. However, if your contribution is of a different nature, your validation approach should reflect that difference.

Consider this: no one would expect a paper proposing a new database storage mechanism to include a user study. Yet, in the visualization and HCI fields, we often see reviewers reflexively asking for user studies regardless of the paper’s actual claims. This perpetuates the tired meme that visualization and HCI venues increasingly require user studies for publication. It’s simply not true, but it may turn into a self-fulfilling prophecy.

The Nested Model: A Framework for Validation

Tamara Munzner’s nested model, introduced in her 2009 paper “A Nested Model for Visualization Design and Validation” and elaborated in her textbook “Visualization Analysis and Design,” provides a useful framework for thinking about design and validation in visualization research. Munzner describes four nested layers:

  1. Domain problem characterization;
  2. Data/operation abstraction design;
  3. Encoding/interaction technique design; and
  4. Algorithm design.

Your research contribution can occur at any of these layers (or even at several of them), and each layer comes with its own typical threats to validity and corresponding validation methods.

For instance, if you’re proposing a new visual representation, the primary threat might be that the representation is not effective for a particular type of data. In this case, a graphical perception study with human participants would be an appropriate validation method.

However, if you’re introducing a new visualization toolkit, the main threat might instead be that the toolkit cannot express the desired visualizations. Here, validation might involve implementing common visualizations using the toolkit, analytically exploring its expressiveness, or conducting case studies with visualization developers. A user study would likely be inappropriate in this context.

A Call for Thoughtful Review

This concept of matching validation to contribution is logical and straightforward, yet it seems surprisingly difficult for many to grasp. In fact, I can’t believe we’re still fighting this fight. When I showed this position statement to Tamara, she pointed me to her own statement from the “How Much Evaluation is Enough?” panel held at IEEE VIS 2013. My opinions here are consistent with her thoughts from more than a decade ago. It is frustrating that we still encounter a cargo cult mentality where some reviewers believe that every paper requires a user study, and where some authors conduct user studies without knowing how or why. I am sure things have improved since that 2013 panel, but I still see evidence of this mentality as a papers chair, as a reviewer, and as an author myself.

So the next time you find yourself reviewing a paper, pay close attention to what the authors are claiming and judge their validation based on that claim. Similarly, when you are planning your next research project, make sure you have a firm grasp of what you are claiming, and then design your validation accordingly.

By moving beyond the “user study or bust” mentality, we can foster more diverse and innovative research in visualization and HCI. Let’s ensure that our validation methods truly serve to strengthen our claims rather than adhering to unfair and misguided expectations.

Remember: your contribution governs your validation. It’s time we fully embrace this principle in our research and review processes.


Niklas Elmqvist

Professor in visualization and human-computer interaction at Aarhus University in Aarhus, Denmark.