Great Validations

The Personal History, Adventures, Experiences, and Observations of a Visualization Researcher

Niklas Elmqvist
Nov 22, 2024
Image by MidJourney (v6.1).

The recent discussion at the IEEE VIS 2024 panel on “(Yet Another) Evaluation Needed? A Panel Discussion on Evaluation Trends in Visualization” highlighted an ongoing challenge in visualization research: how do we systematically identify and address validation threats? As a panelist, I found that these discussions echoed questions I frequently receive from students about potential pitfalls in visualization research validation.

Through years of evaluating visualization research — both as a researcher and educator — I’ve observed patterns in validation threats and methods that align with Tamara Munzner’s nested model for visualization design and validation. These observations, coupled with student questions about validation approaches, led me to sketch out an initial mapping between common threats and validation methods across Munzner’s four levels: domain problem characterization, data/operation abstraction design, encoding/interaction technique design, and algorithm design.

Below is this initial mapping. While informal and not exhaustive, it provides a starting point for thinking systematically about validation planning:

[Table: common validation threats and matching validation methods at each of Munzner’s four levels.]

An important observation from this exercise is that validation methods aren’t exclusively tied to specific threats. A single validation approach might address multiple threats, while some threats might require multiple validation methods for comprehensive coverage. For instance, expert reviews can validate both problem characterization and data abstraction choices, while user studies might simultaneously validate encoding effectiveness and interaction efficiency.
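To make this many-to-many relationship concrete, here is a minimal Python sketch of one way the mapping could be represented. The level names follow Munzner’s model, but the specific threat and method entries are only the illustrative examples from the paragraph above, not the actual contents of the mapping:

```python
# A sketch of the threat-to-method mapping as a many-to-many relation.
# Levels follow Munzner's nested model; the entries below are illustrative
# placeholders drawn from the examples in this post, not a complete catalog.

from dataclasses import dataclass

MUNZNER_LEVELS = (
    "domain problem characterization",
    "data/operation abstraction design",
    "encoding/interaction technique design",
    "algorithm design",
)

@dataclass(frozen=True)
class Threat:
    level: str          # which of Munzner's four levels the threat lives at
    description: str

# One validation method can address several threats...
VALIDATES = {
    "expert review": {
        Threat("domain problem characterization", "mischaracterized the problem"),
        Threat("data/operation abstraction design", "chose the wrong abstraction"),
    },
    "user study": {
        Threat("encoding/interaction technique design", "encoding is ineffective"),
        Threat("encoding/interaction technique design", "interaction is inefficient"),
    },
}

# ...and, inverted, one threat may need several methods for full coverage.
def methods_for(threat: Threat) -> set[str]:
    """Return every validation method that addresses the given threat."""
    return {m for m, threats in VALIDATES.items() if threat in threats}
```

Representing the mapping as a relation rather than a one-to-one lookup is the whole point: validation planning means checking that every threat at every level is covered by at least one method, not pairing each threat with a single fixed technique.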

This topic of systematic validation approaches has garnered increasing attention in the visualization community, as evidenced by the biennial BELIV workshop. First established in 2006, BELIV serves as an international forum for discussing visualization research methods, from novel evaluation approaches to methods for establishing the validity and scope of visualization knowledge. The workshop’s broad scope reflects the diversity of our field’s research methods and the importance of rigorous validation. I think exercises like the one above could be a useful addition to this venue.

Looking ahead, this initial mapping could benefit from expansion and refinement. A more systematic review could identify additional threats at each level, catalog the effectiveness of validation methods for different threats, and develop clearer guidance for selecting appropriate validation strategies. The visualization research community would benefit from more structured approaches to validation planning, moving beyond simple checklists to a nuanced understanding of how different validation methods address specific threats.

Written by Niklas Elmqvist

Villum Investigator, Fellow of the ACM and IEEE, and Professor of Computer Science at Aarhus University.
