The Road to Melbourne

Visualization research from Aarhus and UMD at IEEE VIS 2023.

Niklas Elmqvist
8 min read · Aug 8, 2023

This has been something of a banner year for my Ph.D. students. Over the last twelve months, we’ve had five IEEE TVCG papers accepted, and another four were accepted at the most recent IEEE VIS 2023 deadline. That means we’re bringing a total of nine(!) papers to Melbourne, this year’s venue for the IEEE VIS conference, which takes place in October. It will also be my first visit to Australia, so I am particularly excited about attending. In anticipation of the conference, here is a brief overview of each of the nine papers.

First up is “Sensemaking Sans Power: Interactive Data Visualization Using Color-Changing Ink”, led by my Ph.D. student Biswaksen Patnaik and co-advisor and collaborator Huaishu Peng (an assistant professor in the UMD CS department). The basic idea of the paper builds on a new generation of color-changing inks that vary their color in response to external stimuli such as temperature, UV light, humidity, and even kinetic energy. Just think back to toys from your childhood that changed color, for example with temperature, like this Hot Wheelz bus that belongs to my son:

How can such inks be used to create interactive visualizations without the need for digital computation or, indeed, power? In the paper, Biswaksen explores this idea by first presenting a design space for color-changing inks and then showing several examples, including the following one for interactive paper textbooks:

Our next paper is work by my former Ph.D. student, now Dr. Eric Newburger, and collaborator Michael Correll, titled “Fitting Bell Curves to Data Distributions using Visualization.” In the paper, Eric studied how well people can fit a normal (Gaussian) distribution to noisy data samples represented using four different visualization techniques (bar histograms, Wilkinson dotplots, boxplots, and strip plots) in a preregistered and crowdsourced experiment.

Analyzing results from 117 Turkers showed that people are good at estimating the mean of a distribution but tend to overestimate its standard deviation. We call this the “umbrella effect” because it is as if people want to shelter the data from the heavens above using the curve.
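For intuition, here is a minimal sketch of the underlying task, with made-up data (participants in the study performed this fit by eye, not with code):

```python
# Minimal sketch: fit a Gaussian to noisy samples and overlay the bell
# curve on a bar histogram. Hypothetical data; the study's participants
# adjusted the curve visually rather than computing the fit.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(7)
samples = rng.normal(loc=5.0, scale=2.0, size=200)  # noisy data samples

mu, sigma = stats.norm.fit(samples)  # maximum-likelihood mean and std

fig, ax = plt.subplots()
ax.hist(samples, bins=20, density=True, alpha=0.5, label="data")
xs = np.linspace(samples.min(), samples.max(), 200)
ax.plot(xs, stats.norm.pdf(xs, mu, sigma),
        label=f"fit: mean={mu:.2f}, sd={sigma:.2f}")
ax.legend()
plt.show()
```

The umbrella effect corresponds to participants consistently choosing a wider curve than the maximum-likelihood fit above.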

Next is “uxSense: Supporting User Experience Analysis with Visualization and Computer Vision” by Andrea Batch, Yipeng Ji, Mingming Fan, Jian Zhao, and moi. uxSense is an interactive visual analytics tool that uses computer vision to extract UX metrics from screen and video recordings and then visualizes them in a web-based timeline panel, enabling UX researchers to analyze results from usability studies.

Our evaluation engaged six UX professionals from big tech companies in two-hour sessions where they used uxSense to analyze recordings of a Tableau user learning to visualize data with that tool. While not a replacement for human experts analyzing the video and audio recordings, our participants felt that the automated methods were excellent complements to such human labor.
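uxSense itself relies on machine learning models for its feature extraction, but a toy version of the idea, computing one crude proxy metric from a recording and plotting it as a single timeline track, might look like this (the filename is hypothetical):

```python
# Hedged sketch, not uxSense's actual pipeline: compute frame-to-frame
# pixel change as a crude stand-in for on-screen activity, then plot it
# as one timeline track. "session.mp4" is a hypothetical recording.
import cv2
import matplotlib.pyplot as plt

cap = cv2.VideoCapture("session.mp4")
prev, activity = None, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev is not None:
        activity.append(cv2.absdiff(gray, prev).mean())  # mean pixel change
    prev = gray
cap.release()

plt.plot(activity)  # one track of the kind uxSense lays out on its timeline
plt.xlabel("frame")
plt.ylabel("screen activity")
plt.show()
```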

In “Towards Visualization Thumbnail Designs that Entice Reading Data-driven Articles”, my collaborators Hwiyeon Kim, Joohee Kim, Yunha Han, Hwajung Hong, Oh-Sang Kwon, Young-Woo Park, Sungahn Ko, and Bum Chul Kwon, and I study how to design effective thumbnails for data-driven articles. Basically, the idea is to figure out how best to create miniaturized versions of a visualization that help the reader decide whether to follow the link to the full article. The grid below shows 16 thumbnails organized into four types (columns) and four topics (rows). The paper ends with a set of design guidelines for effective visualization thumbnails.

Last of our five IEEE TVCG papers is “The Reality of the Situation: A Survey of Situated Analytics”, with coauthors Sungbok Shin, Andrea Batch, Pete Butcher, and Panos Ritsos. This 16-page TVCG survey summarizes the current state of the art in situated analytics: the use of (1) interactive visualizations of (2) data that are (3) deployed using Augmented Reality techniques, that (4) utilize the user’s physical location, and that (5) integrate analytical reasoning. Here is an example of a fictional situated analytics (SA) application:

The paper surveys a total of 312 candidate papers, yielding a set of 75 papers for further classification and review. Of these, we eliminate an additional 24 because they do not fully support situated analytical reasoning, that is, the full hierarchy of analytical functionality: read, explore, schematize, and report.

Next come the papers that were actually submitted to the IEEE VIS 2023 deadline and accepted for publication. Just like the five papers above, these are also published in the IEEE Transactions on Visualization and Computer Graphics (in a special issue dedicated to the conference).

First is Dr. Newburger, back with another contribution: “Visualization According to Statisticians: An Interview Study on the Role of Visualization for Inferential Statistics.” Here, Eric stepped out of his comfort zone as a statistician and engaged a group of 18 fellow professional statisticians in an interview study designed to understand the visualization practices of this very expert group (his participants had a combined 350 years of statistical experience). The following comic captures some of the sentiments expressed by these statisticians:

The findings indicate that statisticians make extensive use of visualization during all phases of their work, not just when reporting results. In fact, their mental models of inferential methods tend to be mostly visual. Finally, Eric found that many statisticians abhor dichotomous thinking, which is not very surprising, but promising for the utility of visualization and human reasoning in sensemaking tasks.

Dataopsy is a novel interactive tool for visual analysis developed by my talented Ph.D. student Md. Naimul Hoque. In the paper titled “Dataopsy: Scalable and Fluid Visual Exploration using Aggregate Query Sculpting”, he presents the system and its underlying visualization paradigm: aggregate query sculpting (AQS). AQS is a “born-scalable” human-data interaction approach for multidimensional data and multivariate graphs. An AQS visualization starts with a single visual mark representing an aggregation of the entire dataset. The user then progressively explores the dataset through a sequence of operations abbreviated as P6: pivot (facet an aggregate based on an attribute), partition (lay out a facet in space), peek (see inside a subset using an aggregate visual representation), pile (merge two or more subsets), project (extract a subset into a new substrate), and prune (discard an aggregate not currently of interest).
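Dataopsy is an interactive system, but as rough intuition for what a few of the P6 operations compute under the hood, here is a hypothetical pandas approximation (the dataset and column names are invented for illustration):

```python
# Hedged sketch, not Dataopsy's actual code: approximate a few P6
# operations with pandas aggregations over a tiny invented dataset.
import pandas as pd

df = pd.DataFrame({
    "sex":     ["Male", "Female", "Male", "Female", "Male"],
    "split":   ["train", "test", "test", "train", "train"],
    "correct": [1, 0, 1, 1, 0],
})

# Start from a single aggregate of the entire dataset...
print(len(df), df["correct"].mean())

# ...then "pivot" on an attribute and "peek" at an aggregate per facet:
print(df.groupby("sex")["correct"].mean())

# "partition" the facets further, e.g., sex x split accuracy:
print(df.groupby(["sex", "split"])["correct"].mean())

# "project": extract one subset into a new substrate for further work:
males = df[df["sex"] == "Male"].copy()
```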

Evaluating fairness in an income prediction dataset. (a) Participant (P1) starts by partitioning 45,000 data rows into MALE-FEMALE and TEST-TRAINING subsets. (b) Using the Y peek action, P1 evaluates the accuracy of the model on the subsets. (c) P1 further partitions male and female subsets by race. P1 projects FEMALE-WHITE and MALE-WHITE (i.e., all WHITE individuals) into a new substrate. (d) P1 now partitions the new substrate (all WHITE) horizontally by gender and dataset and vertically by marital status.

Naimul demonstrates the AQS approach through several case studies and application examples: (1) fairness evaluation for an ML training dataset; (2) using Dataopsy to adapt a screenplay from a book manuscript; (3) understanding 1.7 billion taxi trips in New York City; and (4) scientometric analysis of IEEE VIS publications.

Pramod Chundury has led the way for our efforts on accessible data visualization for a long time, and in “TactualPlot: Spatializing Data as Sound using Sensory Substitution for Touchscreen Accessibility” he presents one of the first practical data sonification tools for large-scale multidimensional data designed for smartphones (coauthors Yasmin Reyazuddin, J. Bern Jordan, Jonathan Lazar, and myself). Touch-based smartphones have quickly become very popular among blind individuals because they essentially amount to a screen reader that you always carry with you. With TactualPlot, Pramod tackles the problem of how best to allow a blind person to explore 2D data using such a touchscreen: essentially, by “touching” the data in a scatterplot. However, since current touchscreens cannot generate haptic feedback beyond vibration, the touched data is instead sonified using spatial audio.

This idea of rerouting output from one sense (touch) to another (sound) is an example of crossmodal sensory substitution, and TactualPlot represents the first practical implementation to achieve this. Pramod designed the prototype in close collaboration and consultation with Yasmin Reyazuddin, a blind data and UX researcher who is one of our long-time collaborators and also a co-author of the paper. In fact, a large portion of the paper’s validation is a series of design workshops with Yasmin.
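As a rough sketch of the sonification mapping (not TactualPlot’s actual code; the specific mapping choices here are my assumptions), one could pan a tone left or right based on the finger’s horizontal position and raise its pitch with the density of scatterplot points under the finger:

```python
# Hedged sketch of crossmodal substitution: finger x-position -> stereo
# pan, local point density under the finger -> pitch. Hypothetical data
# and mapping; renders one short tone to a WAV file.
import numpy as np
from scipy.io import wavfile

def touch_tone(x, density, sr=44100, dur=0.15):
    """x in [0,1] -> left/right pan; density in [0,1] -> 200-1200 Hz."""
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    tone = np.sin(2 * np.pi * (200 + 1000 * density) * t)
    left, right = (1 - x) * tone, x * tone  # simple amplitude panning
    return np.stack([left, right], axis=1)

pts = np.random.rand(500, 2)                       # hypothetical scatterplot
finger = np.array([0.3, 0.7])                      # touched location
near = np.linalg.norm(pts - finger, axis=1) < 0.1  # points under the finger
density = min(near.sum() / 50, 1.0)

wavfile.write("touch.wav", 44100,
              (touch_tone(finger[0], density) * 32767).astype(np.int16))
```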

Finally, the ninth and final paper we will be presenting at VIS 2023 in Melbourne is “Wizualization: A ‘Hard Magic’ Visualization System for Immersive and Ubiquitous Analytics” by Andrea Batch, Pete Butcher, Panos Ritsos, and myself. The result of several years of intense research and close collaboration between the four of us, Wizualization is an extraordinary feat of engineering: a web-based eXtended Reality (XR) platform for ubiquitous and immersive analytics supporting visualization authoring and analysis for both HMDs (such as the Microsoft HoloLens 2 and the Apple Vision Pro) and handheld AR. Based on a motivating scenario running the entire length of the paper (see below), the Wizualization system is built on a “hard magic” metaphor where strict rules are provided for authoring data displays using gestures, voice, and touch interaction.

The system is based on a Grammar of Graphics for ubiquitous and immersive analytics, allowing a person to sequence interactive commands into a specification in real time. This is the killer app of UA/IA data analytics for XR, and Andrea, Pete, Panos, and I hope to build on this system for many years to come.
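To make the command-sequencing idea concrete, here is a hypothetical sketch (not Wizualization’s actual API; all names are invented) of how a series of commands, each triggered by a gesture, voice token, or touch, might fold into a declarative grammar-of-graphics specification:

```python
# Hypothetical sketch of "hard magic" spec building: each command adds
# one component to a growing declarative visualization specification.
def cast(spec, component, value):
    """Add one spell component to the spec, returning a new spec."""
    new_spec = dict(spec)
    new_spec[component] = value
    return new_spec

spec = {}
# A user utters/gestures a sequence of commands in real time:
for component, value in [
    ("data",  "air_quality.csv"),  # hypothetical dataset
    ("mark",  "point"),
    ("x",     "time"),
    ("y",     "pm2_5"),
    ("color", "station"),
]:
    spec = cast(spec, component, value)

print(spec)  # a complete specification, ready to render in the XR scene
```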

IEEE VIS 2023 is turning out to be quite an event for my students and me, and we’re looking forward to a lot of productive discussions at the conference. If any of these papers or projects seem interesting to you, please feel free to reach out (or catch us at the conference in October)!


Niklas Elmqvist

Professor in visualization and human-computer interaction at Aarhus University in Aarhus, Denmark.