When Walls Have Eyes
Effective collaborative visual analytics on wall-sized displays
Collaboration and data exploration are pivotal in making sense of the ever-growing sea of information in today’s society. We — Karthik Badam, Fereshteh Amini, Pourang Irani, and I — presented the Proxemic Lens to support such collaborative visual analytics on large wall-sized displays at IEEE VAST in 2016. The ideas from this work remain relevant even today, so I thought I would revive them here.
Imagine standing in front of a massive digital wall where you and your colleagues can manipulate data visualizations not just with traditional inputs like a mouse or keyboard, but with gestures, head movement, and spatial navigation. This is the idea behind the Proxemic Lens technique, which leverages both explicit gestures and implicit proxemics — essentially, how your position and movements relative to the display and other users can control the data visualization tools. This hybrid interaction method offers a seamless and intuitive way to navigate complex datasets.
Our research was driven by the need to improve how multiple users interact with large-scale displays. Traditional input methods tend to fall short in such environments, especially when several users need to collaborate. In particular, a wall-sized display is a shared resource that every collaborator must draw on at the same time. How can we intelligently and dynamically help multiple collaborators claim a personal territory on this display?
To address this problem, we wanted to create an interaction model that was both natural and efficient. Proxemics allows each user to control a personal lens (a focused area of interest on the display) simply by moving closer to or farther from the display, or by changing their orientation toward it. This meant that actions like zooming in on data or panning across a visualization could be performed without ever touching the screen.
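To make this concrete, here is a minimal sketch of what such an implicit mapping could look like in code. Everything in it (the tracking ranges, the linear distance-to-zoom mapping, and the display dimensions) is an illustrative assumption, not the actual implementation from the paper.

```typescript
// Implicit proxemic control: walking toward the wall increases lens magnification,
// stepping back returns to the overview, and lateral position plus head orientation
// steer the lens horizontally. All constants below are assumed example values.

const MIN_DIST = 0.5, MAX_DIST = 4.0;   // usable tracking range in meters (assumed)
const MIN_ZOOM = 1.0, MAX_ZOOM = 8.0;   // lens magnification range (assumed)
const DISPLAY_WIDTH_PX = 7680;          // example wall resolution in pixels
const TRACKED_WIDTH_M = 4.0;            // example width of the tracked floor area in meters

// Closer to the display => higher zoom; at MAX_DIST the lens shows the overview.
function zoomFromDistance(distanceM: number): number {
  const d = Math.min(Math.max(distanceM, MIN_DIST), MAX_DIST);
  const t = (MAX_DIST - d) / (MAX_DIST - MIN_DIST);   // 0 = far away, 1 = right at the wall
  return MIN_ZOOM + t * (MAX_ZOOM - MIN_ZOOM);
}

// Lateral position plus head yaw decide where on the wall the lens is centered.
function lensCenterX(lateralM: number, headYawRad: number, distanceM: number): number {
  const bodyX = (lateralM / TRACKED_WIDTH_M + 0.5) * DISPLAY_WIDTH_PX;
  const pxPerMeter = DISPLAY_WIDTH_PX / TRACKED_WIDTH_M;
  const gazeOffsetPx = Math.tan(headYawRad) * distanceM * pxPerMeter; // rough gaze projection onto the wall
  return Math.min(Math.max(bodyX + gazeOffsetPx, 0), DISPLAY_WIDTH_PX);
}
```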
One of the key findings from our study was the preference for implicit interactions for navigation and collaboration. Users enjoyed being able to walk up to the display to zoom in on a dataset or step back to see the broader picture. This physical movement mirrored natural human behavior, making the interaction feel more intuitive. However, for more precise actions, such as selecting a specific region or terminating a lens, users preferred explicit gestures, like mid-air hand movements.
Balancing implicit interactions inferred from the user’s behavior against explicit actions directly triggered by the user is even more important today, when tools are increasingly AI-infused. The lesson from the Proxemic Lens is clear: implicit actions work well for continuous interaction such as navigation, but explicit actions are needed for discrete operations such as deletion and selection.
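As a rough illustration of that split, here is a sketch of a controller that routes continuous tracking frames and discrete gestures through separate channels. The gesture names, the lens shape, and the zoom mapping are assumptions made for the example, not the system’s actual API.

```typescript
// Implicit/explicit split: continuous tracking frames only ever navigate an existing
// lens, while discrete operations (create, terminate) fire only on deliberate gestures.

interface Lens { ownerId: string; centerX: number; centerY: number; zoom: number; }

type TrackingFrame = { userId: string; distance: number; lateralX: number };
type Gesture =
  | { kind: "point"; userId: string; x: number; y: number }   // explicit: create or reposition a lens
  | { kind: "dismiss"; userId: string };                      // explicit: terminate the user's lens

class ProxemicLensController {
  private lenses = new Map<string, Lens>();

  // Implicit channel: applied on every tracking frame, navigation only.
  onTrackingFrame(f: TrackingFrame): void {
    const lens = this.lenses.get(f.userId);
    if (!lens) return;                             // implicit input alone never creates a lens
    lens.zoom = 1 + Math.max(0, 4 - f.distance);   // closer => higher magnification (assumed mapping)
    lens.centerX = f.lateralX;
  }

  // Explicit channel: discrete, potentially destructive operations require a deliberate gesture.
  onGesture(g: Gesture): void {
    if (g.kind === "point") {
      this.lenses.set(g.userId, { ownerId: g.userId, centerX: g.x, centerY: g.y, zoom: 2 });
    } else {
      this.lenses.delete(g.userId);                // "dismiss" is the only way to remove a lens
    }
  }
}
```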
Consider a scenario where two analysts are examining a multiscale visualization of time-series data. One analyst points to a region of interest with a mid-air gesture, creating a lens that zooms in on that area. Meanwhile, their colleague approaches with their own lens. As they come closer, their lenses automatically merge, allowing them to combine and compare their data insights effortlessly. This fluid transition between individual and collaborative work modes is central to the Proxemic Lens.
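The merging behavior could be sketched as follows, under the assumption that lenses are axis-aligned rectangles and that a merge is triggered when two users stand within a threshold distance of each other; both the threshold and the bounding-box union are choices made for this example, not values from the paper.

```typescript
// Proximity-based lens merging: when two users come close enough, their
// personal lenses are combined into a single shared lens spanning both regions.

interface UserLens { ownerIds: string[]; x: number; y: number; width: number; height: number; }

const MERGE_DISTANCE_M = 1.0; // assumed distance between users that triggers a merge

// Users are tracked on the floor plane in front of the wall (x = lateral, z = depth).
function usersAreClose(a: { x: number; z: number }, b: { x: number; z: number }): boolean {
  return Math.hypot(a.x - b.x, a.z - b.z) < MERGE_DISTANCE_M;
}

// Merge two lenses into one shared lens covering both regions of interest.
function mergeLenses(a: UserLens, b: UserLens): UserLens {
  const x = Math.min(a.x, b.x);
  const y = Math.min(a.y, b.y);
  return {
    ownerIds: [...a.ownerIds, ...b.ownerIds],
    x,
    y,
    width: Math.max(a.x + a.width, b.x + b.width) - x,
    height: Math.max(a.y + a.height, b.y + b.height) - y,
  };
}
```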
Our evaluation showed significant user performance improvements with this hybrid technique, particularly in environments where collaboration and rapid data exploration are critical. The feedback was overwhelmingly positive, with users appreciating the balance between implicit and explicit interactions.
With the rise of remote work and virtual collaboration tools, there is a growing need for intuitive and efficient interaction models that can bridge the gap between physical and digital workspaces. Large displays remain a staple in many collaborative environments, from corporate boardrooms to academic research labs, making the insights from our study valuable for ongoing innovations in human-computer interaction. As we continue to explore new ways to interact with data, the principles behind the Proxemic Lens — leveraging natural human movements and spatial relationships — offer a robust foundation for future developments. By revisiting and building upon this work, researchers and practitioners can further enhance the tools that facilitate our understanding of complex data in collaborative settings.
Full Citation
- Sriram Karthik Badam, Fereshteh Amini, Niklas Elmqvist, and Pourang Irani. Supporting Visual Exploration for Multiple Users in Large Display Environments. In Proceedings of the IEEE Conference on Visual Analytics Science & Technology, 2016.