Wave of Hand, Command of Screen
Becoming a Datamancer in a Multi-Screen World
Technology has always promised us magical interfaces — from Minority Report’s dramatic gestures to Iron Man’s holographic displays. Together with a team of collaborators at the University of Maryland and Aarhus University, we have created Datamancer, a wearable system that brings that magic to life, letting users control data visualizations across multiple screens with simple hand movements. My Ph.D. student Biswaksen Patnaik — who, incidentally, is also on the job market! — will be presenting this work at the upcoming ACM CHI 2025 conference in Yokohama, Japan at the end of April.
Imagine walking into a conference room filled with screens showing complex data. Instead of fumbling with multiple mice or remote controls, you simply point at a screen to select it, then use hand gestures to manipulate the visualizations. Zoom into a map with one hand while selecting data points with the other. Transfer images between displays with a pinch and release. This is what Datamancer enables: a seamless way to interact with data in spaces with multiple displays.
The problem we tackled is universal: when we analyze data across multiple screens, switching between different input devices creates frustrating barriers. Traditional solutions such as motion capture systems require expensive equipment permanently installed in a room. Other approaches limit us to specific areas or require bulky headsets. We wanted something portable that works anywhere.
Our solution (see figure above) combines two key components: a small camera mounted on a ring that you wear on your finger, and a gesture sensor worn on your chest. The ring camera lets you select any screen by pointing at it, while the chest sensor tracks your hand movements in 3D space. All of this is powered by a small computer that fits on your body.
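For the technically curious, here is a minimal sketch of what that event flow might look like. Everything below — the names, interfaces, and polling scheme — is an illustrative assumption for exposition, not Datamancer’s actual code: the idea is simply that pointing with the ring camera selects a target display, and bimanual hand poses from the chest sensor are then routed to that display until a new one is selected.

```typescript
// Hypothetical sketch (illustrative names, not the actual Datamancer code):
// pointing with the ring camera retargets gestures to a display, and the
// chest sensor's bimanual hand poses are forwarded to that display only.

type HandPose = { x: number; y: number; z: number; pinching: boolean };

interface RingCamera {
  /** Resolves to a display ID when the camera sees a screen being pointed at. */
  detectPointedDisplay(): Promise<string | null>;
}

interface ChestSensor {
  /** Invokes the handler whenever a new pair of hand poses is tracked. */
  onHandPoses(handler: (left: HandPose, right: HandPose) => void): void;
}

class GestureRouter {
  private selectedDisplay: string | null = null;

  constructor(
    ring: RingCamera,
    chest: ChestSensor,
    private send: (displayId: string, message: object) => void,
  ) {
    // Periodically check the ring camera: pointing at a screen retargets
    // all subsequent gestures to that display.
    setInterval(async () => {
      const display = await ring.detectPointedDisplay();
      if (display !== null) this.selectedDisplay = display;
    }, 100);

    // Forward bimanual hand poses to the currently selected display only.
    chest.onHandPoses((left, right) => {
      if (this.selectedDisplay === null) return;
      this.send(this.selectedDisplay, { type: "hands", left, right });
    });
  }
}
```

The design choice this sketch tries to capture is that screen selection (pointing) and manipulation (bimanual gestures) are separate input channels, which is what lets a single wearable rig address any number of displays without per-room instrumentation.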
When testing Datamancer with a senior analyst from a transportation laboratory, we discovered how it could transform their work. The analyst described a typical scenario: standing in a room with six large displays showing traffic patterns, urban planning models, and transportation data. Currently, they control these displays by awkwardly using a wireless mouse on their thigh as they walk around the room presenting to clients.
“It would be very impressive if I could walk into a room where you’ve got 20 people with laptops and all of a sudden I’m able to throw things around onto their machines,” the analyst told us. “I think the wearable device could be used for shocking [people] on a big presentation, just to do impressive fast things.”
Beyond these dramatic presentations, Datamancer also solves practical problems. The analyst described the “catastrophic” situation when a computer doesn’t boot, making it extremely difficult to rearrange content across the remaining screens. Since Datamancer works with any web-connected display, it provides flexibility when technical problems arise.
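This post does not spell out the wire protocol, so the following is only a guess at what relaying a cross-display transfer over the web might look like; the endpoint URL and message schema are made-up placeholders, not Datamancer’s actual protocol.

```typescript
// Hypothetical relay for a pinch-and-release transfer between two
// web-connected displays. The server URL and message format below are
// illustrative assumptions only.

const relay = new WebSocket("wss://example.org/rooms/demo");

function transferContent(fromDisplay: string, toDisplay: string, contentId: string): void {
  relay.send(
    JSON.stringify({ type: "transfer", from: fromDisplay, to: toDisplay, contentId }),
  );
}

// Each display's browser page would listen for such messages and adopt the
// referenced content when it is the addressee — which is also why content
// can be rerouted when one machine fails to boot.
```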
Our user study, conducted in a controlled laboratory setting with 12 participants, revealed that people quickly adapt to using Datamancer. One participant noted, “It’s pretty smooth actually, after a few minutes of human calibration.” Another said, “I thought it was intuitive, like actually pointing the button to select a screen felt awesome as an idea.”
While we originally designed Datamancer for data analytics, participants suggested many other applications: in classrooms, laboratories where analysts wear gloves, or collaborative business settings. As one participant explained, “You’re not stuck at a desk or at a podium trying to move things; you can actually walk around, which makes people more engaging.”
By bringing gestural control to everyday multi-display environments, we’re working to make the fictional worlds of science fiction movies a reality. No longer just for Hollywood’s Tony Stark or Tom Cruise, now anyone can become a Datamancer, commanding screens with the wave of a hand.
This research will be presented at the upcoming ACM CHI 2025 conference at the end of April in Yokohama, Japan.
Reference
- Biswaksen Patnaik, Marcel Borowski, Huaishu Peng, Clemens N. Klokmose, and Niklas Elmqvist (2025). “Datamancer: Bimanual Gesture Interaction in Multi-Display Ubiquitous Analytics Environments.” In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2025), April 26–May 1, 2025, Yokohama, Japan. [PDF]