From Systems to Studies
A Pragmatic Approach to Publishing HCI Research
As someone who both mentors Ph.D. students in my lab and often visits other research groups, I’ve noticed a recurring pattern: young researchers struggling to publish papers about novel systems and techniques they’ve developed. This challenge is particularly acute in Human-Computer Interaction (HCI) and data visualization, fields where system papers face unique hurdles. Over years of experience — and yes, numerous rejections — I’ve found a pragmatic strategy that helps researchers navigate this difficult publication landscape.
The difficulties of publishing system papers in HCI have been well documented. Henry Lieberman discussed “The Tyranny of Evaluation,” while Saul Greenberg and Bill Buxton explored this theme in their paper “Usability Evaluation Considered Harmful (Some of the Time).” James Landay’s blog post “I give up on CHI/UIST” further highlighted these challenges.
The fundamental problem is one of return on investment. Building a novel system requires substantial time and engineering effort, yet the path to publication remains treacherous. Reviewers often approach system papers with heightened skepticism, particularly in saturated research areas. Even when the underlying idea is elegant in its simplicity, reviewers can easily dismiss it for lacking sufficient scientific contribution or presenting unconvincing results. In fact, simplicity can be particularly provocative for certain kinds of reviewers.
Through years of hard-won experience, I’ve found an effective approach to this dilemma: instead of positioning your work as a system paper, reformulate it as an empirical exploration of the problem space your system addresses.
This approach involves a shift in framing:
- Rather than claiming your system as the primary contribution, present it as one possible solution to a broader research question;
- Focus on the empirical investigation of the problem space; and
- Use your system as a vehicle for understanding the challenges and opportunities within that space.
This reframing accomplishes several things simultaneously. First, it reduces the pressure on your system to be perfect or revolutionary. When your system is positioned as an exemplar rather than the main contribution, reviewers are less likely to fixate on its limitations.
Second, it eliminates the “horse race” mentality that often plagues system or technique evaluations. You’re no longer putting forth your solution as the best approach — instead, you’re using it to generate insights about the problem space itself. This shifts the conversation from “Is this the best possible solution?” to “What can we learn about this problem through this implementation?”
To transform your system paper into an empirical study:
- Start with the problem space rather than your solution;
- Frame your research questions around understanding the space rather than validating your approach;
- Design your evaluation to generate insights about the problem domain;
- Present your system as one way to probe these questions;
- Focus on what you learned about the problem space through building and testing your system; and
- Rename your paper from the typical system format (“<System Name>: <Scientific Claim>”) to something like “Evaluating <Scientific Claim>”.
While this strategy might seem like a compromise, or even a defeat (“my paper should be strong enough to stand on its own!”), I believe it is actually a more mature approach to research. It acknowledges that research advances not just through new solutions, but through deeper understanding of the problems we face. By reframing system papers as empirical investigations, we can contribute meaningful insights while working within the constraints of our academic publication system.
For Ph.D. students and researchers facing the system paper challenge, this approach offers a practical path forward. It’s not about abandoning innovation in system design — it’s about finding more effective ways to communicate that innovation to our academic community.