When the Question Is the Answer
The Role of Visualization in Human-Centered AI
The late Georges Grinstein (UMass Lowell) once told me that the only “magic” in visualization is in the formulation of questions, not their answering. In other words, he said, the moment you know the question to ask of your data, visualization ceases to be vital because that is the precise moment you can write an automatic query that can answer the question. While this is a somewhat provocative statement — Georges liked to be provocative — I think there is a lot of truth in it.
Put differently, visualization is a prime vehicle for exploratory data analysis (EDA), a data-driven, bottom-up approach where the primary goal is to generate hypotheses rather than test them. Contrast this with confirmatory data analysis, embodied in most traditional mathematical and statistical methods, where the goal is to answer a question you have already formulated. Confirmation is even central to the scientific method!
Obviously, in the process of generating a hypothesis during EDA, you also tend to test it, but that is beside the point. The exploratory approach to data analysis emphasizes that hypothesis generation is the key intellectual contribution when dealing with an unfamiliar dataset. In other words, knowing which questions to ask is half the battle. In fact, given a reliable query mechanism, it is most of the battle. You don’t need an interactive visualization if all you need is the answer to the right question.
Interestingly, in this day and age of human-centered AI tools and generative AI models, I am seeing new parallels to this discussion. You could argue that when combining LLMs and data visualization, an increasingly popular idea within the visualization research community, the key insight is that the two can now complement each other. In other words, if visualization is about formulating questions you did not know you had prior to seeing the data, then an LLM can help you turn those questions into formal, answerable queries.
Generating and Formulating Questions
As visualization practitioners, we’ve long known the power of visual exploration in uncovering patterns and generating hypotheses. However, there’s often a gap between these visual insights and their formal articulation. We frequently find ourselves reusing the same visualizations for new datasets, relying on our visual intuition rather than formalizing the queries that led to our discoveries.
This is where Large Language Models (LLMs) enter the picture: not as replacements for visualization or human insight, but as powerful human-centered AI (HCAI) tools for amplifying our analytical capabilities.
LLMs offer a unique opportunity to bridge the gap between visual insight and formal query. They provide a medium for articulating and structuring the knowledge gained from visualization. In essence, LLMs can help us translate our visual hunches into concrete, reusable queries.
This process of formalization serves several crucial functions:
- It makes our insights more reproducible and shareable.
- It allows for easier application of discovered patterns to new datasets.
- It builds a body of structured knowledge about our data over time.
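To make this concrete, here is a minimal sketch in Python with pandas. Everything specific in it is a placeholder: the `ask_llm` helper stands in for whichever model you call, and the `sales.csv` file and its columns are invented for illustration. The point is only the shape of the workflow, in which a visual hunch ends up as a named, reusable query rather than a one-off chart.

```python
import pandas as pd

def ask_llm(prompt: str) -> str:
    """Placeholder for whatever LLM endpoint you use (hosted API, local model, etc.)."""
    return ""  # substitute a real call to your model of choice

# Hypothetical dataset and columns.
df = pd.read_csv("sales.csv", parse_dates=["date"])

# 1. Visual exploration: a quick overview plot surfaces a hunch.
df.set_index("date")["revenue"].plot()  # "revenue seems to dip every February"

# 2. Ask the LLM to translate the hunch into code, returned as text you review.
hunch = "Monthly revenue appears to dip every February."
suggestion = ask_llm(
    f"Express this observation as a pandas query over columns {list(df.columns)}: {hunch}"
)

# 3. After review, the formalized insight is kept as a named, reusable function
#    that can be applied to next year's data without redoing the exploration.
def february_dip(frame: pd.DataFrame) -> pd.Series:
    """Mean revenue per calendar month, so February can be compared to the rest."""
    return frame.groupby(frame["date"].dt.month)["revenue"].mean()

print(february_dip(df))
```

The value is not in this particular query but in the fact that, once written down, `february_dip` can be shared, versioned, and rerun on new data, which is exactly the formalization the list above describes.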
Overview First, Formulate a Question, Iterate
Imagine a workflow where we begin with visual exploration, using our innate pattern recognition abilities to uncover interesting features in our data. We then use an LLM to help us articulate what we’ve observed, turning our visual insights into formal queries.
The LLM doesn’t stop there. Based on these formalized insights, it can suggest related queries or perspectives we might not have considered. This prompts us to return to our visualizations, exploring these new angles and validating our formalized understanding.
This iterative process combines the intuitive strength of visualization with the precision of formal queries, all amplified by the associative power of LLMs.
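As a rough sketch of that loop, again assuming a hypothetical `ask_llm` helper and invented insight text: the LLM only proposes directions, and every suggestion routes back through a visualization and a human judgment before anything is added to the record.

```python
def ask_llm(prompt: str) -> list[str]:
    """Placeholder: returns a short list of follow-up questions from your model of choice."""
    return []  # substitute a real call

# Insights that have already survived visual inspection (illustrative example text).
formalized_insights = ["Revenue dips every February in the retail segment."]

for _ in range(3):  # a fixed budget keeps the loop bounded
    # The LLM proposes related questions based on what has been formalized so far.
    followups = ask_llm(
        "Given these confirmed observations, suggest related questions worth "
        f"visualizing next: {formalized_insights}"
    )
    if not followups:
        break
    for question in followups:
        # Each suggestion sends us back to the charts, not straight to an answer:
        # we plot, judge whether the pattern is real, and only then formalize it.
        print(f"Explore visually: {question}")
    # formalized_insights.append(...)  # append only what the visual check confirms
```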
Crucially, this approach doesn’t diminish the role of visualization. We’re not replacing the “magic” of visualization in formulating questions; we’re providing a tool to capture and extend that magic. Visualization remains our primary means of discovery, our way of seeing the questions we didn’t know to ask. LLMs become tools for articulating those questions, pushing our explorations further, and creating a lasting, formal record of our insights.
The Future of HCAI in Visualization
As we move forward in this era of human-centered AI, the synergy between visualization and LLMs opens up exciting possibilities. We’re not just answering questions faster; we’re discovering better questions to ask. We’re not replacing human insight; we’re amplifying our ability to generate insights.
The future of data analysis lies not in AI replacing human-driven visualization, but in AI tools that enhance our ability to see, understand, and formally capture the insights hidden in our data. In this future, the magic of visualization isn’t diminished — it’s magnified, formalized, and made more powerful than ever before.