Honest Data Visualization

Project goals
Data is increasingly available and is commonly visualized through charts, graphs, and diagrams. However, visualizations can deceive their audience, whether intentionally or unintentionally. Designers have a responsibility to communicate data honestly without misleading the audience, and analysts have a responsibility to interpret data correctly, without bias.
The goal of this work is to maximize the truthfulness of data visualization, from the perspectives of both the visualization creator and the audience.
Project overview
I met Dr. Emily Wall at Georgia Tech when she was a PhD student working on practices for communicating data visualizations truthfully and effectively. In present-day data analysis, machine learning can benefit many problems, but humans are ultimately the decision makers in most domains. Cognitive limitations like biases can affect important decisions; one example is the wrongful arrest of Brandon Mayfield after the 2004 Madrid train bombing, attributed in part to confirmation bias. Her work captures cognitive limitations (e.g., biases and beliefs) through interaction data to support better decision-making. I am collaborating with her group to design strategies to capture and mitigate those limitations.
User interaction patterns can indicate conscious and unconscious biases: for example, a job recruiter viewing only male candidates' profiles on LinkedIn. The group developed models that detect and measure bias from many such interaction patterns. Now, as a designer, how can we take those bias metrics and make them usable in a visual analytics system? Specifically,

How do we design an interface that optimally mitigates the captured bias?

We derived a design space of 8 dimensions that can be manipulated to impact a user's cognitive and analytic processes. To illustrate potential bias mitigation strategies, I designed an example system called fetch.data: a hypothetical dashboard that job recruiters use to screen candidate resumes for hiring.
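Before the dashboard can intervene, it needs a quantitative bias signal from the interaction log. Here is a minimal sketch of one way such a metric could be computed; the types, names, and formula are my own illustration, not the group's actual model. It compares the distribution of a recruiter's interactions over an attribute (gender, in this case) against that attribute's distribution in the full candidate pool.

```ts
// Hypothetical types and metric, illustrating the idea of comparing a
// user's interaction distribution against the underlying data distribution.
interface Candidate {
  id: string;
  gender: "female" | "male" | "nonbinary";
}

interface InteractionEvent {
  candidateId: string; // which candidate's profile was opened
  timestamp: number;
}

// Attribute-distribution bias: total variation distance between the share
// of interactions per gender and the share of candidates per gender.
// 0 = interactions mirror the data; 1 = maximally skewed.
function attributeBias(pool: Candidate[], log: InteractionEvent[]): number {
  if (pool.length === 0 || log.length === 0) return 0;

  const byId = new Map<string, Candidate>();
  for (const c of pool) byId.set(c.id, c);

  const dataDist = new Map<string, number>();
  const interactDist = new Map<string, number>();

  for (const c of pool) {
    dataDist.set(c.gender, (dataDist.get(c.gender) ?? 0) + 1 / pool.length);
  }
  for (const e of log) {
    const c = byId.get(e.candidateId);
    if (!c) continue; // skip events that reference unknown candidates
    interactDist.set(c.gender, (interactDist.get(c.gender) ?? 0) + 1 / log.length);
  }

  let tv = 0;
  for (const [gender, p] of dataDist) {
    tv += Math.abs(p - (interactDist.get(gender) ?? 0));
  }
  return tv / 2;
}
```

The group's actual models are richer than this single number, but the core comparison, interaction behavior versus the data itself, is the same.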

Intervention strategies for bias mitigation:
1. Where and when do we present the bias metrics? [B.2: on-demand view of bias metrics]
2. How do we promote users' awareness so that they reflect on their own interactions? [A.1: increase the size of the data points that have not been interacted with]
3. How can we change the direction of their interaction upon detecting bias? [B.1: if the recruiter interacts with only one gender, disable the gender filter; see the sketch after this list]
4. What should be the degree of guidance to mitigate bias? [D: recommend candidates that have been excluded]
5. How do we collect user feedback to inform the model? [E: pop-up allowing recruiter to provide feedback while dismissing a notification]
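To make a few of these concrete, here is a minimal sketch of how the dashboard might wire the bias metric above to interventions A.1, B.1, and B.2, reusing the types from the earlier sketch. The threshold value, the DashboardUI handle, and the function names are hypothetical illustrations, not the fetch.data implementation.

```ts
// Hypothetical wiring of the metric to interventions; the threshold and
// UI handles are illustrative, not the fetch.data implementation.
const BIAS_THRESHOLD = 0.4; // would need to be tuned empirically

interface DashboardUI {
  setGenderFilterEnabled(enabled: boolean): void;          // B.1
  setPointScale(candidateId: string, scale: number): void; // A.1
  showBiasBadge(score: number): void;                      // B.2
}

function applyInterventions(
  pool: Candidate[],
  log: InteractionEvent[],
  ui: DashboardUI
): void {
  const score = attributeBias(pool, log);

  // B.2: surface the current bias metric where the user can inspect it.
  ui.showBiasBadge(score);

  // B.1: once interactions skew past the threshold, lock the gender
  // filter so the recruiter cannot narrow the view further.
  ui.setGenderFilterEnabled(score < BIAS_THRESHOLD);

  // A.1: enlarge the data points the recruiter has not touched yet,
  // nudging attention toward overlooked candidates.
  const seen = new Set(log.map((e) => e.candidateId));
  for (const c of pool) {
    ui.setPointScale(c.id, seen.has(c.id) ? 1 : 1.5);
  }
}
```

Running this on every new interaction event keeps the interventions in step with the recruiter's behavior; in a real system the threshold and nudges would be calibrated through user studies.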
Mitigating Bias
UX Designer
Aug 2019 — Dec 2019
fetch.data
A belief is an idea or feeling about the truth of a statement. In the context of data, beliefs can take the form of a hypothesis about a trend in the data. Eliciting a person's beliefs about data is an increasingly common practice in contexts such as data journalism, where recent studies have shown that eliciting beliefs leads to greater engagement with, reflection on, and memorability of the data. Belief elicitation may also be useful in a number of other contexts. For instance, in data analysis, a system that is aware of a person's prior beliefs about the data may present things in a different order or format to mitigate the likelihood that the person misinterprets the data (e.g., due to confirmation bias around their existing beliefs).
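As a minimal sketch of what elicitation could look like in code, the flow below asks for a prediction before revealing the data, then stores the gap between belief and observation so that downstream views can account for it. The data shapes and names here are my own illustration, not the framework from our paper.

```ts
// Illustrative belief-elicitation flow; not the paper's framework.
interface ElicitedBelief {
  question: string;  // e.g. "How did median rent change last year?"
  predicted: number; // the user's prior estimate
  observed: number;  // the value actually present in the data
  error: number;     // signed gap: observed - predicted
}

function elicitBelief(
  question: string,
  askUser: (q: string) => number, // UI prompt, e.g. a slider or sketch input
  observed: number
): ElicitedBelief {
  // 1. Elicit the prior BEFORE showing the data, so the answer is not
  //    anchored by the visualization itself.
  const predicted = askUser(question);

  // 2. Record the belief alongside the observed value; a large gap can
  //    trigger extra framing to counter confirmation bias.
  return { question, predicted, observed, error: observed - predicted };
}
```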
Belief elicitation
Design researcher
Aug 2020 — Present
I lead this project in collaboration with Prof. Emily Wall (Emory University) and Prof. Yea-seul Kim (UW-Madison); we derived a design framework for belief elicitation to assist visualization creators, such as designers and data journalists, in their design process. Our paper was accepted at EuroVis 2022, and I presented it in Rome in July.
Here's a simple example constructed by following this framework.