Paper Title

The Risks of Ranking: Revisiting Graphical Perception to Model Individual Differences in Visualization Performance

Authors

Russell Davis, Xiaoying Pu, Yiren Ding, Brian D. Hall, Karen Bonilla, Mi Feng, Matthew Kay, Lane Harrison

Abstract

Graphical perception studies typically measure visualization encoding effectiveness using the error of an "average observer", leading to canonical rankings of encodings for numerical attributes: e.g., position > area > angle > volume. Yet different people may vary in their ability to read different visualization types, leading to variance in this ranking across individuals not captured by population-level metrics using "average observer" models. One way we can bridge this gap is by recasting classic visual perception tasks as tools for assessing individual performance, in addition to overall visualization performance. In this paper we replicate and extend Cleveland and McGill's graphical comparison experiment using Bayesian multilevel regression, using these models to explore individual differences in visualization skill from multiple perspectives. The results from experiments and modeling indicate that some people show patterns of accuracy that credibly deviate from the canonical rankings of visualization effectiveness. We discuss implications of these findings, such as a need for new ways to communicate visualization effectiveness to designers, how patterns in individuals' responses may show systematic biases and strategies in visualization judgment, and how recasting classic visual perception tasks as tools for assessing individual performance may offer new ways to quantify aspects of visualization literacy. Experiment data, source code, and analysis scripts are available at the following repository: https://osf.io/8ub7t/?view_only=9be4798797404a4397be3c6fc2a68cc0.
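To make the modeling approach concrete, below is a minimal sketch of a Bayesian multilevel regression with per-participant, per-encoding effects, in the spirit of the abstract's description. It uses Cleveland and McGill's log-absolute-error measure, log2(|judged − true| + 1/8). The priors, simulated data, and variable names are illustrative assumptions, not the authors' released analysis scripts (those are in the OSF repository above).

```python
# Hedged sketch: hierarchical model of graphical-perception error with
# participant-by-encoding offsets. Assumes PyMC >= 5; all data are simulated.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_participants, n_encodings, n_trials = 20, 4, 30
n_obs = n_participants * n_trials

# Simulated trial-level data: participant id, encoding id, true and judged
# percentages for each graphical comparison judgment.
pid = rng.integers(0, n_participants, n_obs)
enc = rng.integers(0, n_encodings, n_obs)
true_pct = rng.uniform(10, 90, n_obs)
judged_pct = true_pct + rng.normal(0, 5, n_obs)

# Cleveland-McGill error: log2(|judged - true| + 1/8).
log_error = np.log2(np.abs(judged_pct - true_pct) + 1 / 8)

with pm.Model() as model:
    # Population-level mean error per encoding (the "average observer"
    # that produces the canonical ranking).
    mu_enc = pm.Normal("mu_enc", mu=0.0, sigma=2.0, shape=n_encodings)

    # Participant-by-encoding offsets capture individual deviations
    # from the population-level ranking.
    sigma_p = pm.HalfNormal("sigma_p", sigma=1.0)
    offset = pm.Normal("offset", mu=0.0, sigma=sigma_p,
                       shape=(n_participants, n_encodings))

    sigma = pm.HalfNormal("sigma", sigma=1.0)
    pm.Normal("obs", mu=mu_enc[enc] + offset[pid, enc],
              sigma=sigma, observed=log_error)

    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)
```

Posterior contrasts of the `offset` parameters across encodings then indicate which participants credibly deviate from the canonical ranking, which is the kind of individual-level question the paper's models are built to answer.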
