Paper Title

Micro-entries: Encouraging Deeper Evaluation of Mental Models Over Time for Interactive Data Systems

Authors

Block, Jeremy E., Ragan, Eric D.

Abstract

Many interactive data systems combine visual representations of data with embedded algorithmic support for automation and data exploration. To effectively support transparent and explainable data systems, it is important for researchers and designers to know how users understand the system. We discuss the evaluation of users' mental models of system logic. Mental models are challenging to capture and analyze. While common evaluation methods aim to approximate the user's final mental model after a period of system usage, user understanding continuously evolves as users interact with a system over time. In this paper, we review many common mental model measurement techniques, discuss tradeoffs, and recommend methods for deeper, more meaningful evaluation of mental models when using interactive data analysis and visualization systems. We present guidelines for evaluating mental models over time that reveal the evolution of specific model updates and how they may map to the particular use of interface features and data queries. By asking users to describe what they know and how they know it, researchers can collect structured, time-ordered insight into a user's conceptualization process while also helping guide users to their own discoveries.
