Paper Title
Analyzing 3D Volume Segmentation by Low-level Perceptual Cues, High-level Cognitive Tasks, and Decision-making Processes
Paper Authors
Paper Abstract
3D volume segmentation is a fundamental task in many scientific and medical applications. Producing accurate segmentations efficiently is challenging, in part due to low imaging data quality (e.g., noise and low image resolution) and ambiguity in the data that can only be resolved with higher-level knowledge of the structure. Automatic algorithms do exist, but they fail in many use cases. The gold standard remains manual segmentation or review. Unfortunately, even for an expert, manual segmentation is laborious, time-consuming, and prone to errors. Existing 3D segmentation tools are often designed around the underlying algorithm and do not take into account human mental models, lower-level perceptual abilities, and higher-level cognitive tasks. Our goal is to analyze manual segmentation using the critical decision method (CDM) in order to gain a better understanding of the low-level (perceptual and marking) actions and higher-level decision-making processes that segmenters use. A key challenge we faced is that decision-making consists of an accumulated set of low-level visual-spatial decisions that are inter-related and difficult to articulate verbally. To address this, we developed a novel hybrid protocol that integrates CDM with eye tracking, observation, and targeted questions. In this paper, we develop and validate data coding schemes for this hybrid data set that discern segmenters' low-level actions, higher-level cognitive tasks, overall task structures, and decision-making processes. We successfully detected changes in visual processing based on task sequences and micro-decisions reflected in the eye-gaze data, and identified distinct segmentation decision strategies used by the segmenters.