Paper Title
Cross-Task Representation Learning for Anatomical Landmark Detection
Paper Authors
Paper Abstract
Recently, there has been an increasing demand for automatically detecting anatomical landmarks, which provide rich structural information to facilitate subsequent medical image analysis. Current methods for this task often leverage the power of deep neural networks, while a major challenge in fine-tuning such models for medical applications arises from the insufficient number of labeled samples. To address this, we propose to regularize the knowledge transfer across source and target tasks through cross-task representation learning. The proposed method is demonstrated for extracting facial anatomical landmarks, which facilitate the diagnosis of fetal alcohol syndrome. The source and target tasks in this work are face recognition and landmark detection, respectively. The main idea of the proposed method is to retain the feature representations of the source model on the target-task data and to leverage them as an additional source of supervisory signals for regularizing the learning of the target model, thereby improving its performance under limited training samples. Concretely, we present two approaches for the proposed representation learning, constraining either the final or the intermediate features of the target model. Experimental results on a clinical face image dataset demonstrate that the proposed approach works well with few labeled samples and outperforms the compared approaches.
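To illustrate the core idea described in the abstract, the following is a minimal sketch of a cross-task feature-retention loss in PyTorch. It is an assumption of how such a regularizer could be wired up, not the authors' exact implementation: the model interfaces, the choice of MSE for both terms, and the weighting factor `lam` are hypothetical.

```python
# Minimal sketch (assumed PyTorch). `target_model`, `source_model`, and `lam`
# are illustrative names/values, not the authors' exact setup.
import torch
import torch.nn as nn

def cross_task_loss(target_model, source_model, images, gt_heatmaps, lam=0.1):
    """Landmark-detection loss plus a feature-retention regularizer.

    Assumes `target_model(images)` returns (predicted_heatmaps, features) and
    `source_model` is the frozen face-recognition network whose features on
    the same target-task images serve as extra supervisory signals.
    """
    pred_heatmaps, target_feats = target_model(images)

    with torch.no_grad():                    # the source model stays fixed
        source_feats = source_model(images)  # representations to be retained

    task_loss = nn.functional.mse_loss(pred_heatmaps, gt_heatmaps)
    # Constrain the target model's features to stay close to the source
    # representations; shown here for one intermediate layer, and the
    # final-feature variant mentioned in the abstract would be analogous.
    reg_loss = nn.functional.mse_loss(target_feats, source_feats)
    return task_loss + lam * reg_loss
```

In this reading, the regularization term discourages the fine-tuned detector from drifting away from the representations learned on the large-scale source task, which is what allows training to remain stable with few labeled landmark annotations.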