Paper Title

Deep Convolutional Neural Networks with Spatial Regularization, Volume and Star-shape Priori for Image Segmentation

Authors

Jun Liu, Xiangyue Wang, Xue-Cheng Tai

Abstract

We use Deep Convolutional Neural Networks (DCNNs) for image segmentation problems. DCNNs can extract features from natural images well. However, the classification functions in the existing network architectures of DCNNs are simple and lack the capability to handle important spatial information in the way that many well-known traditional variational models do. Priors such as spatial regularity, volume constraints and object shape cannot be handled well by existing DCNNs. We propose a novel Soft Threshold Dynamics (STD) framework which can easily integrate many spatial priors of classical variational models into DCNNs for image segmentation. The novelty of our method is to interpret the softmax activation function as a dual variable in a variational problem, so that many spatial priors can be imposed in the dual space. From this viewpoint, we can build an STD-based framework which enables the outputs of DCNNs to satisfy many special priors such as spatial regularity, volume constraints and the star-shape prior. The proposed method is a general mathematical framework and can be applied to any semantic segmentation DCNN. To show the efficiency and accuracy of our method, we apply it to the popular DeepLabV3+ image segmentation network, and the experimental results show that our method works effectively on data-driven image segmentation DCNNs.
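
The following is a minimal sketch of the dual-variable viewpoint described in the abstract; the symbols $o$, $u$, $\lambda$ and the generic regularizer $\mathcal{R}$ are illustrative notation, not taken from the paper. For pixelwise network logits $o(x) = (o_1(x), \dots, o_K(x))$ over $K$ classes, the softmax output

$$u_k(x) = \frac{e^{o_k(x)}}{\sum_{j=1}^{K} e^{o_j(x)}}$$

is exactly the minimizer of an entropy-regularized linear objective over the unit simplex $\Delta_K = \{v \in \mathbb{R}^K : v_k \ge 0,\ \sum_k v_k = 1\}$:

$$u = \arg\min_{u(x) \in \Delta_K} \int_\Omega \big( -\langle o(x), u(x) \rangle + \langle u(x), \ln u(x) \rangle \big)\, dx.$$

Adding a spatial prior term $\lambda\, \mathcal{R}(u)$ (for instance a threshold-dynamics perimeter term, a volume constraint, or a star-shape penalty) to this objective and linearizing $\mathcal{R}$ at the current estimate $u^t$ yields an update that is again a softmax, now of shifted logits:

$$u^{t+1}(x) = \operatorname{softmax}\!\big( o(x) - \lambda\, \nabla\mathcal{R}(u^t)(x) \big),$$

so, under these assumptions, a prior can be imposed by replacing the final softmax layer with a few such iterations.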
