Title
Tensor train rank minimization with nonlocal self-similarity for tensor completion
Authors
Abstract
The tensor train (TT) rank has received increasing attention in tensor completion due to its ability to capture the global correlation of high-order tensors ($\textrm{order} > 3$). For third-order visual data, direct TT rank minimization does not exploit the potential of the TT rank for high-order tensors. TT rank minimization accompanied by \emph{ket augmentation}, which transforms a lower-order tensor (e.g., visual data) into a higher-order tensor, suffers from serious block artifacts. To tackle this issue, we propose TT rank minimization with nonlocal self-similarity for tensor completion, which simultaneously explores the spatial, temporal/spectral, and nonlocal redundancy in visual data. More precisely, TT rank minimization is performed on a higher-order tensor, called a group, formed by stacking similar cubes, which naturally and fully takes advantage of the ability of the TT rank to model high-order tensors. Moreover, a perturbation analysis of the TT low-rankness of each group is established. We develop an alternating direction method of multipliers tailored to the specific structure of the proposed model. Extensive experiments demonstrate that the proposed method is superior to several existing state-of-the-art methods in terms of both qualitative and quantitative measures.