Title
A quadratically convergent proximal algorithm for nonnegative tensor decomposition
Authors
Abstract
The decomposition of tensors into simple rank-1 terms is key in a variety of applications in signal processing, data analysis, and machine learning. While this canonical polyadic decomposition (CPD) is unique under mild conditions, including prior knowledge such as nonnegativity can facilitate interpretation of the components. Inspired by the effectiveness and efficiency of Gauss-Newton (GN) for unconstrained CPD, we derive a proximal, semismooth GN-type algorithm for nonnegative tensor factorization. If the algorithm converges to the global optimum, we show that $Q$-quadratic convergence can be obtained in the exact case. Global convergence is achieved via backtracking on the forward-backward envelope function. The $Q$-quadratic convergence is verified experimentally, and we illustrate that using the GN step significantly reduces the number of (expensive) gradient computations compared to proximal gradient descent.
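To make the baseline concrete, the following is a minimal sketch of the proximal-gradient-descent reference method the abstract compares against: alternating projected gradient steps for a rank-R nonnegative CPD of a 3-way tensor, where the prox of the nonnegativity constraint is elementwise clipping at zero. This is a generic illustration in NumPy, not the paper's proximal semismooth GN algorithm; the function names, step-size rule, and unfolding conventions are our own choices.

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product of X (m, R) and Y (n, R) -> (m*n, R)."""
    R = X.shape[1]
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, R)

def nncpd_pgd(T, R, sweeps=1000, seed=0):
    """Rank-R nonnegative CPD of a 3-way tensor by alternating projected
    gradient steps (a simple proximal-gradient baseline, not the GN method)."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A, B, C = (rng.random((n, R)) for n in (I, J, K))
    # Mode-n unfoldings, ordered to match the khatri_rao row ordering above.
    T1 = T.reshape(I, -1)
    T2 = np.moveaxis(T, 1, 0).reshape(J, -1)
    T3 = np.moveaxis(T, 2, 0).reshape(K, -1)
    for _ in range(sweeps):
        for Tn, idx in ((T1, 0), (T2, 1), (T3, 2)):
            facs = [A, B, C]
            KR = khatri_rao(*[f for i, f in enumerate(facs) if i != idx])
            G = KR.T @ KR
            L = np.linalg.norm(G, 2)            # Lipschitz constant of this block
            F = facs[idx]
            grad = F @ G - Tn @ KR              # gradient of 0.5*||Tn - F KR^T||^2
            facs[idx] = np.clip(F - grad / L, 0.0, None)  # prox = projection onto >= 0
            A, B, C = facs
    return A, B, C
```

Each factor update is one proximal gradient step with step size 1/L; the abstract's point is that a GN-type step needs far fewer such (expensive) gradient evaluations than this first-order scheme.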