Paper Title

When geometry meets optimization theory: partially orthogonal tensors

Paper Authors

Ye, Ke; Hu, Shenglong

Paper Abstract

Due to the multi-linearity of tensors, most algorithms for tensor optimization problems are designed based on the block coordinate descent method. Such algorithms are widely employed by practitioners for their implementability and effectiveness. However, these algorithms usually lack theoretical guarantees of global convergence and convergence rate analysis. In this paper, we propose a block coordinate descent type algorithm for the low rank partially orthogonal tensor approximation problem and analyse its convergence behaviour. To achieve this, we carefully investigate the variety of low rank partially orthogonal tensors and its geometric properties related to the parameter space, which enable us to locate KKT points of the concerned optimization problem. With the aid of these geometric properties, we prove without any assumption that: (1) Our algorithm converges globally to a KKT point; (2) For any given tensor, the algorithm exhibits an overall sublinear convergence with an explicit rate which is sharper than the usual $O(1/k)$ for first-order methods in nonconvex optimization; (3) For a generic tensor, our algorithm converges $R$-linearly.
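
For context, one common way to state the low rank partially orthogonal tensor approximation problem (our paraphrase of the standard formulation, not a quotation from the paper): given $\mathcal{A} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ and a target rank $r$,

$$\min_{\sigma_i,\, \mathbf{u}_i^{(k)}} \Big\| \mathcal{A} - \sum_{i=1}^{r} \sigma_i\, \mathbf{u}_i^{(1)} \otimes \cdots \otimes \mathbf{u}_i^{(d)} \Big\|_F^2 \quad \text{s.t.}\ \|\mathbf{u}_i^{(k)}\| = 1\ \text{for all}\ k,\ \ \langle \mathbf{u}_i^{(k)}, \mathbf{u}_j^{(k)} \rangle = 0\ (i \neq j)\ \text{for}\ k \in \Omega,$$

where $\Omega$ is the subset of modes on which orthogonality is imposed (hence "partially" orthogonal).

The sketch below illustrates one block coordinate descent iteration of this type for a third-order tensor with orthonormality imposed only on the last factor matrix ($\Omega = \{3\}$). It is a minimal NumPy sketch in the spirit of the abstract, not the authors' algorithm; the names po_bcd, unfold, and khatri_rao are ours. Each block update is an exact minimizer of its subproblem, so the objective is monotonically non-increasing.

import numpy as np

def unfold(T, mode):
    # Mode-`mode` unfolding: move that axis to the front, flatten the rest (C order).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker product; row (i, j) flattens to i * B.shape[0] + j,
    # matching the C-order flattening used in `unfold`.
    r = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

def po_bcd(T, r, iters=500, seed=0):
    # BCD for  min || T - sum_i sig_i u_i (x) v_i (x) w_i ||_F
    # with ||u_i|| = ||v_i|| = 1 and W = [w_1 ... w_r] orthonormal
    # (requires r <= min(n1, n2, n3) for the initialization below).
    rng = np.random.default_rng(seed)
    n1, n2, n3 = T.shape
    U = np.linalg.qr(rng.standard_normal((n1, r)))[0]
    V = np.linalg.qr(rng.standard_normal((n2, r)))[0]
    W = np.linalg.qr(rng.standard_normal((n3, r)))[0]
    sig = np.ones(r)
    for _ in range(iters):
        # U-block: W orthonormal and V unit-norm make khatri_rao(V, W) have
        # orthonormal columns, so the least-squares update is a single product.
        # Column norms are generically nonzero; an epsilon guard would harden this.
        M = unfold(T, 0) @ khatri_rao(V, W)
        sig = np.linalg.norm(M, axis=0)
        U = M / sig
        # V-block: same argument with khatri_rao(U, W).
        M = unfold(T, 1) @ khatri_rao(U, W)
        sig = np.linalg.norm(M, axis=0)
        V = M / sig
        # W-block: orthogonality-constrained Procrustes step; W is the polar
        # factor of M diag(sig), then sig_i = w_i^T m_i is the exact weight update.
        M = unfold(T, 2) @ khatri_rao(U, V)
        A, _, Bt = np.linalg.svd(M * sig, full_matrices=False)
        W = A @ Bt
        sig = np.einsum('ir,ir->r', W, M)
    return sig, U, V, W

A quick sanity check: build a synthetic target T = np.einsum('r,ir,jr,kr->ijk', sig0, U0, V0, W0) with an orthonormal W0 and unit-norm columns in U0 and V0, run po_bcd(T, r), and the relative residual between T and the reconstruction should drop toward zero as the iteration proceeds.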
