Paper title
Newton method for $\ell_0$-regularized optimization
Paper authors
Paper abstract
As a tractable approach, regularization is frequently adopted in sparse optimization. This gives rise to regularized optimization, which aims at minimizing the $\ell_0$ norm or one of its continuous surrogates, both of which characterize sparsity. Ranging from the continuity of the surrogates to the discreteness of the $\ell_0$ norm, the most challenging model is $\ell_0$-regularized optimization. To overcome this hardness, a vast body of work has been devoted to developing numerically effective methods. However, most of them only guarantee that either the (sub)sequence converges to a stationary point from the perspective of deterministic optimization, or the distance between each iterate and any given sparse reference point is bounded by an error bound in a probabilistic sense. In this paper, we develop a Newton-type method for $\ell_0$-regularized optimization and prove that the generated sequence converges to a stationary point globally and quadratically under standard assumptions, which theoretically explains why our method performs surprisingly well.
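For concreteness, a minimal statement of the model class the abstract refers to is the following, where $f$ denotes a smooth loss and $\lambda > 0$ a regularization parameter (notation introduced here for illustration, not taken from the paper):
$$
\min_{x \in \mathbb{R}^n} \; f(x) + \lambda \|x\|_0, \qquad \|x\|_0 := \bigl|\{\, i : x_i \neq 0 \,\}\bigr|.
$$
The discreteness mentioned above stems from $\|x\|_0$ counting the nonzero entries of $x$, which makes the objective discontinuous and the problem NP-hard in general, in contrast to models built on continuous surrogates such as the $\ell_1$ norm.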