Title

Fast OSCAR and OWL Regression via Safe Screening Rules

Authors

Runxue Bao, Bin Gu, Heng Huang

Abstract

Ordered Weighted $L_{1}$ (OWL) regularized regression is a new regression analysis for high-dimensional sparse learning. Proximal gradient methods are the standard approaches to solving OWL regression. However, solving OWL regression remains challenging due to its considerable computational cost and memory usage when the feature or sample size is large. In this paper, we propose the first safe screening rule for OWL regression, which overcomes the difficulty posed by the non-separable regularizer by iteratively exploring the order of the primal solution despite its unknown order structure. The rule effectively avoids updating parameters whose coefficients must be zero during the learning process. More importantly, the proposed screening rule can be easily applied to both standard and stochastic proximal gradient methods. Moreover, we prove that the algorithms equipped with our screening rule are guaranteed to produce results identical to those of the original algorithms. Experimental results on a variety of datasets show that our screening rule leads to significant computational gains without any loss of accuracy, compared to existing competitive algorithms.
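
For reference, the abstract does not restate the regularizer itself; the standard definition of the OWL penalty from the OWL/OSCAR literature is

```latex
% OWL regularizer: a weighted L1 norm applied to the sorted magnitudes
\Omega_w(\beta) = \sum_{i=1}^{p} w_i \, |\beta|_{[i]},
\qquad w_1 \ge w_2 \ge \cdots \ge w_p \ge 0,
```

where $|\beta|_{[i]}$ denotes the $i$-th largest absolute entry of $\beta$, and OSCAR is the special case $w_i = \lambda_1 + \lambda_2\,(p - i)$. This ranking of coefficients is exactly what makes the regularizer non-separable: the weight each coefficient receives depends on its rank among all coefficients.

To make the proximal step concrete, the sketch below implements the well-known OWL proximal operator (sort the magnitudes, shift by the ordered weights, project onto the nonincreasing cone via pool-adjacent-violators, clip at zero, unsort). This is background machinery from the OWL/SLOPE literature, not the paper's screening rule; the function name `prox_owl` and the demo values are illustrative assumptions, not taken from the source.

```python
import numpy as np

def prox_owl(v, w):
    """Proximal operator of the OWL regularizer (minimal sketch).

    v : point to prox, shape (p,)
    w : nonincreasing, nonnegative weights, shape (p,)
    Returns argmin_x 0.5 * ||x - v||^2 + sum_i w_i * |x|_[i].
    """
    abs_v = np.abs(v)
    order = np.argsort(-abs_v)        # permutation sorting |v| in decreasing order
    z = abs_v[order] - w              # shift sorted magnitudes by the ordered weights
    # Project z onto the nonincreasing cone with PAVA: merge adjacent blocks
    # whenever a later block's average exceeds an earlier block's average.
    sums, counts = [], []
    for zi in z:
        sums.append(zi)
        counts.append(1)
        while len(sums) > 1 and sums[-2] * counts[-1] < sums[-1] * counts[-2]:
            s, c = sums.pop(), counts.pop()
            sums[-1] += s
            counts[-1] += c
    # Expand the blocks, take the positive part, undo the sort, restore signs.
    proj = np.concatenate([np.full(c, max(s / c, 0.0)) for s, c in zip(sums, counts)])
    out = np.zeros_like(v, dtype=float)
    out[order] = proj
    return np.sign(v) * out

# Demo with OSCAR-style weights w_i = lam1 + lam2 * (p - i) (illustrative values).
p = 5
lam1, lam2 = 0.1, 0.05
w = lam1 + lam2 * (p - np.arange(1, p + 1))    # nonincreasing, largest weight first
v = np.array([2.0, -0.3, 1.98, 0.05, -2.02])
print(prox_owl(v, w))  # the three near-equal magnitudes tie; the smallest entry hits 0
```

One design note on where the speedup comes from: since the screening rule certifies coordinates that must be zero at the optimum, those coordinates can be dropped from the problem before each such prox and gradient step (roughly, keeping only the leading entries of $w$ for the surviving coefficients), which is consistent with the savings the abstract describes.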
