Paper Title


Elastic Gradient Descent, an Iterative Optimization Method Approximating the Solution Paths of the Elastic Net

Paper Authors

Oskar Allerbo, Johan Jonasson, Rebecka Jörnsten

Paper Abstract


The elastic net combines lasso and ridge regression to fuse the sparsity property of lasso with the grouping property of ridge regression. The connections between ridge regression and gradient descent and between lasso and forward stagewise regression have previously been shown. Similar to how the elastic net generalizes lasso and ridge regression, we introduce elastic gradient descent, a generalization of gradient descent and forward stagewise regression. We theoretically analyze elastic gradient descent and compare it to the elastic net and forward stagewise regression. Parts of the analysis are based on elastic gradient flow, a piecewise analytical construction, obtained for elastic gradient descent with infinitesimal step size. We also compare elastic gradient descent to the elastic net on real and simulated data and show that it provides similar solution paths, but is several orders of magnitude faster. Compared to forward stagewise regression, elastic gradient descent selects a model that, although still sparse, provides considerably lower prediction and estimation errors.
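For context, classical forward stagewise regression, one of the two methods the abstract says elastic gradient descent generalizes, repeatedly nudges the single coefficient most correlated with the current residual by a fixed small step. A minimal NumPy sketch on toy data (this illustrates plain forward stagewise regression only, not the paper's elastic gradient descent; the data and step size are made-up for illustration):

```python
import numpy as np

def forward_stagewise(X, y, step=0.01, n_steps=2000):
    # At each iteration, find the coordinate whose column is most
    # correlated with the current residual and move it by a fixed
    # amount in the direction of that correlation.
    beta = np.zeros(X.shape[1])
    for _ in range(n_steps):
        grad = X.T @ (y - X @ beta)       # correlation with residual
        j = int(np.argmax(np.abs(grad)))  # most correlated coordinate
        beta[j] += step * np.sign(grad[j])
    return beta

# Toy problem: only the first two of five coefficients are nonzero.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta_hat = forward_stagewise(X, y)
```

Because only one coordinate moves per step, coefficients of irrelevant features tend to stay near zero for much of the path, which is the sparsity-inducing behavior that links forward stagewise regression to the lasso in the abstract.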
