Paper Title
Soft Gradient Boosting Machine
Paper Authors
Paper Abstract
The Gradient Boosting Machine (GBM) has proven to be a successful function approximator and has been widely used in a variety of areas. However, since each base learner must be trained in sequential order, it is infeasible to parallelize the training process across base learners for speed-up. In addition, under online or incremental learning settings, GBMs achieve sub-optimal performance because previously trained base learners cannot adapt to the environment once trained. In this work, we propose the soft Gradient Boosting Machine (sGBM), which wires multiple differentiable base learners together; by injecting both local and global objectives inspired by gradient boosting, all base learners can then be jointly optimized with linear speed-up. When differentiable soft decision trees are used as base learners, such a device can be regarded as an alternative version of (hard) gradient boosting decision trees with extra benefits. Experimental results show that, given the same base learner, sGBM enjoys much higher time efficiency with better accuracy in both online and offline settings.
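To make the joint training scheme concrete, below is a minimal, hypothetical PyTorch sketch based only on the abstract; the class, function, and variable names (SoftGBM, sgbm_loss, make_mlp) are our own illustration, not the authors' reference implementation, and the exact objectives in the paper may differ. Each differentiable base learner fits a detached residual (the negative gradient of the squared loss with respect to the accumulated prediction of earlier learners), and a global objective is placed on the full additive output, so all learners train in one backward pass rather than sequentially.

```python
import torch
import torch.nn as nn


class SoftGBM(nn.Module):
    """Illustrative sketch of a soft Gradient Boosting Machine (sGBM)."""

    def __init__(self, make_learner, n_learners: int):
        super().__init__()
        # Any differentiable base learner works, e.g. a soft decision tree
        # or a small MLP; here make_learner is a user-supplied factory.
        self.learners = nn.ModuleList(make_learner() for _ in range(n_learners))

    def forward(self, x):
        # Additive ensemble: the prediction is the sum of all learner outputs.
        return sum(h(x) for h in self.learners)

    def sgbm_loss(self, x, y, criterion=nn.MSELoss()):
        local, cum = 0.0, None
        for h in self.learners:
            out = h(x)
            # Residual target: the negative gradient of the squared loss
            # w.r.t. the accumulated prediction of earlier learners.
            # Detached so each local objective trains only its own learner.
            residual = y if cum is None else (y - cum).detach()
            local = local + criterion(out, residual)
            cum = out if cum is None else cum + out
        # Global objective on the final additive ensemble output.
        return local + criterion(cum, y)


# Usage sketch: all base learners are updated in one joint optimizer step,
# instead of being fit one after another as in a hard GBM.
make_mlp = lambda: nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
model = SoftGBM(make_mlp, n_learners=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x, y = torch.randn(32, 8), torch.randn(32, 1)
opt.zero_grad()
model.sgbm_loss(x, y).backward()
opt.step()
```

Because the residual targets are detached, each local objective updates only its own learner, mimicking the stagewise fitting of gradient boosting while still letting all learners be optimized jointly, which is what enables the linear speed-up claimed in the abstract.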