Paper Title
Agnostic Learning of Halfspaces with Gradient Descent via Soft Margins
Paper Authors
Paper Abstract
We analyze the properties of gradient descent on convex surrogates for the zero-one loss for the agnostic learning of linear halfspaces. If $\mathsf{OPT}$ is the best classification error achieved by a halfspace, by appealing to the notion of soft margins we are able to show that gradient descent finds halfspaces with classification error $\tilde O(\mathsf{OPT}^{1/2}) + \varepsilon$ in $\mathrm{poly}(d,1/\varepsilon)$ time and sample complexity for a broad class of distributions that includes log-concave isotropic distributions as a subclass. Along the way we answer a question recently posed by Ji et al. (2020) on how the tail behavior of a loss function can affect sample complexity and runtime guarantees for gradient descent.
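To make the setting in the abstract concrete, here is a minimal sketch (not the paper's algorithm or analysis) of gradient descent on the logistic loss, a standard convex surrogate for the zero-one loss, used to learn a halfspace $\mathrm{sign}(\langle w, x\rangle)$ under agnostic label noise. The data model (isotropic Gaussian features, a 5% random label flip standing in for $\mathsf{OPT}$), step size, and iteration count are all illustrative assumptions:

```python
import numpy as np

def logistic_loss_grad(w, X, y):
    """Gradient of the average logistic loss log(1 + exp(-z))
    at margins z = y * <w, x>."""
    z = y * (X @ w)
    s = 1.0 / (1.0 + np.exp(z))            # sigmoid(-z)
    # d/dw mean log(1+exp(-z)) = mean over samples of -y * x * sigmoid(-z)
    return -(X * (y * s)[:, None]).mean(axis=0)

def gd_halfspace(X, y, lr=1.0, steps=500):
    """Plain gradient descent on the convex surrogate; returns a weight
    vector defining the learned halfspace sign(<w, x>)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * logistic_loss_grad(w, X, y)
    return w

def zero_one_error(w, X, y):
    """Empirical zero-one (classification) error of the halfspace."""
    return float(np.mean(np.sign(X @ w) != y))

rng = np.random.default_rng(0)
d, n = 5, 2000
w_star = np.zeros(d)
w_star[0] = 1.0
X = rng.standard_normal((n, d))            # isotropic Gaussian: log-concave
y = np.sign(X @ w_star)
flip = rng.random(n) < 0.05                # agnostic noise: flip ~5% of labels
y[flip] *= -1

w_hat = gd_halfspace(X, y)
err = zero_one_error(w_hat, X, y)
```

With this noise level the empirical error of the learned halfspace lands near the 5% flip rate, and `w_hat` aligns closely with the true direction `w_star`; the paper's contribution is bounding such error by $\tilde O(\mathsf{OPT}^{1/2}) + \varepsilon$ over a broad distribution class via soft margins, which this toy run does not establish.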