Title
Logistic regression with total variation regularization
Authors
Abstract
We study logistic regression with a total variation penalty on the canonical parameter and show that the resulting estimator satisfies a sharp oracle inequality: the excess risk of the estimator is adaptive to the number of jumps of the underlying signal or an approximation thereof. In particular, when there are finitely many jumps and jumps up are sufficiently separated from jumps down, the estimator converges at the parametric rate up to a logarithmic term, $\log n / n$, provided the tuning parameter is chosen appropriately, of order $1/\sqrt{n}$. Our results extend earlier results for quadratic loss to logistic loss. We do not assume any a priori known bounds on the canonical parameter, but instead only make use of the local curvature of the theoretical risk.
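To make the setting concrete, the following is a minimal numerical sketch (not the paper's algorithm or proofs): it fits a per-observation canonical parameter $\theta \in \mathbb{R}^n$ under the logistic loss with a total variation penalty $\lambda \sum_i |\theta_{i+1} - \theta_i|$, using plain gradient descent on a smoothed surrogate of the TV term. The smoothing constant `eps`, the step size, and the iteration count are illustrative choices; only the model, the penalty, and the $\lambda \asymp 1/\sqrt{n}$ tuning scale come from the abstract.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def penalized_loss(theta, y, lam, eps=1e-3):
    # Logistic loss log(1 + e^theta) - y*theta, averaged over observations,
    # plus a smoothed total variation penalty sqrt(d^2 + eps) ~ |d|.
    nll = np.mean(np.logaddexp(0.0, theta) - y * theta)
    tv = np.sum(np.sqrt(np.diff(theta) ** 2 + eps))
    return nll + lam * tv

def fit_tv_logistic(y, lam, lr=0.1, iters=5000, eps=1e-3):
    """Gradient descent on the smoothed TV-penalized logistic loss
    (an illustrative solver, not the estimator analysed in the paper)."""
    n = len(y)
    theta = np.zeros(n)
    for _ in range(iters):
        grad_nll = (sigmoid(theta) - y) / n
        d = np.diff(theta)
        w = d / np.sqrt(d ** 2 + eps)   # smoothed sign of the differences
        grad_tv = np.zeros(n)
        grad_tv[:-1] -= w               # d_i depends on -theta_i ...
        grad_tv[1:] += w                # ... and on +theta_{i+1}
        theta -= lr * (grad_nll + lam * grad_tv)
    return theta

# Piecewise-constant truth with a single jump; y_i ~ Bernoulli(sigmoid(theta_i)).
rng = np.random.default_rng(0)
n = 200
true_theta = np.where(np.arange(n) < n // 2, -2.0, 2.0)
y = rng.binomial(1, sigmoid(true_theta)).astype(float)

lam = 1.0 / np.sqrt(n)  # tuning scale suggested by the abstract
theta_hat = fit_tv_logistic(y, lam)
```

The one-jump setup mirrors the "finitely many jumps" regime of the abstract: the TV penalty pulls neighbouring coordinates together, so the fitted $\hat\theta$ is close to piecewise constant, low on the first half and high on the second.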