Paper Title
Optimizing the Performative Risk under Weak Convexity Assumptions
Paper Authors
Paper Abstract
In performative prediction, a predictive model impacts the distribution that generates future data, a phenomenon that is ignored in classical supervised learning. In this closed-loop setting, the natural measure of performance, the performative risk ($\mathrm{PR}$), captures the expected loss incurred by a predictive model \emph{after} deployment. The core difficulty of using the performative risk as an optimization objective is that the data distribution itself depends on the model parameters. This dependence is governed by the environment and is not under the control of the learner. As a consequence, even the choice of a convex loss function can result in a highly non-convex $\mathrm{PR}$ minimization problem. Prior work has identified a pair of general conditions on the loss and on the mapping from model parameters to distributions that together imply convexity of the performative risk. In this paper, we relax these assumptions and focus on obtaining weaker notions of convexity, without sacrificing the amenability of the $\mathrm{PR}$ minimization problem to iterative optimization methods.
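For reference, a common formalization of the performative risk in the performative prediction literature is sketched below; the notation ($\mathcal{D}(\theta)$ for the distribution map, $\ell$ for the loss) is assumed here rather than taken from this paper:
$$
\mathrm{PR}(\theta) \;=\; \mathbb{E}_{Z \sim \mathcal{D}(\theta)}\big[\ell(Z;\theta)\big],
$$
where $\theta$ denotes the model parameters, $\mathcal{D}(\theta)$ is the data distribution induced by deploying the model, and $\ell$ is the loss on a data point $Z$. The non-convexity discussed above arises because $\theta$ enters the objective twice: through the loss and through the distribution map, so $\mathrm{PR}$ can be non-convex even when $\ell(\cdot;\theta)$ is convex in $\theta$ for every fixed distribution.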