Paper Title
Minimax Classification with 0-1 Loss and Performance Guarantees
Paper Authors
Paper Abstract
Supervised classification techniques use training samples to find classification rules with small expected 0-1 loss. Conventional methods achieve efficient learning and out-of-sample generalization by minimizing surrogate losses over specific families of rules. This paper presents minimax risk classifiers (MRCs) that do not rely on a choice of surrogate loss and family of rules. MRCs achieve efficient learning and out-of-sample generalization by minimizing the worst-case expected 0-1 loss w.r.t. uncertainty sets that are defined by linear constraints and include the true underlying distribution. In addition, the MRCs' learning stage provides performance guarantees in the form of tight lower and upper bounds on the expected 0-1 loss. We also present finite-sample generalization bounds for MRCs in terms of training size and smallest minimax risk, and show their competitive classification performance w.r.t. state-of-the-art techniques on benchmark datasets.
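The core idea in the abstract, minimizing worst-case expected 0-1 loss over an uncertainty set of distributions defined by linear constraints, can be sketched on a toy problem. The example below is an illustrative simplification, not the paper's algorithm: a binary feature, a binary label, a hypothetical uncertainty set given by moment constraints (E[x] = 0.5, E[y] = 0.6), and a brute-force search over the four deterministic rules. For each rule, the worst-case expected 0-1 loss is a linear program; the minimizing rule's value also serves as an upper bound on its expected 0-1 loss, mirroring the performance-guarantee role of the minimax risk.

```python
# Toy minimax 0-1 loss classification over a linearly constrained
# uncertainty set (illustrative sketch; constraint values are assumptions).
import itertools
import numpy as np
from scipy.optimize import linprog

# Joint (x, y) outcomes for a binary feature x and binary label y.
outcomes = [(x, y) for x in (0, 1) for y in (0, 1)]

# Uncertainty set U: distributions p over the four outcomes satisfying
# sum(p) = 1, E[x] = 0.5, E[y] = 0.6 (hypothetical moment estimates).
A_eq = np.array([
    [1.0, 1.0, 1.0, 1.0],                 # probabilities sum to 1
    [float(x) for x, _ in outcomes],      # E[x] constraint row
    [float(y) for _, y in outcomes],      # E[y] constraint row
])
b_eq = np.array([1.0, 0.5, 0.6])

def worst_case_loss(rule):
    """Max over p in U of the expected 0-1 loss of deterministic rule h: x -> y."""
    loss = np.array([1.0 if rule[x] != y else 0.0 for x, y in outcomes])
    # linprog minimizes, so negate the objective; default bounds give p >= 0.
    res = linprog(-loss, A_eq=A_eq, b_eq=b_eq)
    return -res.fun

# Minimax rule: smallest worst-case expected 0-1 loss among all 4 rules.
rules = [dict(zip((0, 1), labels))
         for labels in itertools.product((0, 1), repeat=2)]
best = min(rules, key=worst_case_loss)
print(best, worst_case_loss(best))  # the always-predict-1 rule, worst case 0.4
```

Under these assumed constraints, always predicting y = 1 attains worst-case loss 0.4, better than the 0.6 of always predicting y = 0; the actual MRCs in the paper optimize over far richer rule families and constraint sets, but the minimax structure is the same.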