Paper Title
A Le Cam Type Bound for Adversarial Learning and Applications
Paper Authors
Paper Abstract
Robustness of machine learning methods is essential for modern practical applications. Given the arms race between attack and defense methods, one may be curious regarding the fundamental limits of any defense mechanism. In this work, we focus on the problem of learning from noise-injected data, where the existing literature falls short by either assuming a specific attack method or by over-specifying the learning problem. We shed light on the information-theoretic limits of adversarial learning without assuming a particular learning process or attacker. Finally, we apply our general bounds to a canonical set of non-trivial learning problems and provide examples of common types of attacks.