Paper Title

The Dilemma Between Data Transformations and Adversarial Robustness for Time Series Application Systems

Authors

Sheila Alemany, Niki Pissinou

Abstract

Adversarial examples, or nearly indistinguishable inputs created by an attacker, significantly reduce machine learning accuracy. Theoretical evidence has shown that the high intrinsic dimensionality of datasets facilitates an adversary's ability to develop effective adversarial examples in classification models. Adjacently, the presentation of data to a learning model impacts its performance. For example, we have seen this through dimensionality reduction techniques used to aid with the generalization of features in machine learning applications. Thus, data transformation techniques go hand-in-hand with state-of-the-art learning models in decision-making applications such as intelligent medical or military systems. With this work, we explore how data transformation techniques such as feature selection, dimensionality reduction, or trend extraction may impact an adversary's ability to create effective adversarial samples on a recurrent neural network. Specifically, we analyze this impact from the perspective of the data manifold and the presentation of its intrinsic features. Our evaluation empirically shows that feature selection and trend extraction techniques may increase the RNN's vulnerability. A data transformation technique reduces the vulnerability to adversarial examples only if it approximates the dataset's intrinsic dimension, minimizes codimension, and maintains higher manifold coverage.
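
The abstract's closing criterion (approximate the intrinsic dimension, minimize codimension, preserve manifold coverage) can be made concrete with a minimal sketch. The snippet below is an illustration, not the authors' method: it assumes PCA as the dimensionality-reduction technique and uses a 95% explained-variance cutoff as a crude proxy for the intrinsic dimension; the data, shapes, and threshold are all hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical time-series windows: 500 samples with 30 features each,
# already flattened for simplicity (shapes are illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))

# Rough intrinsic-dimension estimate: the smallest number of principal
# components retaining 95% of the variance (the cutoff is an assumption,
# not a value taken from the paper).
pca_full = PCA().fit(X)
cum_var = np.cumsum(pca_full.explained_variance_ratio_)
intrinsic_dim = int(np.searchsorted(cum_var, 0.95) + 1)

# Transform so the representation's dimension approximates the estimated
# intrinsic dimension, keeping the codimension (ambient dimension minus
# manifold dimension) small.
X_reduced = PCA(n_components=intrinsic_dim).fit_transform(X)
print(f"ambient dim: {X.shape[1]}, estimated intrinsic dim: {intrinsic_dim}")
```

Read against the abstract's criterion, choosing far fewer components than the estimated intrinsic dimension would sacrifice manifold coverage, while keeping all 30 ambient features would leave a larger codimension; both cases would, by the paper's argument, fail to reduce vulnerability to adversarial examples.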
