Title
Adversarial Augmentation Policy Search for Domain and Cross-Lingual Generalization in Reading Comprehension
Authors
Abstract
Reading comprehension models often overfit to nuances of their training datasets and fail under adversarial evaluation. Training with adversarially augmented datasets improves robustness against those adversarial attacks but hurts the models' generalization. In this work, we present several effective adversaries and automated data augmentation policy search methods with the goal of making reading comprehension models more robust to adversarial evaluation, while also improving generalization to the source domain as well as to new domains and languages. We first propose three new methods for generating QA adversaries that introduce multiple points of confusion within the context, show dependence on the insertion location of the distractor, and reveal the compounding effect of mixing adversarial strategies with syntactic and semantic paraphrasing methods. Next, we find that augmenting the training datasets with uniformly sampled adversaries improves robustness to adversarial attacks but leads to a decline in performance on the original unaugmented dataset. We address this issue via RL and more efficient Bayesian policy search methods that automatically learn, over a large search space, the best combination of transformation probabilities for each adversary. Using these learned policies, we show that adversarial training can lead to significant improvements in in-domain, out-of-domain, and cross-lingual (German, Russian, Turkish) generalization.
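To make the policy idea concrete, here is a minimal sketch of the augmentation scheme the abstract describes: a policy assigns each adversary a transformation probability, and uniform sampling is the baseline that the RL/Bayesian search improves upon. The adversary names and the tag-prepending "transformations" below are placeholders, not the paper's actual QA adversaries.

```python
import random

# Hypothetical adversary names; the paper's real transformations differ.
ADVERSARIES = ["distractor_insert", "syntactic_paraphrase", "semantic_paraphrase"]

def augment(example, policy, rng):
    """Apply each adversary to `example` with its policy probability.

    `policy` maps adversary name -> transformation probability in [0, 1].
    `example` is a dict with a 'context' field. Transformations here are
    stand-in tags prepended to the context, purely for illustration.
    """
    out = dict(example)
    for name in ADVERSARIES:
        if rng.random() < policy[name]:
            out["context"] = f"[{name}] " + out["context"]
    return out

def uniform_policy(p=0.5):
    # Baseline from the abstract: every adversary sampled with equal probability.
    # The policy search instead tunes one probability per adversary.
    return {name: p for name in ADVERSARIES}

rng = random.Random(0)
example = {"context": "The quick brown fox jumps.", "question": "What jumps?"}
augmented_batch = [augment(example, uniform_policy(), rng) for _ in range(4)]
```

A learned policy would simply replace `uniform_policy()` with per-adversary probabilities chosen by the search procedure to balance adversarial robustness against performance on the unaugmented data.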