Paper Title

Regularization with Latent Space Virtual Adversarial Training

Authors

Genki Osada, Budrul Ahsan, Revoti Prasad Bora, Takashi Nishide

Abstract

Virtual Adversarial Training (VAT) has shown impressive results among recently developed regularization methods known as consistency regularization. VAT utilizes adversarial samples, generated by injecting perturbation in the input space, for training, and thereby enhances the generalization ability of a classifier. However, such adversarial samples can be generated only within a very small area around the input data point, which limits the adversarial effectiveness of such samples. To address this problem, we propose LVAT (Latent space VAT), which injects perturbation in the latent space instead of the input space. LVAT can generate adversarial samples flexibly, resulting in more adverse effects and thus more effective regularization. The latent space is built by a generative model, and in this paper we examine two different types of models: the variational auto-encoder and normalizing flow, specifically Glow. We evaluated the performance of our method in both supervised and semi-supervised learning scenarios on an image classification task using the SVHN and CIFAR-10 datasets. In our evaluation, we found that our method outperforms VAT and other state-of-the-art methods.
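The core idea described in the abstract — encode the input with a generative model, find an adversarial perturbation in the latent space, decode it, and penalize the change in the classifier's prediction — can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's implementation: the linear `encode`/`decode`/`classify` functions are toy stand-ins for a trained VAE or Glow model and a classifier, the finite-difference gradient replaces backpropagation, and names such as `lvat_loss` and `n_power` are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumed, for illustration) for a trained generative
# model and classifier: input dim 8, latent dim 4, 3 classes.
W_enc = rng.normal(size=(4, 8))
W_dec = W_enc.T
W_clf = rng.normal(size=(3, 8))

def encode(x):   return W_enc @ x          # x -> latent z
def decode(z):   return W_dec @ z          # latent z -> reconstructed x
def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()
def classify(x): return softmax(W_clf @ x)  # class probabilities

def kl(p, q):
    # KL divergence between two discrete distributions (consistency loss).
    return float(np.sum(p * np.log(p / q)))

def lvat_loss(x, eps=0.1, n_power=1):
    """One LVAT step: perturb in latent space, measure prediction shift."""
    z = encode(x)
    p = classify(x)
    r = rng.normal(size=z.shape)  # random initial latent direction
    for _ in range(n_power):      # power iteration toward the worst direction
        r = r / np.linalg.norm(r)
        # Finite-difference gradient of the KL w.r.t. the latent perturbation
        # (a stand-in for backprop through the decoder and classifier).
        g = np.zeros_like(r)
        h = 1e-4
        for i in range(len(r)):
            d = np.zeros_like(r)
            d[i] = h
            g[i] = (kl(p, classify(decode(z + 1e-3 * r + d)))
                    - kl(p, classify(decode(z + 1e-3 * r - d)))) / (2 * h)
        r = g
    r_adv = eps * r / np.linalg.norm(r)
    # Consistency loss: prediction on x vs. on the decoded adversarial sample.
    return kl(p, classify(decode(z + r_adv)))

x = rng.normal(size=8)
loss = lvat_loss(x)
```

Because the perturbation is applied to `z` rather than to `x`, the decoded adversarial sample can move farther from the input data point than the small-norm input-space perturbation that standard VAT allows, which is the flexibility the abstract refers to.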
