Title
End-to-end Kernel Learning via Generative Random Fourier Features
Authors
Abstract
Random Fourier features (RFFs) provide a promising way for kernel learning from a spectral perspective. Current RFF-based kernel learning methods usually work in a two-stage manner. In the first stage, learning the optimal feature map is often formulated as a target alignment problem, which aims to align the learned kernel with a pre-defined target kernel (usually the ideal kernel). In the second stage, a linear learner is trained on the mapped random features. However, the pre-defined kernel in target alignment is not necessarily optimal for the generalization of the linear learner. Instead, in this paper, we consider a one-stage process that incorporates the kernel learning and the linear learner into a unified framework. Specifically, a generative network via RFFs is devised to learn the kernel implicitly, followed by a linear classifier parameterized as a fully-connected layer. The generative network and the classifier are then trained jointly by solving the empirical risk minimization (ERM) problem, yielding a one-stage solution. This end-to-end scheme naturally allows deeper features, corresponding to a multi-layer structure, and shows superior generalization performance over the classical two-stage RFF-based methods on real-world classification tasks. Moreover, inspired by the randomized resampling mechanism of the proposed method, its enhanced adversarial robustness is investigated and experimentally verified.
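As a rough sketch of the one-stage pipeline the abstract describes, the forward pass below chains a frequency generator, an RFF map, and a fully-connected classifier. The single-linear-map generator, the dimensions, and the reparameterized noise are illustrative assumptions, not the authors' exact architecture; in the one-stage scheme, the generator parameter and the classifier weights would be trained jointly by minimizing an empirical risk such as cross-entropy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions): n samples, d input dims, D random features, C classes.
n, d, D, C = 32, 5, 64, 2
X = rng.normal(size=(n, d))

# "Generative network" for frequencies: here just one linear map applied to
# base noise (reparameterization), so the sampled frequencies W depend on the
# trainable parameter A -- a stand-in for the paper's generator.
eps = rng.normal(size=(D, d))       # base noise, resampled on each forward pass
A = rng.normal(size=(d, d))         # trainable generator parameter
W = eps @ A                         # generated frequencies, shape (D, d)
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

# Random Fourier feature map: phi(x) = sqrt(2/D) * cos(W x + b), which
# implicitly defines a shift-invariant kernel k(x, y) ~ phi(x) . phi(y).
Phi = np.sqrt(2.0 / D) * np.cos(X @ W.T + b)   # shape (n, D)

# Linear classifier as a fully-connected layer on the mapped features.
V = rng.normal(size=(D, C))
logits = Phi @ V                    # shape (n, C)

print(Phi.shape, logits.shape)
```

The randomized resampling mechanism mentioned in the abstract corresponds to redrawing `eps` at each forward pass, so the effective feature map is stochastic even after training.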