Paper Title

Multi-Task Variational Information Bottleneck

Paper Authors

Weizhu Qian, Bowei Chen, Yichao Zhang, Guanghui Wen, Franck Gechter

Paper Abstract

Multi-task learning (MTL) is an important subject in machine learning and artificial intelligence. Its applications to computer vision, signal processing, and speech recognition are ubiquitous. Although this subject has attracted considerable attention recently, the performance and robustness of existing models are not well balanced across different tasks. This article proposes an MTL model based on the variational information bottleneck (VIB) architecture, which can provide a more effective latent representation of the input features for the downstream tasks. Extensive observations on three public datasets under adversarial attacks show that the proposed model is competitive with state-of-the-art algorithms in terms of prediction accuracy. Experimental results suggest that combining the VIB and the task-dependent uncertainties is a very effective way to extract valid information from the input features for accomplishing multiple tasks.
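
The abstract describes a shared VIB encoder whose stochastic latent representation feeds several task-specific heads, with the task losses weighted by learned task-dependent uncertainties. Below is a minimal PyTorch sketch of that idea, assuming classification tasks; the names (`MultiTaskVIB`, `mtl_vib_loss`), the layer sizes, the `beta` coefficient, and the Kendall-et-al.-style uncertainty weighting are illustrative assumptions, not the paper's exact architecture or loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskVIB(nn.Module):
    """Shared stochastic (VIB) encoder with one prediction head per task."""

    def __init__(self, in_dim, latent_dim, task_out_dims):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.heads = nn.ModuleList([nn.Linear(latent_dim, d) for d in task_out_dims])
        # One learnable log-variance per task: the task-dependent uncertainty
        # used to weight the task losses (illustrative, not the paper's exact form).
        self.task_log_var = nn.Parameter(torch.zeros(len(task_out_dims)))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return [head(z) for head in self.heads], mu, logvar


def mtl_vib_loss(model, outputs, targets, mu, logvar, beta=1e-3):
    """Sum of uncertainty-weighted task losses plus the VIB KL regularizer."""
    # KL( q(z|x) || N(0, I) ), averaged over the batch.
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
    loss = beta * kl
    for t, (out, y) in enumerate(zip(outputs, targets)):
        task_loss = F.cross_entropy(out, y)  # classification assumed for every task
        precision = torch.exp(-model.task_log_var[t])
        # Down-weight noisy tasks and penalize large learned variances.
        loss = loss + precision * task_loss + 0.5 * model.task_log_var[t]
    return loss


# Toy usage: two classification tasks on 32-dimensional inputs.
model = MultiTaskVIB(in_dim=32, latent_dim=16, task_out_dims=[10, 5])
x = torch.randn(8, 32)
targets = [torch.randint(0, 10, (8,)), torch.randint(0, 5, (8,))]
outputs, mu, logvar = model(x)
loss = mtl_vib_loss(model, outputs, targets, mu, logvar)
loss.backward()
```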
