Paper Title

Vision Transformers are Parameter-Efficient Audio-Visual Learners

Paper Authors

Yan-Bo Lin, Yi-Lin Sung, Jie Lei, Mohit Bansal, Gedas Bertasius

Paper Abstract

Vision transformers (ViTs) have achieved impressive results on various computer vision tasks over the last several years. In this work, we study the capability of frozen ViTs, pretrained only on visual data, to generalize to audio-visual data without finetuning any of their original parameters. To do so, we propose a latent audio-visual hybrid (LAVISH) adapter that adapts pretrained ViTs to audio-visual tasks by injecting a small number of trainable parameters into every layer of a frozen ViT. To efficiently fuse visual and audio cues, our LAVISH adapter uses a small set of latent tokens, which form an attention bottleneck, thus eliminating the quadratic cost of standard cross-attention. Compared to existing modality-specific audio-visual methods, our approach achieves competitive or even better performance on various audio-visual tasks while using fewer tunable parameters and without relying on costly audio pretraining or external audio encoders. Our code is available at https://genjib.github.io/project_page/LAVISH/
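The core mechanism described in the abstract is the latent attention bottleneck: a small set of learnable latent tokens first attends over one modality's tokens, then serves as the keys and values for the other modality, so the attention cost scales linearly with each sequence length rather than quadratically in their product. Below is a minimal PyTorch sketch of this idea; it is an illustrative reconstruction rather than the paper's actual LAVISH adapter, and names such as `LatentBottleneckFusion`, `collect`, and `distribute` are hypothetical.

```python
import torch
import torch.nn as nn

class LatentBottleneckFusion(nn.Module):
    """Minimal sketch of a latent-token attention bottleneck.

    A few learnable latents first attend over the source modality's
    tokens; the target modality's tokens then attend over the latents.
    Illustrative only -- not the authors' implementation.
    """

    def __init__(self, dim: int, num_latents: int = 8, num_heads: int = 4):
        super().__init__()
        # Small set of learnable latent tokens -- the bottleneck.
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        # Latents gather cues from the source modality (e.g., audio).
        self.collect = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Target tokens (e.g., visual patches) read from the latents.
        self.distribute = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, target_tokens: torch.Tensor,
                source_tokens: torch.Tensor) -> torch.Tensor:
        b = target_tokens.shape[0]
        latents = self.latents.unsqueeze(0).expand(b, -1, -1)
        # Step 1: compress the source sequence of length S into
        # num_latents tokens. Cost is O(num_latents * S), not O(T * S).
        latents, _ = self.collect(latents, source_tokens, source_tokens)
        # Step 2: inject the compressed cues into the T target tokens.
        # Cost is O(T * num_latents), again linear in sequence length.
        fused, _ = self.distribute(target_tokens, latents, latents)
        # Residual add keeps the frozen backbone's features intact.
        return target_tokens + fused

# Toy usage: fuse audio cues into ViT patch tokens.
visual = torch.randn(2, 196, 768)  # e.g., 14x14 ViT patch tokens
audio = torch.randn(2, 128, 768)   # e.g., projected spectrogram tokens
fusion = LatentBottleneckFusion(dim=768)
out = fusion(visual, audio)        # shape: (2, 196, 768)
```

In an adapter setup like the one the abstract describes, a module of this kind would sit alongside each frozen ViT layer, with only the latents and the attention projections being trained while the backbone's original parameters stay fixed.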
