Paper Title
Interpretable Acoustic Representation Learning on Breathing and Speech Signals for COVID-19 Detection
Paper Authors
Paper Abstract
In this paper, we describe an approach for representation learning of audio signals for the task of COVID-19 detection. The raw audio samples are processed with a bank of 1-D convolutional filters that are parameterized as cosine-modulated Gaussian functions. The choice of these kernels allows the filterbank to be interpreted as a set of smooth band-pass filters. The filtered outputs are pooled, log-compressed, and passed through a self-attention based relevance weighting mechanism, which emphasizes the regions of the time-frequency decomposition that are important for the downstream task. The subsequent layers of the model consist of a recurrent architecture, and the model is trained for COVID-19 detection. In our experiments on the Coswara dataset, we show that the proposed model achieves significant performance improvements over the baseline system as well as other representation learning approaches. Further, the proposed approach is shown to be uniformly applicable to both speech and breathing signals, and to transfer learning from a larger dataset.
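The front end described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the kernel length, the center frequencies and bandwidths (learned per filter in the paper, fixed constants here), and the 25 ms pooling window are all assumed values chosen for the example.

```python
import numpy as np

def cosine_modulated_gaussian(center_freq, bandwidth, kernel_len=129, sr=16000):
    """Build a 1-D kernel g(t) = cos(2*pi*f_c*t) * exp(-t^2 / (2*sigma^2)).

    center_freq (Hz) sets the band center; bandwidth (Hz) sets the Gaussian
    width. In the paper these parameters are learned; here they are fixed.
    """
    t = (np.arange(kernel_len) - kernel_len // 2) / sr   # time axis (seconds)
    sigma = 1.0 / (2.0 * np.pi * bandwidth)              # width from bandwidth
    return np.cos(2 * np.pi * center_freq * t) * np.exp(-t**2 / (2 * sigma**2))

# Filter a 1 s test signal with a small bank of band-pass kernels.
sr = 16000
x = np.random.default_rng(0).standard_normal(sr)
bank = [cosine_modulated_gaussian(f, 100.0, sr=sr) for f in (300, 1000, 3000)]
features = np.stack([np.convolve(x, k, mode="same") for k in bank])

# Pool (mean squared energy over 25 ms frames) and log-compress,
# mirroring the "pooled, log-compressed" step in the abstract.
frame = sr // 40                                          # 400 samples = 25 ms
pooled = features[:, : features.shape[1] // frame * frame]
pooled = pooled.reshape(len(bank), -1, frame)
log_energies = np.log(np.mean(pooled**2, axis=-1) + 1e-8)
print(log_energies.shape)  # (3, 40): 3 filters x 40 frames
```

The resulting `log_energies` matrix plays the role of a learned time-frequency decomposition; in the full model, the self-attention relevance weighting and the recurrent classifier would operate on such a representation.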