Paper Title
Learning Subject-Invariant Representations from Speech-Evoked EEG Using Variational Autoencoders
Paper Authors
Paper Abstract
The electroencephalogram (EEG) is a powerful method for understanding how the brain processes speech. Linear models have recently been replaced for this purpose by deep neural networks, yielding promising results. In related EEG classification fields, it has been shown that explicitly modeling subject-invariant features improves the generalization of models across subjects and benefits classification accuracy. In this work, we adapt factorized hierarchical variational autoencoders to exploit parallel EEG recordings of the same stimuli. We model the EEG in two disentangled latent spaces. Subject classification accuracy reaches 98.96% and 1.60% on the subject and content latent spaces, respectively, whereas binary content classification experiments reach accuracies of 51.51% and 62.91% on the subject and content latent spaces, respectively.
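The core idea of the abstract, splitting the latent code of a variational autoencoder into a subject part and a content part, can be sketched in a few lines. The following is a minimal illustrative forward pass, not the paper's factorized hierarchical VAE; all dimensions, weight initializations, and names are hypothetical, and the loss shown is just the standard ELBO pieces (reconstruction error plus one KL term per latent space).

```python
import numpy as np

# Hypothetical sizes for illustration: 64 EEG features,
# an 8-dim subject latent z_s and an 8-dim content latent z_c.
D_IN, D_S, D_C = 64, 8, 8
rng = np.random.default_rng(0)

def init(d_in, d_out):
    """Small random weights and zero biases for a linear layer."""
    return rng.standard_normal((d_in, d_out)) * 0.01, np.zeros(d_out)

# Two encoders (one per latent space), each emitting mean and log-variance.
W_s, b_s = init(D_IN, 2 * D_S)
W_c, b_c = init(D_IN, 2 * D_C)
# One decoder maps the concatenated latents back to EEG feature space.
W_d, b_d = init(D_S + D_C, D_IN)

def encode(x, W, b, d):
    h = x @ W + b
    mu, logvar = h[:, :d], h[:, d:]
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps  # reparameterization trick
    return z, mu, logvar

def forward(x):
    z_s, mu_s, lv_s = encode(x, W_s, b_s, D_S)  # subject latent
    z_c, mu_c, lv_c = encode(x, W_c, b_c, D_C)  # content latent
    x_hat = np.concatenate([z_s, z_c], axis=1) @ W_d + b_d
    # Loss sketch: reconstruction error plus a KL term per latent space.
    rec = np.mean((x - x_hat) ** 2)
    kl = lambda mu, lv: -0.5 * np.mean(1 + lv - mu**2 - np.exp(lv))
    return x_hat, rec + kl(mu_s, lv_s) + kl(mu_c, lv_c)

x = rng.standard_normal((4, D_IN))  # a batch of 4 EEG feature vectors
x_hat, loss = forward(x)
```

In the paper's setting, disentanglement is then probed by training classifiers on each latent space separately, which is what the subject- and content-accuracy numbers above measure.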