Paper Title


DAGAM: A Domain Adversarial Graph Attention Model for Subject Independent EEG-Based Emotion Recognition

Paper Authors

Tao Xu, Wang Dang, Jiabao Wang, Yun Zhou

Paper Abstract


One of the most significant challenges in EEG-based emotion recognition is cross-subject EEG variation, which leads to poor performance and generalizability. This paper proposes a novel EEG-based emotion recognition model called the domain adversarial graph attention model (DAGAM). The basic idea is to generate a graph that models multichannel EEG signals using biological topology; graph theory can topologically describe and analyze the relationships and mutual dependencies between EEG channels. Then, unlike other graph convolutional networks, self-attention pooling is applied to extract salient EEG features from the graph, which effectively improves performance. Finally, after graph pooling, graph-based domain adversarial learning is employed to identify and handle EEG variation across subjects, achieving good generalizability. We conduct extensive evaluations on two benchmark datasets (SEED and SEED IV) and obtain state-of-the-art results in subject-independent emotion recognition. Our model boosts SEED accuracy to 92.59% (a 4.69% improvement) with the lowest standard deviation of 3.21% (a 2.92% reduction), and SEED IV accuracy to 80.74% (a 6.90% improvement) with the lowest standard deviation of 4.14% (a 3.88% reduction).
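The abstract describes a three-stage pipeline: graph attention over EEG channels, self-attention pooling to keep salient nodes, and a domain adversarial head trained through gradient reversal. The sketch below illustrates that pipeline in PyTorch under loudly stated assumptions: it is not the authors' implementation, and all layer sizes, the single-head attention, the top-k pooling variant, and the names (`DAGAMSketch`, `GradReverse`) are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in backward.

    This is the standard gradient-reversal trick used in domain adversarial
    training, not code from the DAGAM paper itself.
    """
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class DAGAMSketch(nn.Module):
    """Minimal sketch of the pipeline the abstract describes.

    Assumed (hypothetical) defaults: 62 EEG channels, 5 features per channel
    (e.g. differential entropy per band), 3 emotion classes (SEED),
    and a domain label per training subject.
    """
    def __init__(self, n_ch=62, feat=5, hid=32, keep=16, n_emo=3, n_dom=14):
        super().__init__()
        self.proj = nn.Linear(feat, hid)
        self.attn = nn.Linear(2 * hid, 1)   # GAT-style pairwise attention score
        self.score = nn.Linear(hid, 1)      # node score for self-attention pooling
        self.keep = keep
        self.emo = nn.Linear(hid, n_emo)    # emotion classifier head
        self.dom = nn.Linear(hid, n_dom)    # domain (subject) classifier head

    def forward(self, x, lam=1.0):
        # x: (batch, channels, features) per-channel EEG feature vectors
        h = self.proj(x)                                   # (B, N, H)
        B, N, H = h.shape
        # Attention over all channel pairs (dense graph for simplicity).
        hi = h.unsqueeze(2).expand(B, N, N, H)
        hj = h.unsqueeze(1).expand(B, N, N, H)
        e = F.leaky_relu(self.attn(torch.cat([hi, hj], -1))).squeeze(-1)
        a = torch.softmax(e, dim=-1)                       # (B, N, N)
        h = torch.bmm(a, h)                                # aggregate neighbors
        # Self-attention pooling: keep the top-k salient channels.
        s = self.score(h).squeeze(-1)                      # (B, N)
        idx = s.topk(self.keep, dim=-1).indices
        h = torch.gather(h * torch.tanh(s).unsqueeze(-1), 1,
                         idx.unsqueeze(-1).expand(B, self.keep, H))
        g = h.mean(dim=1)                                  # graph readout (B, H)
        # Emotion logits plus a gradient-reversed domain branch.
        return self.emo(g), self.dom(GradReverse.apply(g, lam))

model = DAGAMSketch()
emo_logits, dom_logits = model(torch.randn(4, 62, 5))
```

In training, the emotion loss and the domain loss would both be minimized; the gradient reversal makes the shared features subject-invariant, which is the mechanism the abstract credits for the cross-subject generalizability.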
