Title
Personalized Automatic Sleep Staging with Single-Night Data: a Pilot Study with KL-Divergence Regularization
Authors
Abstract
Brain waves vary between people. An obvious way to improve automatic sleep staging for longitudinal sleep monitoring is to personalize the algorithm based on individual characteristics extracted from the first night of data. As a single night is a very small amount of data with which to train a sleep staging model, we propose a Kullback-Leibler (KL) divergence regularized transfer-learning approach to address this problem. We employ a pretrained SeqSleepNet (i.e. the subject-independent model) as a starting point and finetune it with the single-night personalization data to derive the personalized model. This is done by adding the KL divergence between the output of the subject-independent model and the output of the personalized model to the loss function during finetuning. In effect, the KL-divergence regularization prevents the personalized model from overfitting to the single-night data and straying too far from the subject-independent model. Experimental results on the Sleep-EDF Expanded database with 75 subjects show that sleep staging personalization with single-night data is possible with the help of the proposed KL-divergence regularization. On average, we achieve a personalized sleep staging accuracy of 79.6%, a Cohen's kappa of 0.706, a macro F1-score of 73.0%, a sensitivity of 71.8%, and a specificity of 94.2%. We find that the approach is robust against overfitting, and that it improves accuracy by 4.5 percentage points compared to no personalization and by 2.2 percentage points compared to personalization without regularization.
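The regularized finetuning objective described in the abstract can be sketched as a per-sample loss: cross-entropy on the single-night labels plus a weighted KL divergence between the subject-independent model's output distribution and the personalized model's output distribution. This is a minimal illustration, not the paper's implementation; the function names and the regularization weight `lam` are assumptions.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions over sleep stages.
    `eps` guards against log(0) for near-zero probabilities."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def regularized_loss(personalized_probs, true_stage, pretrained_probs, lam=0.1):
    """Per-epoch (30-s segment) loss for finetuning the personalized model:
    cross-entropy on the ground-truth stage of the single personalization
    night, plus lam * KL(subject-independent output || personalized output).
    `lam` is a hypothetical regularization weight, not taken from the paper."""
    ce = -math.log(personalized_probs[true_stage] + 1e-12)
    kl = kl_divergence(pretrained_probs, personalized_probs)
    return ce + lam * kl
```

With `lam = 0` this reduces to plain finetuning; a larger `lam` pulls the personalized model's predictions back toward the pretrained subject-independent model, which is the mechanism the abstract credits with preventing overfitting to the single night of data.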