Paper Title
XSleepNet: Multi-View Sequential Model for Automatic Sleep Staging
Paper Authors
Paper Abstract
Automating sleep staging is vital to scale up sleep assessment and diagnosis to serve millions experiencing sleep deprivation and disorders and enable longitudinal sleep monitoring in home environments. Learning from raw polysomnography signals and their derived time-frequency image representations has been prevalent. However, learning from multi-view inputs (e.g., both the raw signals and the time-frequency images) for sleep staging is difficult and not well understood. This work proposes a sequence-to-sequence sleep staging model, XSleepNet, that is capable of learning a joint representation from both raw signals and time-frequency images. Since different views may generalize or overfit at different rates, the proposed network is trained such that the learning pace on each view is adapted based on their generalization/overfitting behavior. In simple terms, the learning on a particular view is sped up when it is generalizing well and slowed down when it is overfitting. View-specific generalization/overfitting measures are computed on the fly during the training course and used to derive weights to blend the gradients from different views. As a result, the network is able to retain the representation power of different views in the joint features, which represent the underlying distribution better than those learned by each individual view alone. Furthermore, the XSleepNet architecture is principally designed to gain robustness to the amount of training data and to increase the complementarity between the input views. Experimental results on five databases of different sizes show that XSleepNet consistently outperforms the single-view baselines and the multi-view baseline with a simple fusion strategy. Finally, XSleepNet also outperforms prior sleep staging methods and improves previous state-of-the-art results on the experimental databases.