Paper Title
Robust Augmentation for Multivariate Time Series Classification
Paper Authors
Paper Abstract
Neural networks are capable of learning powerful representations of data, but they are susceptible to overfitting due to their large number of parameters. This is particularly challenging in the domain of time series classification, where datasets may contain fewer than 100 training examples. In this paper, we show that the simple methods of cutout, cutmix, mixup, and window warp improve robustness and overall performance in a statistically significant way for convolutional, recurrent, and self-attention-based architectures for time series classification. We evaluate these methods on 26 datasets from the University of East Anglia Multivariate Time Series Classification (UEA MTSC) archive and analyze how they perform on different types of time series data. We show that the InceptionTime network with augmentation improves accuracy by 1% to 45% on 18 of the 26 datasets compared to training without augmentation. We also show that augmentation improves accuracy for recurrent and self-attention-based architectures.
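To make the four augmentations named in the abstract concrete, the sketch below gives a minimal NumPy implementation. This is an illustrative reading of the standard techniques, not the authors' code: it assumes multivariate series of shape (timesteps, channels) and one-hot label vectors, and the window-size and mixing parameters (max_len, alpha, scales) are placeholder choices, not values reported in the paper.

```python
# Illustrative sketch (not the paper's implementation) of cutout, mixup,
# cutmix, and window warp for multivariate time series of shape
# (timesteps, channels) with one-hot labels. Uses only NumPy.
import numpy as np


def cutout(x, max_len=0.2, rng=np.random):
    """Zero out a random contiguous window of time steps."""
    t = x.shape[0]
    w = rng.randint(1, max(2, int(max_len * t)))
    start = rng.randint(0, t - w + 1)
    out = x.copy()
    out[start:start + w] = 0.0
    return out


def mixup(x1, y1, x2, y2, alpha=0.2, rng=np.random):
    """Convex combination of two series and of their one-hot labels."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2


def cutmix(x1, y1, x2, y2, max_len=0.3, rng=np.random):
    """Paste a random window from x2 into x1; mix labels by window fraction."""
    t = x1.shape[0]
    w = rng.randint(1, max(2, int(max_len * t)))
    start = rng.randint(0, t - w + 1)
    out = x1.copy()
    out[start:start + w] = x2[start:start + w]
    lam = 1.0 - w / t
    return out, lam * y1 + (1 - lam) * y2


def window_warp(x, max_len=0.3, scales=(0.5, 2.0), rng=np.random):
    """Stretch or compress a random window in time, then resample the
    whole series back to its original length."""
    t, c = x.shape
    w = rng.randint(2, max(3, int(max_len * t)))
    start = rng.randint(0, t - w + 1)
    warped_len = max(2, int(w * rng.choice(scales)))
    # Resample the selected window channel-wise via linear interpolation.
    idx = np.linspace(0, w - 1, warped_len)
    warped = np.stack(
        [np.interp(idx, np.arange(w), x[start:start + w, ch]) for ch in range(c)],
        axis=1,
    )
    stitched = np.concatenate([x[:start], warped, x[start + w:]], axis=0)
    # Resample the full stitched series back to the original number of steps.
    idx_full = np.linspace(0, stitched.shape[0] - 1, t)
    return np.stack(
        [np.interp(idx_full, np.arange(stitched.shape[0]), stitched[:, ch]) for ch in range(c)],
        axis=1,
    )
```

In practice these transforms would be applied on the fly to each training batch, so that a small dataset (fewer than 100 examples, as noted above) is never seen twice in exactly the same form.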