Paper Title

Generative time series models using Neural ODE in Variational Autoencoders

Paper Authors

Garsdal, M. L., Søgaard, V., Sørensen, S. M.

Paper Abstract

In this paper, we implement Neural Ordinary Differential Equations in a Variational Autoencoder setting for generative time series modeling. An object-oriented approach to the code was taken to allow for easier development and research, and all code used in the paper can be found at https://github.com/simonmoesorensen/neural-ode-project. The original results were first reproduced and the reconstructions compared to a baseline Long Short-Term Memory (LSTM) autoencoder. The model was then extended with an LSTM encoder and challenged with more complex data consisting of time series in the form of spring oscillations. The model showed promise and was able to reconstruct the true trajectories for all complexities of data with a smaller RMSE than the baseline model. However, while it captured the dynamic behavior of the time series for known data in the decoder, it was not able to produce extrapolations that followed the true trajectory well for any of the complexities of spring data. A final experiment was carried out in which the model was also presented with 68 days of solar power production data, and it reconstructed the data just as well as the baseline, even with very little data available. Finally, the model's training time was compared to the baseline's. It was found that for small amounts of data the NODE method was significantly slower to train than the baseline, while for larger amounts of data the NODE method trained equally fast or faster. The paper ends with a future-work section that describes several natural extensions of the work presented here, such as further investigating the importance of the input data, including extrapolation in the baseline model, or testing more specific model setups.
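To make the described architecture concrete, the sketch below shows a minimal latent-ODE variational autoencoder with an LSTM encoder, in the spirit of the model summarized above. This is an illustrative assumption, not the authors' implementation: the module names, dimensions, spring-like toy data, and the use of PyTorch with the torchdiffeq `odeint` solver are choices made here for the example; the actual code is in the linked repository.

```python
# Minimal latent-ODE VAE sketch with an LSTM encoder (illustrative assumption,
# not the paper's exact implementation).
import torch
import torch.nn as nn
from torchdiffeq import odeint  # assumes the torchdiffeq package is installed


class LatentODEfunc(nn.Module):
    """Parameterizes dz/dt in the latent space."""
    def __init__(self, latent_dim=4, hidden_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, latent_dim),
        )

    def forward(self, t, z):
        return self.net(z)


class LatentODEVAE(nn.Module):
    def __init__(self, obs_dim=1, latent_dim=4, rnn_hidden=32):
        super().__init__()
        # LSTM encoder: runs over the observed sequence to produce q(z0 | x)
        self.encoder = nn.LSTM(obs_dim, rnn_hidden, batch_first=True)
        self.hidden_to_latent = nn.Linear(rnn_hidden, 2 * latent_dim)
        self.odefunc = LatentODEfunc(latent_dim)
        # Decoder maps latent states back to observation space
        self.decoder = nn.Linear(latent_dim, obs_dim)

    def forward(self, x, t):
        # x: (batch, seq_len, obs_dim); t: (seq_len,) time points,
        # which may extend beyond the observed range for extrapolation
        _, (h, _) = self.encoder(x)
        mu, logvar = self.hidden_to_latent(h[-1]).chunk(2, dim=-1)
        z0 = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        # Solve the latent ODE forward over the requested time points
        zs = odeint(self.odefunc, z0, t)           # (seq_len, batch, latent_dim)
        x_hat = self.decoder(zs).permute(1, 0, 2)  # (batch, seq_len, obs_dim)
        return x_hat, mu, logvar


# Usage: reconstruct a noisy spring-like oscillation, e.g. x(t) = sin(2t) + noise
t = torch.linspace(0.0, 4.0, 100)
x = torch.sin(2.0 * t).unsqueeze(0).unsqueeze(-1) + 0.05 * torch.randn(1, 100, 1)
model = LatentODEVAE()
x_hat, mu, logvar = model(x, t)
print(x_hat.shape)  # torch.Size([1, 100, 1])
```

The point of this setup is that the decoder first unrolls the latent state continuously in time with an ODE solver, so reconstruction and extrapolation both amount to evaluating the same latent trajectory at different time points; training would combine a reconstruction loss on x_hat with the usual KL term on (mu, logvar).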
