Paper Title
LyaNet: A Lyapunov Framework for Training Neural ODEs

Paper Authors

Ivan Dario Jimenez Rodriguez, Aaron D. Ames, Yisong Yue

Paper Abstract

We propose a method for training ordinary differential equations by using a control-theoretic Lyapunov condition for stability. Our approach, called LyaNet, is based on a novel Lyapunov loss formulation that encourages the inference dynamics to converge quickly to the correct prediction. Theoretically, we show that minimizing the Lyapunov loss guarantees exponential convergence to the correct solution and enables a novel robustness guarantee. We also provide practical algorithms, including one that avoids the cost of backpropagating through a solver or using the adjoint method. Relative to standard Neural ODE training, we empirically find that LyaNet can offer improved prediction performance, faster convergence of inference dynamics, and improved adversarial robustness. Our code is available at https://github.com/ivandariojr/LyapunovLearning.
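To make the core idea concrete: a Lyapunov condition for exponential stability requires a potential V to decrease along trajectories, dV/dt ≤ -κV, and a loss can penalize violations of that inequality. The sketch below is a toy illustration of this condition (not the paper's exact formulation), assuming a hypothetical quadratic potential V(x) = ||x - target||² and hand-written dynamics f:

```python
import numpy as np

def lyapunov_loss(x, target, f, kappa=1.0):
    """Penalize violations of the exponential-stability condition
    dV/dt <= -kappa * V, with V(x) = ||x - target||^2.
    Toy sketch only; LyaNet's actual loss is defined in the paper."""
    v = np.sum((x - target) ** 2, axis=-1)             # V(x)
    vdot = np.sum(2.0 * (x - target) * f(x), axis=-1)  # dV/dt = grad V . f(x)
    return np.mean(np.maximum(0.0, vdot + kappa * v))  # hinge on the violation

target = np.array([1.0, -1.0])
x = np.array([[0.0, 0.0], [2.0, 3.0]])  # a small batch of states

# Stable dynamics flow toward the target: dV/dt = -2V <= -V, so loss is 0.
stable = lambda s: -1.0 * (s - target)
print(lyapunov_loss(x, target, stable))    # → 0.0

# Unstable dynamics flow away from the target, so the loss is positive.
unstable = lambda s: 0.5 * (s - target)
print(lyapunov_loss(x, target, unstable))  # → 19.0
```

Driving this loss to zero certifies V shrinks at rate κ everywhere it was evaluated, which is the sense in which minimizing a Lyapunov loss yields exponential convergence of the inference dynamics.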