Paper Title

Tailored max-out networks for learning convex PWQ functions

Paper Authors

Dieter Teichrib, Moritz Schulze Darup

Paper Abstract

Convex piecewise quadratic (PWQ) functions frequently appear in control and elsewhere. For instance, it is well known that the optimal value function (OVF) as well as Q-functions for linear MPC are convex PWQ functions. Now, in learning-based control, these functions are often represented with the help of artificial neural networks (NNs). In this context, a recurring question is how to choose the topology of the NN in terms of depth, width, and activations in order to enable efficient learning. An elegant answer to that question could be a topology that, in principle, allows an exact description of the function to be learned. Such solutions are already available for related problems. In fact, suitable topologies are known for piecewise affine (PWA) functions that can, for example, reflect the optimal control law in linear MPC. Following this direction, we show in this paper that convex PWQ functions can be exactly described by max-out NNs with only one hidden layer and two neurons.
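
For intuition, the minimal sketch below shows a generic max-out unit, i.e., a pointwise maximum over several affine maps of its input. To obtain quadratic pieces, the scalar input x is lifted to [x, x**2] here; this lifting is only an illustrative assumption and not necessarily the tailored one-hidden-layer, two-neuron construction proposed in the paper. The names maxout_unit, W, and b are hypothetical.

```python
import numpy as np

def maxout_unit(z, W, b):
    """Generic max-out unit: pointwise maximum over K affine maps of the
    (possibly lifted) input z, i.e., max_k (W[k] @ z + b[k])."""
    return np.max(W @ z + b)

# Illustrative lifting (an assumption, not the paper's construction):
# a scalar input x is lifted to z = [x, x**2], so affine maps of z are
# quadratics in x. The unit below then evaluates the convex PWQ function
# f(x) = max(x**2 - 1, 2*x).
W = np.array([[0.0, 1.0],   # 0*x + 1*x**2  ->  x**2
              [2.0, 0.0]])  # 2*x + 0*x**2  ->  2*x
b = np.array([-1.0, 0.0])

for x in (-2.0, 0.0, 3.0):
    z = np.array([x, x**2])
    print(f"f({x}) = {maxout_unit(z, W, b)}")
```

Since the maximum of convex quadratic and affine pieces is convex, the printed values (3, 0, 8) trace a convex PWQ function of x.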
