Paper Title

Polynomial time computable functions over the reals characterized using discrete ordinary differential equations

Authors

Manon Blanc, Olivier Bournez

Abstract

The class of functions from the integers to the integers computable in polynomial time has recently been characterized using discrete ordinary differential equations (ODEs), also known as finite differences. In the framework of ordinary differential equations, it is very natural to try to extend the approach to classes of functions over the reals, and not only over the integers. Recently, an extension of the previous characterization was obtained for functions from the integers to the reals, but the method used in the proof, based on the existence of a continuous function from the integers to a suitable discrete set of reals, cannot be extended to functions from the reals to the reals, as such a function cannot exist for clear topological reasons. In this article, we prove that it is indeed possible to provide an elegant and simple algebraic characterization of functions from the reals to the reals: we characterize them as the smallest class of functions that contains some basic functions and is closed under composition, linear length ODEs, and a natural effective limit schema. This is obtained using an alternative proof technique based on the construction of specific suitable functions defined recursively, together with a barycentric method. Furthermore, we extend previous characterizations in several directions: first, we prove that there is no need for multiplication. We also prove a normal form theorem, with a nice side effect related to formal neural networks. Indeed, given some fixed error and some polynomial time bound t(n), our setting effectively produces a neural network that computes the function over its domain with the given precision, for any function f computable in polynomial time t(n).
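For readers unfamiliar with the discrete-ODE setting the abstract refers to, the following sketch recalls the kind of notation used in this line of work. The exact linear-length ODE and effective limit schemata used in the paper are defined in the article itself; the formulas below, and the names \Delta, \ell, h and g, are only illustrative assumptions written in that style.

\[ \Delta f(x) = f(x+1) - f(x), \qquad \ell(x) = \lceil \log_2(x+1) \rceil, \]
\[ f(x+1, \mathbf{y}) - f(x, \mathbf{y}) = \bigl(\ell(x+1) - \ell(x)\bigr)\, h\bigl(f(x, \mathbf{y}), x, \mathbf{y}\bigr), \]
\[ f(\mathbf{x}) = \lim_{n \to \infty} g(\mathbf{x}, 2^{n}) \quad \text{with} \quad \bigl|f(\mathbf{x}) - g(\mathbf{x}, 2^{n})\bigr| \le 2^{-n}. \]

The first line recalls the finite-difference (discrete) derivative and the binary length of an integer; the second sketches a length ODE, in which f is updated only when the length of x increases, hence about O(\log x) times, with "linear" restricting how h may depend on f; the third sketches an effective limit schema with an explicit error bound.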
