Paper Title


A Generalization of Self-Improving Algorithms

Paper Authors

Siu-Wing Cheng, Man-Kwun Chiu, Kai Jin, Man Ting Wong

Abstract


Ailon et al. [SICOMP'11] proposed self-improving algorithms for sorting and Delaunay triangulation (DT) when the input instances $x_1,\cdots,x_n$ follow some unknown \emph{product distribution}. That is, $x_i$ comes from a fixed unknown distribution $\mathsf{D}_i$, and the $x_i$'s are drawn independently. After spending $O(n^{1+\varepsilon})$ time in a learning phase, the subsequent expected running time is $O((n+ H)/\varepsilon)$, where $H \in \{H_\mathrm{S},H_\mathrm{DT}\}$, and $H_\mathrm{S}$ and $H_\mathrm{DT}$ are the entropies of the distributions of the sorting and DT output, respectively. In this paper, we allow dependence among the $x_i$'s under the \emph{group product distribution}. There is a hidden partition of $[1,n]$ into groups; the $x_i$'s in the $k$-th group are fixed unknown functions of the same hidden variable $u_k$; and the $u_k$'s are drawn from an unknown product distribution. We describe self-improving algorithms for sorting and DT under this model when the functions that map $u_k$ to $x_i$'s are well-behaved. After an $O(\mathrm{poly}(n))$-time training phase, we achieve $O(n + H_\mathrm{S})$ and $O(n\alpha(n) + H_\mathrm{DT})$ expected running times for sorting and DT, respectively, where $\alpha(\cdot)$ is the inverse Ackermann function.
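To make the input model concrete, the following is a minimal sketch (with hypothetical function and variable names) of how an instance is generated under the group product distribution described above: $[1,n]$ is partitioned into hidden groups, every $x_i$ in the $k$-th group is a fixed function of the same hidden variable $u_k$, and the $u_k$'s are drawn independently.

```python
import random

def sample_instance(groups, funcs):
    """Sample one input instance under a group product distribution.

    groups: list of index lists partitioning range(n) (the hidden partition).
    funcs:  funcs[i] is the fixed function mapping the group's hidden
            variable u_k to the value x_i.
    """
    n = sum(len(g) for g in groups)
    x = [None] * n
    for group in groups:
        # u_k is drawn independently for each group; uniform on [0, 1)
        # stands in for an arbitrary unknown distribution here.
        u_k = random.random()
        for i in group:
            # All x_i in this group depend on the same u_k,
            # so values within a group are dependent.
            x[i] = funcs[i](u_k)
    return x

# Example: indices 0 and 1 share one hidden variable; index 2 has its own.
groups = [[0, 1], [2]]
funcs = {0: lambda u: u, 1: lambda u: 2 * u + 1, 2: lambda u: -u}
x = sample_instance(groups, funcs)
```

Note that with a single group containing all of $[1,n]$, every $x_i$ is a function of one hidden variable, whereas with $n$ singleton groups the model degenerates to the independent product distribution of Ailon et al.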
