Paper Title
Efficiency Guarantees for Parallel Incremental Algorithms under Relaxed Schedulers
Paper Authors
Paper Abstract
Several classic problems in graph processing and computational geometry are solved via incremental algorithms, which split computation into a series of small tasks acting on shared state, which gets updated progressively. While the sequential variant of such algorithms usually specifies a fixed (but sometimes random) order in which the tasks should be performed, a standard approach to parallelizing such algorithms is to relax this constraint to allow for out-of-order parallel execution. This is the case for parallel implementations of Dijkstra's single-source shortest-paths algorithm (SSSP), and for parallel Delaunay mesh triangulation. While many software frameworks parallelize incremental computation in this way, it is still not well understood whether this relaxed ordering approach can still provide any complexity guarantees. In this paper, we address this problem, and analyze the efficiency guarantees provided by a range of incremental algorithms when parallelized via relaxed schedulers. We show that, for algorithms such as Delaunay mesh triangulation and sorting by insertion, schedulers with a maximum relaxation factor of $k$ in terms of the maximum priority inversion allowed will introduce a maximum amount of wasted work of $O(\log(n)\,\mathrm{poly}(k))$, where $n$ is the number of tasks to be executed. For SSSP, we show that the additional work is $O(\mathrm{poly}(k)\, d_{\max} / w_{\min})$, where $d_{\max}$ is the maximum distance between two nodes, and $w_{\min}$ is the minimum such distance. In practical settings where $n \gg k$, this suggests that the overheads of relaxation will be outweighed by the improved scalability of the relaxed scheduler. On the negative side, we provide lower bounds showing that certain algorithms will inherently incur a non-trivial amount of wasted work due to scheduler relaxation, even for relatively benign relaxed schedulers.