Paper Title

Neural Enhanced Belief Propagation on Factor Graphs

Authors

Victor Garcia Satorras, Max Welling

Abstract

A graphical model is a structured representation of locally dependent random variables. A traditional method to reason over these random variables is to perform inference using belief propagation. When provided with the true data generating process, belief propagation can infer the optimal posterior probability estimates in tree structured factor graphs. However, in many cases we may only have access to a poor approximation of the data generating process, or we may face loops in the factor graph, leading to suboptimal estimates. In this work we first extend graph neural networks to factor graphs (FG-GNN). We then propose a new hybrid model that runs conjointly a FG-GNN with belief propagation. The FG-GNN receives as input messages from belief propagation at every inference iteration and outputs a corrected version of them. As a result, we obtain a more accurate algorithm that combines the benefits of both belief propagation and graph neural networks. We apply our ideas to error correction decoding tasks, and we show that our algorithm can outperform belief propagation for LDPC codes on bursty channels.
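The abstract describes one hybrid iteration: belief propagation produces messages on the factor graph, and the FG-GNN takes those messages as input and outputs corrected versions. The following is a minimal sketch of that control flow on a toy two-variable factor graph. The sum-product update is standard; `neural_correction` is a hypothetical stand-in for the learned FG-GNN (in the paper this is a trained graph neural network, not a fixed adjustment), and all numbers are illustrative.

```python
import numpy as np

def normalize(m):
    """Renormalize a message so it is a valid categorical distribution."""
    return m / m.sum()

# Tiny factor graph: two binary variables x1, x2 joined by one pairwise factor.
pairwise = np.array([[0.9, 0.1],
                     [0.1, 0.9]])   # factor favoring x1 == x2
prior_x1 = np.array([0.8, 0.2])     # unary evidence on x1

def bp_message_factor_to_x2(msg_x1_to_f):
    """Sum-product rule: marginalize the pairwise factor over x1."""
    return normalize(pairwise.T @ msg_x1_to_f)

def neural_correction(msg, residual):
    """Stand-in for the FG-GNN: the paper uses a learned network that
    refines each BP message; here a fixed toy residual plays that role."""
    return normalize(msg + residual)

# One hybrid iteration: run BP, then let the "network" correct the message.
msg = bp_message_factor_to_x2(prior_x1)
corrected = neural_correction(msg, residual=np.array([0.05, -0.05]))
```

Running the sketch, `msg` is the plain BP estimate `[0.74, 0.26]` and `corrected` is the (toy) refined message `[0.79, 0.21]`; in the actual model the residual would come from the FG-GNN, trained so the corrected messages yield better posteriors than BP alone, e.g. on loopy LDPC factor graphs.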
