Paper Title
Pruning and Quantizing Neural Belief Propagation Decoders
Paper Authors
Paper Abstract
We consider near maximum-likelihood (ML) decoding of short linear block codes. In particular, we propose a novel decoding approach based on neural belief propagation (NBP) decoding recently introduced by Nachmani et al. in which we allow a different parity-check matrix in each iteration of the algorithm. The key idea is to consider NBP decoding over an overcomplete parity-check matrix and use the weights of NBP as a measure of the importance of the check nodes (CNs) to decoding. The unimportant CNs are then pruned. In contrast to NBP, which performs decoding on a given fixed parity-check matrix, the proposed pruning-based neural belief propagation (PB-NBP) typically results in a different parity-check matrix in each iteration. For a given complexity in terms of CN evaluations, we show that PB-NBP yields significant performance improvements with respect to NBP. We apply the proposed decoder to the decoding of a Reed-Muller code, a short low-density parity-check (LDPC) code, and a polar code. PB-NBP outperforms NBP decoding over an overcomplete parity-check matrix by 0.27-0.31 dB while reducing the number of required CN evaluations by up to 97%. For the LDPC code, PB-NBP outperforms conventional belief propagation with the same number of CN evaluations by 0.52 dB. We further extend the pruning concept to offset min-sum decoding and introduce a pruning-based neural offset min-sum (PB-NOMS) decoder, for which we jointly optimize the offsets and the quantization of the messages and offsets. We demonstrate performance 0.5 dB from ML decoding with 5-bit quantization for the Reed-Muller code.
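The core pruning idea in the abstract can be illustrated with a small sketch: after training NBP over an overcomplete parity-check matrix, rank the check nodes (CNs) by the magnitude of their learned weights and keep only the most important CNs in each iteration, so that each iteration effectively uses a different parity-check matrix. The function name, data layout, and the sum-of-absolute-weights importance metric below are illustrative assumptions, not the paper's exact procedure.

```python
def prune_check_nodes(cn_weights, keep_per_iteration):
    """Sketch of CN pruning for PB-NBP (names and metric are assumptions).

    cn_weights: list over NBP iterations; each entry maps a CN index
    (a row of the overcomplete parity-check matrix) to the learned
    weights on that CN's edges. Returns the CN indices kept per iteration.
    """
    kept = []
    for weights in cn_weights:
        # Assumed importance measure: sum of absolute learned edge weights.
        importance = {cn: sum(abs(w) for w in edge_ws)
                      for cn, edge_ws in weights.items()}
        # Keep the strongest CNs; the rest are pruned for this iteration.
        ranked = sorted(importance, key=importance.get, reverse=True)
        kept.append(sorted(ranked[:keep_per_iteration]))
    return kept

# Toy example: 4 CNs, 2 NBP iterations, keep the 2 strongest per iteration.
weights_iter1 = {0: [0.9, 1.1], 1: [0.1, 0.05], 2: [0.7, 0.8], 3: [0.2, 0.1]}
weights_iter2 = {0: [0.1, 0.2], 1: [1.0, 0.9], 2: [0.6, 0.7], 3: [0.05, 0.1]}
kept = prune_check_nodes([weights_iter1, weights_iter2], keep_per_iteration=2)
print(kept)  # → [[0, 2], [1, 2]]: a different CN set (parity-check matrix) per iteration
```

The point of the sketch is only the mechanism: the learned weights serve as an importance score, and pruning by that score yields iteration-dependent parity-check matrices, unlike plain NBP on one fixed matrix.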