Paper Title

A recipe of training neural network-based LDPC decoders

Paper Authors

Guangwen Li, Xiao Yu

Paper Abstract

It is known that belief propagation decoding variants of LDPC codes can easily be unrolled as neural networks by flexibly assigning different weights to the message-passing edges. In this paper we focus on how to determine these weights, in the form of trainable parameters, within a deep learning framework. First, a new method is proposed to generate high-quality training data by exploiting an approximation to the targeted mixture density. Then, a strong positive correlation between the training loss and the decoding metrics is exposed by tracing the training evolution curves. Lastly, to facilitate training convergence and reduce decoding complexity, we highlight the necessity of slashing the number of trainable parameters while emphasizing the locations of the surviving ones, which is justified in extensive simulations.
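
The unrolling mentioned in the abstract follows the familiar neural (weighted) min-sum line of work. The sketch below is a minimal, non-optimized PyTorch illustration of a decoder with one trainable weight per message-passing edge per iteration; it is not the authors' implementation, and the class name, the per-edge-per-iteration parameterization, and the toy parity-check matrix are all assumptions made for illustration.

```python
# Minimal sketch of a neural min-sum LDPC decoder with trainable edge weights.
# Hypothetical illustration, not the paper's implementation.
import torch
import torch.nn as nn


class NeuralMinSumDecoder(nn.Module):
    def __init__(self, H: torch.Tensor, n_iters: int = 5):
        super().__init__()
        self.H = H                                  # (m, n) binary parity-check matrix
        self.n_iters = n_iters
        n_edges = int(H.sum().item())
        # One trainable weight per edge per iteration (assumed parameterization).
        self.edge_weights = nn.Parameter(torch.ones(n_iters, n_edges))
        # (check, variable) index of each edge in H.
        self.chk_idx, self.var_idx = torch.nonzero(H, as_tuple=True)

    def forward(self, llr: torch.Tensor) -> torch.Tensor:
        """llr: (batch, n) channel LLRs; returns soft output LLRs."""
        # Variable-to-check messages, initialized with channel LLRs.
        v2c = llr[:, self.var_idx].clone()          # (batch, n_edges)
        c2v = torch.zeros_like(v2c)
        for it in range(self.n_iters):
            # Check-node update (min-sum), scaled by the trainable edge weight.
            for e in range(v2c.shape[1]):
                mask = (self.chk_idx == self.chk_idx[e])
                mask[e] = False                      # exclude the outgoing edge
                msgs = v2c[:, mask]                  # (batch, check_degree - 1)
                sign = torch.prod(torch.sign(msgs), dim=1)
                mag = torch.min(torch.abs(msgs), dim=1).values
                c2v[:, e] = self.edge_weights[it, e] * sign * mag
            # Variable-node update: channel LLR plus incoming extrinsic messages.
            for e in range(v2c.shape[1]):
                mask = (self.var_idx == self.var_idx[e])
                mask[e] = False
                v2c[:, e] = llr[:, self.var_idx[e]] + c2v[:, mask].sum(dim=1)
        # Final marginal LLRs.
        out = llr.clone()
        for v in range(self.H.shape[1]):
            out[:, v] = llr[:, v] + c2v[:, self.var_idx == v].sum(dim=1)
        return out


# Toy usage: the (7,4) Hamming code parity-check matrix and random channel LLRs.
H = torch.tensor([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]], dtype=torch.float32)
decoder = NeuralMinSumDecoder(H, n_iters=3)
llr = torch.randn(2, 7)            # batch of 2 received LLR vectors
soft_out = decoder(llr)            # (2, 7) output LLRs; hard decision: soft_out < 0
```

Training such a decoder would typically minimize a cross-entropy loss between the output LLRs and the transmitted bits, which is why the training loss can track bit-error-rate-style decoding metrics; pruning most of the weights, as the abstract advocates, would amount to keeping only a small subset of `edge_weights` trainable and fixing the rest to one.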
