Title


Optimizing Serially Concatenated Neural Codes with Classical Decoders

Authors

Jannis Clausius, Marvin Geiselhart, Stephan ten Brink

Abstract


For improving short-length codes, we demonstrate that classical decoders can also be used with real-valued, neural encoders, i.e., deep-learning-based codeword sequence generators. Here, the classical decoder can be a valuable tool to gain insights into these neural codes and shed light on weaknesses. Specifically, the turbo-autoencoder is a recently developed channel coding scheme in which both encoder and decoder are replaced by neural networks. We first show that the limited receptive field of convolutional neural network (CNN)-based codes enables the application of the BCJR algorithm to optimally decode them with feasible computational complexity. These maximum a posteriori (MAP) component decoders are then used to form classical (iterative) turbo decoders for parallel or serially concatenated CNN encoders, offering close-to-maximum-likelihood (ML) decoding of the learned codes. To the best of our knowledge, this is the first time that a classical decoding algorithm is applied to a non-trivial, real-valued neural code. Furthermore, as the BCJR algorithm is fully differentiable, it is possible to train, or fine-tune, the neural encoder in an end-to-end fashion.
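The key observation above — that a CNN encoder's limited receptive field induces a finite-state trellis over which the BCJR (forward-backward) algorithm can run — can be illustrated with a small sketch. The encoder below is a hypothetical stand-in (a fixed nonlinearity over a 3-bit window, not the paper's trained network), and the memory `M`, weights, and noise variance are illustrative assumptions; the trellis construction and log-domain BCJR recursions are the generic technique.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 2              # assumed encoder memory: receptive field = M past bits + current bit
N_STATES = 2 ** M  # trellis states enumerate the M most recent bits

def toy_encoder(window):
    """Hypothetical real-valued 'neural' encoder: maps an (M+1)-bit window
    to one real symbol via a fixed nonlinearity (stand-in for a trained CNN)."""
    w = np.array([0.9, -1.3, 0.7])      # fixed 'learned' weights (illustrative)
    x = 2.0 * np.asarray(window) - 1.0  # bits {0,1} -> symbols {-1,+1}
    return np.tanh(w @ x)

def bcjr_llrs(y, sigma2):
    """Log-domain BCJR over the trellis induced by the encoder's receptive
    field. Returns per-bit posterior LLRs assuming equiprobable input bits."""
    n = len(y)
    # branch metrics gamma[t, s, b] = log p(y_t | state s, input bit b)  (up to const.)
    gamma = np.full((n, N_STATES, 2), -np.inf)
    for t in range(n):
        for s in range(N_STATES):
            past = [(s >> i) & 1 for i in range(M)]  # state bits, most recent first
            for b in (0, 1):
                sym = toy_encoder(past[::-1] + [b])  # window: oldest ... current
                gamma[t, s, b] = -(y[t] - sym) ** 2 / (2 * sigma2)
    nxt = lambda s, b: ((s << 1) | b) & (N_STATES - 1)  # state transition
    # forward (alpha) and backward (beta) recursions in the log domain
    alpha = np.full((n + 1, N_STATES), -np.inf); alpha[0, 0] = 0.0  # zero-padded start
    beta = np.full((n + 1, N_STATES), -np.inf); beta[n, :] = 0.0    # unterminated end
    for t in range(n):
        for s in range(N_STATES):
            for b in (0, 1):
                s2 = nxt(s, b)
                alpha[t + 1, s2] = np.logaddexp(alpha[t + 1, s2],
                                                alpha[t, s] + gamma[t, s, b])
    for t in range(n - 1, -1, -1):
        for s in range(N_STATES):
            for b in (0, 1):
                beta[t, s] = np.logaddexp(beta[t, s],
                                          gamma[t, s, b] + beta[t + 1, nxt(s, b)])
    # combine into posterior log-likelihood ratios per input bit
    llr = np.empty(n)
    for t in range(n):
        num = den = -np.inf
        for s in range(N_STATES):
            den = np.logaddexp(den, alpha[t, s] + gamma[t, s, 0] + beta[t + 1, nxt(s, 0)])
            num = np.logaddexp(num, alpha[t, s] + gamma[t, s, 1] + beta[t + 1, nxt(s, 1)])
        llr[t] = num - den
    return llr

# sanity check: on a noiseless channel the MAP decisions recover the bits
bits = rng.integers(0, 2, 16)
padded = np.concatenate([np.zeros(M, dtype=int), bits])
tx = np.array([toy_encoder(padded[t:t + M + 1]) for t in range(len(bits))])
hard = (bcjr_llrs(tx, 1e-3) > 0).astype(int)
print("bit errors:", int(np.sum(hard != bits)))  # → bit errors: 0
```

Note the complexity: the recursions cost O(n · 2^M) regardless of how the window-to-symbol mapping is computed, which is why a small receptive field keeps optimal decoding feasible. Since every step is built from smooth operations (here `logaddexp`), the same computation written in an autodiff framework yields gradients with respect to the encoder weights, enabling the end-to-end training mentioned in the abstract.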
