Paper Title
Infomax Neural Joint Source-Channel Coding via Adversarial Bit Flip
Paper Authors
Paper Abstract
Although Shannon theory states that it is asymptotically optimal to separate source and channel coding into two independent processes, in many practical communication scenarios this decomposition is limited by the finite bit-length and computational power available for decoding. Recently, neural joint source-channel coding (NECST) was proposed to sidestep this problem. While it leverages advances in amortized inference and deep learning to improve the encoding and decoding process, it still cannot always achieve compelling results in terms of compression and error-correction performance, due to the limited robustness of its learned coding networks. In this paper, motivated by the inherent connections between neural joint source-channel coding and discrete representation learning, we propose a novel regularization method called Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of the neural joint source-channel coding scheme. More specifically, on the encoder side, we propose to explicitly maximize the mutual information between the codeword and the data, while on the decoder side, the amortized reconstruction is regularized within an adversarial framework. Extensive experiments conducted on various real-world datasets demonstrate that our IABF achieves state-of-the-art performance on both compression and error-correction benchmarks and outperforms the baselines by a significant margin.
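The adversarial bit-flip idea in the abstract can be illustrated with a minimal sketch: search for the single codeword bit whose flip most degrades reconstruction, then use that worst-case codeword to regularize the decoder. Everything here (the linear decoder, the exhaustive single-bit search) is a hypothetical toy, not the paper's actual model:

```python
import numpy as np

def decode(code, W):
    # Toy linear decoder: reconstruct data from a binary codeword.
    return W @ code

def adversarial_bit_flip(code, data, W):
    """Return the codeword with the single bit flipped that most
    increases the decoder's reconstruction error (worst-case flip)."""
    worst_code = code
    worst_err = np.sum((decode(code, W) - data) ** 2)
    for i in range(len(code)):
        flipped = code.copy()
        flipped[i] = 1.0 - flipped[i]  # flip bit i
        err = np.sum((decode(flipped, W) - data) ** 2)
        if err > worst_err:
            worst_code, worst_err = flipped, err
    return worst_code

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))                    # hypothetical decoder weights
code = rng.integers(0, 2, size=8).astype(float)
data = decode(code, W)                         # data the codeword reconstructs exactly
adv = adversarial_bit_flip(code, data, W)
num_flipped = int(np.sum(adv != code))         # exactly one bit differs
adv_err = float(np.sum((decode(adv, W) - data) ** 2))
```

A training loop would add the reconstruction loss on `adv` as a regularizer, pushing the decoder to stay accurate even under worst-case single-bit corruption of the codeword.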