Paper Title

Near-imperceptible Neural Linguistic Steganography via Self-Adjusting Arithmetic Coding

Paper Authors

Jiaming Shen, Heng Ji, Jiawei Han

Paper Abstract

Linguistic steganography studies how to hide secret messages in natural-language cover texts. Traditional methods aim to transform a secret message into an innocent-looking text via lexical substitution or syntactic modification. Recently, advances in neural language models (LMs) have enabled us to generate cover text directly, conditioned on the secret message. In this study, we present a new linguistic steganography method that encodes secret messages using self-adjusting arithmetic coding based on a neural language model. We formally analyze the statistical imperceptibility of this method and empirically show that it outperforms the previous state-of-the-art methods on four datasets by 15.3% and 38.9% in terms of bits/word and KL metrics, respectively. Finally, human evaluations show that 51% of the generated cover texts can indeed fool eavesdroppers.
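
To make the core mechanism concrete, below is a minimal Python sketch (not the authors' implementation) of arithmetic-coding-based steganographic generation: the secret bitstream is read as a binary fraction in [0, 1), and at each step the generator emits the token whose cumulative-probability slice under the language model contains that fraction, then narrows the working interval to that slice. The `next_token_probs` function here is a hypothetical stand-in for a real neural LM; the paper's self-adjusting variant additionally adapts the coding precision, which this sketch omits.

```python
# Sketch of arithmetic-coding-based steganographic text generation.
# The secret bits define a point in [0, 1); cover tokens are chosen so
# that their nested probability intervals always contain that point.

from typing import Callable, Dict, List

def bits_to_fraction(bits: List[int]) -> float:
    """Interpret a bit list as a binary fraction in [0, 1)."""
    return sum(b / (2 ** (i + 1)) for i, b in enumerate(bits))

def encode_message(
    bits: List[int],
    next_token_probs: Callable[[List[str]], Dict[str, float]],
    max_len: int = 20,
) -> List[str]:
    """Generate cover tokens whose arithmetic-coding interval encodes `bits`."""
    target = bits_to_fraction(bits)
    low, high = 0.0, 1.0
    cover: List[str] = []
    for _ in range(max_len):
        probs = next_token_probs(cover)  # LM distribution given context
        cum = low
        for token, p in probs.items():
            width = (high - low) * p
            if cum <= target < cum + width:   # target falls in this token's slice
                low, high = cum, cum + width  # zoom into the chosen interval
                cover.append(token)
                break
            cum += width
        # Stop once the interval is narrow enough to pin down the bitstream.
        if high - low < 2 ** -(len(bits) + 1):
            break
    return cover

if __name__ == "__main__":
    # Toy, context-independent "LM" just to exercise the loop.
    toy_probs = lambda ctx: {"the": 0.5, "cat": 0.3, "sat": 0.2}
    print(encode_message([1, 0, 1, 1], toy_probs))
```

A receiver who shares the same LM can recover the bits by replaying the same interval-narrowing procedure on the cover tokens and reading off the binary expansion of the resulting interval, which is what makes the scheme a (shared-key) steganographic channel rather than mere sampling.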
