Paper Title
Reward Modeling for Mitigating Toxicity in Transformer-based Language Models
Paper Authors
Paper Abstract
Transformer-based language models are able to generate fluent text and can be efficiently adapted to a wide range of natural language generation tasks. However, language models pretrained on large unlabeled web text corpora have been shown to degenerate into toxic content and to exhibit social biases, hindering their safe deployment. Various detoxification methods have been proposed to mitigate language model toxicity; however, these methods struggle to detoxify language models when conditioned on prompts that mention specific social identities related to gender, race, or religion. In this study, we propose Reinforce-Detoxify, a reinforcement-learning-based method for mitigating toxicity in language models. We address the challenge of safety in language models and propose a new reward model that is able to detect toxic content and mitigate unintended bias toward social identities in toxicity prediction. The experiments demonstrate that Reinforce-Detoxify outperforms existing detoxification approaches on automatic evaluation metrics, indicating that our approach can detoxify language models while being less prone to unintended bias toward social identities in the generated content.
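The abstract describes fine-tuning a language model with reinforcement learning against a reward model that scores toxicity. The sketch below illustrates that general idea with a plain REINFORCE update; the model names (`gpt2`, `unitary/toxic-bert`), the reward definition, and all hyperparameters are illustrative assumptions, not the paper's actual reward model or training setup (in particular, the paper's reward model is trained to mitigate identity-related bias, which this sketch does not reproduce).

```python
# Minimal sketch of RL-based detoxification: the language model (policy) is updated
# with a REINFORCE objective whose reward is the non-toxicity score of its own samples.
# All model choices and hyperparameters here are illustrative assumptions.
import torch
from transformers import (AutoModelForCausalLM, AutoModelForSequenceClassification,
                          AutoTokenizer)

device = "cuda" if torch.cuda.is_available() else "cpu"

# Policy: the language model being detoxified (GPT-2 used only as an example).
policy_tok = AutoTokenizer.from_pretrained("gpt2")
policy = AutoModelForCausalLM.from_pretrained("gpt2").to(device)

# Reward model: any toxicity classifier; "unitary/toxic-bert" is an illustrative stand-in.
reward_tok = AutoTokenizer.from_pretrained("unitary/toxic-bert")
reward_model = AutoModelForSequenceClassification.from_pretrained("unitary/toxic-bert").to(device)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-5)

def non_toxicity_reward(texts):
    """Reward = 1 - P(toxic), so less toxic continuations earn a higher reward."""
    with torch.no_grad():
        enc = reward_tok(texts, return_tensors="pt", padding=True, truncation=True).to(device)
        toxic_prob = torch.sigmoid(reward_model(**enc).logits[:, 0])
    return 1.0 - toxic_prob

def reinforce_step(prompt):
    """One REINFORCE update: sample a continuation, score it, scale its log-likelihood."""
    enc = policy_tok(prompt, return_tensors="pt").to(device)
    prompt_len = enc["input_ids"].shape[1]
    sampled = policy.generate(**enc, do_sample=True, max_new_tokens=20,
                              pad_token_id=policy_tok.eos_token_id)
    text = policy_tok.decode(sampled[0, prompt_len:], skip_special_tokens=True)
    reward = non_toxicity_reward([prompt + text])[0]

    # Log-probability of the sampled continuation under the current policy.
    logits = policy(sampled).logits[:, :-1, :]
    log_probs = torch.log_softmax(logits, dim=-1)
    token_logp = log_probs.gather(-1, sampled[:, 1:].unsqueeze(-1)).squeeze(-1)
    cont_logp = token_logp[:, prompt_len - 1:].sum()

    loss = -reward * cont_logp  # maximize expected non-toxicity reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward.item()
```

In practice, methods of this kind typically add a KL penalty toward the original pretrained model to preserve fluency and use a more stable policy-gradient algorithm than plain REINFORCE; those details are omitted here for brevity.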