Title

Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation

Authors

Tianlu Wang, Xi Victoria Lin, Nazneen Fatema Rajani, Bryan McCann, Vicente Ordonez, Caiming Xiong

Abstract

Word embeddings derived from human-generated corpora inherit strong gender bias which can be further amplified by downstream models. Some commonly adopted debiasing approaches, including the seminal Hard Debias algorithm, apply post-processing procedures that project pre-trained word embeddings into a subspace orthogonal to an inferred gender subspace. We discover that semantic-agnostic corpus regularities such as word frequency captured by the word embeddings negatively impact the performance of these algorithms. We propose a simple but effective technique, Double Hard Debias, which purifies the word embeddings against such corpus regularities prior to inferring and removing the gender subspace. Experiments on three bias mitigation benchmarks show that our approach preserves the distributional semantics of the pre-trained word embeddings while reducing gender bias to a significantly larger degree than prior approaches.
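The abstract describes two projection steps: first removing a frequency-related corpus direction from the embeddings, then inferring and removing a gender direction. The NumPy sketch below illustrates that general idea under simplifying assumptions: the frequency direction is taken to be a single, manually chosen top principal component (the `freq_component_idx` parameter is hypothetical), and the gender direction is approximated by the he-she difference vector. The paper's actual selection of both directions is more involved than this illustration.

```python
import numpy as np

def project_out(vectors: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component along `direction` from each row vector."""
    d = direction / np.linalg.norm(direction)
    return vectors - np.outer(vectors @ d, d)

def double_hard_debias_sketch(emb: dict, freq_component_idx: int = 1) -> dict:
    """Minimal sketch: purify against a frequency direction, then hard-debias.

    `emb` maps words to vectors and must contain "he" and "she".
    The choice of `freq_component_idx` here is an illustrative assumption.
    """
    words = list(emb)
    V = np.stack([emb[w] for w in words])

    # 1) Center the embeddings and compute their principal components; the
    #    dominant ones are associated with frequency-related regularities.
    Vc = V - V.mean(axis=0)
    _, _, components = np.linalg.svd(Vc, full_matrices=False)
    freq_dir = components[freq_component_idx]

    # 2) Purify: project out the frequency direction before debiasing.
    V_pure = project_out(Vc, freq_dir)

    # 3) Infer a gender direction in the purified space (here simply the
    #    he-she difference) and project it out (the Hard Debias step).
    purified = dict(zip(words, V_pure))
    gender_dir = purified["he"] - purified["she"]
    V_debiased = project_out(V_pure, gender_dir)

    return dict(zip(words, V_debiased))
```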
