Paper Title
Fairness by Learning Orthogonal Disentangled Representations
Paper Authors
Paper Abstract
Learning discriminative, powerful representations is a crucial step for machine learning systems. Introducing invariance against arbitrary nuisance or sensitive attributes while performing well on specific tasks is an important problem in representation learning. This is mostly approached by purging the sensitive information from learned representations. In this paper, we propose a novel disentanglement approach to the invariant representation problem. We disentangle the meaningful and sensitive representations by enforcing orthogonality constraints as a proxy for independence. We explicitly enforce the meaningful representation to be agnostic to sensitive information by entropy maximization. The proposed approach is evaluated on five publicly available datasets and compared with state-of-the-art methods for learning fairness and invariance, achieving state-of-the-art performance on three datasets and comparable performance on the rest. Further, we perform an ablation study to evaluate the effect of each component.
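To make the two loss terms described in the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation: the orthogonality term penalizes the squared cosine similarity between the meaningful code and the sensitive code, and the entropy term pushes a sensitive-attribute classifier reading the meaningful code toward a uniform prediction. The names (`DisentangledEncoder`, `orthogonality_loss`, `negative_entropy`), the architecture, the dimensions, and the absence of loss weighting are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledEncoder(nn.Module):
    """Hypothetical encoder with two heads: a meaningful (target) code z_t
    and a sensitive code z_s, plus a classifier that tries to read the
    sensitive attribute from z_t (used for the entropy term)."""
    def __init__(self, in_dim=128, code_dim=64, n_sensitive=2):
        super().__init__()
        self.target_head = nn.Linear(in_dim, code_dim)
        self.sensitive_head = nn.Linear(in_dim, code_dim)
        self.sensitive_clf = nn.Linear(code_dim, n_sensitive)

    def forward(self, x):
        z_t = self.target_head(x)
        z_s = self.sensitive_head(x)
        return z_t, z_s, self.sensitive_clf(z_t)

def orthogonality_loss(z_t, z_s):
    # Squared cosine similarity per sample: zero iff the two codes are
    # orthogonal, so minimizing it acts as a proxy for independence.
    z_t = F.normalize(z_t, dim=1)
    z_s = F.normalize(z_s, dim=1)
    return ((z_t * z_s).sum(dim=1) ** 2).mean()

def negative_entropy(logits):
    # Returns -H(p) of the predicted sensitive-attribute distribution;
    # minimizing it maximizes the entropy, i.e. makes the meaningful
    # code agnostic to the sensitive information.
    log_p = F.log_softmax(logits, dim=1)
    return (log_p.exp() * log_p).sum(dim=1).mean()

# Toy forward/backward pass on random data (equal loss weights assumed).
model = DisentangledEncoder()
x = torch.randn(32, 128)
z_t, z_s, s_logits = model(x)
loss = orthogonality_loss(z_t, z_s) + negative_entropy(s_logits)
loss.backward()
```

In a full training loop, these two terms would be added (with tuned weights) to the task loss on z_t, so the meaningful code stays discriminative for the target task while being decorrelated from, and uninformative about, the sensitive attribute.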