Paper Title
Active Fairness Instead of Unawareness
Paper Authors
Paper Abstract
The risk that AI systems could promote discrimination by reproducing and enforcing unwanted biases in data has been broadly discussed in research and society. Many current legal standards demand that sensitive attributes be removed from the data in order to achieve "fairness through unawareness". We argue that this approach is obsolete in the era of big data, where large datasets with highly correlated attributes are common. On the contrary, we propose the active use of sensitive attributes with the purpose of observing and controlling any kind of discrimination, thus leading to fair results.
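To make the abstract's argument concrete, the following minimal sketch (not from the paper; synthetic data with hypothetical variable names such as `group`, `proxy`, and `skill`) illustrates both points: a correlated proxy feature can reconstruct a removed sensitive attribute, and keeping the attribute is what makes the resulting disparity observable, e.g. as a gap in positive prediction rates between groups.

```python
# Minimal sketch of "unawareness" failing under correlated attributes,
# and of the sensitive attribute being needed to measure the disparity.
# All names and the data-generating process are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute, a highly correlated proxy (e.g. a zip-code-like
# variable), and one legitimate predictor.
group = rng.integers(0, 2, size=n)            # sensitive attribute: 0 / 1
proxy = group + rng.normal(0, 0.3, size=n)    # strongly correlated with group
skill = rng.normal(0, 1, size=n)              # legitimate feature
# Historical labels carry a group-dependent bias.
y = (skill + 0.8 * group + rng.normal(0, 0.5, size=n) > 0.4).astype(int)

# "Unaware" model: sensitive attribute dropped, proxy kept (the big-data case).
X_unaware = np.column_stack([skill, proxy])
clf = LogisticRegression().fit(X_unaware, y)
pred = clf.predict(X_unaware)

# The proxy alone reconstructs the removed sensitive attribute almost perfectly.
proxy_clf = LogisticRegression().fit(proxy.reshape(-1, 1), group)
print("group recoverable from proxy:", proxy_clf.score(proxy.reshape(-1, 1), group))

# The "unaware" model still yields unequal positive rates per group; observing
# (and then controlling) this gap requires access to the sensitive attribute.
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print("positive-rate gap between groups:", abs(rate_1 - rate_0))
```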