Paper Title
Tensions Between the Proxies of Human Values in AI
Paper Authors
Paper Abstract
Motivated by mitigating potentially harmful impacts of technologies, the AI community has formulated and accepted mathematical definitions for certain pillars of accountability: e.g. privacy, fairness, and model transparency. Yet, we argue this is fundamentally misguided because these definitions are imperfect, siloed constructions of the human values they hope to proxy, while giving the guise that those values are sufficiently embedded in our technologies. Under popularized methods, tensions arise when practitioners attempt to achieve each pillar of fairness, privacy, and transparency in isolation or simultaneously. In this position paper, we push for redirection. We argue that the AI community needs to consider all the consequences of choosing certain formulations of these pillars -- not just the technical incompatibilities, but also the effects within the context of deployment. We point towards sociotechnical research for frameworks for the latter, but push for broader efforts into implementing these in practice.