Paper Title

The Tragedy of the AI Commons

Authors

Travis LaCroix, Aydin Mohseni

Abstract

Policy and guideline proposals for ethical artificial-intelligence research have proliferated in recent years. These are supposed to guide the socially responsible development of AI for the common good. However, there typically exist incentives for non-cooperation (i.e., non-adherence to such policies and guidelines), and these proposals often lack effective mechanisms to enforce their own normative claims. The situation just described constitutes a social dilemma: a situation in which no one has an individual incentive to cooperate, though mutual cooperation would lead to the best outcome for all involved. In this paper, we use stochastic evolutionary game dynamics to model this social dilemma in the context of the ethical development of artificial intelligence. This formalism allows us to isolate variables that may be intervened upon, thus providing actionable suggestions for increased cooperation amongst numerous stakeholders in AI. Our results show how stochastic effects can help make cooperation viable in such a scenario. They suggest that coordination for a common good should be attempted in smaller groups in which the cost of cooperation is low and the perceived risk of failure is high. This provides insight into the conditions under which we should expect such ethics proposals to be successful with regard to their scope, scale, and content.
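The modeling approach the abstract describes can be made concrete with a small simulation. The Python sketch below implements one standard member of the model class the paper draws on: a collective-risk social dilemma (in the style of Santos and Pacheco) evolving under a stochastic pairwise-comparison (Fermi) imitation process in a finite population. This is an illustration of the general technique, not a reconstruction of the paper's own model, and every parameter value below (Z, N, M, b, c, r, beta, mu) is an assumption chosen for the sketch.

from math import comb, exp
import random

random.seed(1)

# Illustrative parameters (assumptions for this sketch, not values from the paper).
Z = 100       # population size
N = 6         # size of the groups that play the collective-risk game
M = 3         # minimum number of cooperators needed to avert collective failure
b = 1.0       # endowment each player stands to lose
c = 0.1       # cost of cooperating, as a fraction of the endowment
r = 0.9       # perceived probability that the endowment is lost on failure
beta = 5.0    # intensity of selection in the Fermi imitation rule
mu = 0.01     # probability of random strategy exploration per step
steps = 200_000

def pi_D(k):
    """Defector payoff in a group containing k cooperators."""
    return b if k >= M else b * (1.0 - r)

def pi_C(k):
    """Cooperator payoff in a group containing k cooperators (itself included)."""
    return pi_D(k) - c * b

def avg_payoffs(i):
    """Expected payoffs (f_C, f_D) in a well-mixed population with i
    cooperators, averaging over hypergeometric sampling of the other
    N - 1 group members."""
    denom = comb(Z - 1, N - 1)
    f_C = f_D = 0.0
    for j in range(N):  # j = cooperators among the other N - 1 members
        if i >= 1:
            f_C += comb(i - 1, j) * comb(Z - i, N - 1 - j) / denom * pi_C(j + 1)
        if i <= Z - 1:
            f_D += comb(i, j) * comb(Z - 1 - i, N - 1 - j) / denom * pi_D(j)
    return f_C, f_D

def simulate():
    """Pairwise-comparison Moran process with exploration; returns the
    time-averaged fraction of cooperators."""
    i = Z // 2  # start from a half-cooperative population
    total = 0
    for _ in range(steps):
        if random.random() < mu:
            # A randomly chosen individual switches strategy.
            i += -1 if random.random() < i / Z else 1
        else:
            f_C, f_D = avg_payoffs(i)
            focal_is_C = random.random() < i / Z
            # Pick a role model uniformly from the rest of the population.
            if focal_is_C:
                model_is_C = random.random() < (i - 1) / (Z - 1)
            else:
                model_is_C = random.random() < i / (Z - 1)
            if focal_is_C != model_is_C:
                f_focal = f_C if focal_is_C else f_D
                f_model = f_C if model_is_C else f_D
                # Fermi rule: adopt the model's strategy with sigmoid probability.
                p = 1.0 / (1.0 + exp(-beta * (f_model - f_focal)))
                if random.random() < p:
                    i += 1 if model_is_C else -1
        total += i
    return total / (steps * Z)

if __name__ == "__main__":
    print(f"long-run cooperation level: {simulate():.3f}")

In this sketch, shrinking the group size N, lowering the cooperation cost c, or raising the perceived risk r each pushes the long-run cooperation level upward, which is the qualitative pattern the abstract reports: coordination for a common good fares best in smaller groups where cooperation is cheap and the perceived risk of failure is high.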
