Paper Title
Forks Over Knives: Predictive Inconsistency in Criminal Justice Algorithmic Risk Assessment Tools
Paper Authors
Paper Abstract
Big data and algorithmic risk prediction tools promise to improve criminal justice systems by reducing human biases and inconsistencies in decision making. Yet different, equally justifiable choices when developing, testing, and deploying these sociotechnical tools can lead to disparate predicted risk scores for the same individual. Synthesizing diverse perspectives from machine learning, statistics, sociology, criminology, law, philosophy, and economics, we conceptualize this phenomenon as predictive inconsistency. We describe sources of predictive inconsistency at different stages of algorithmic risk assessment tool development and deployment and consider how future technological developments may amplify predictive inconsistency. We argue, however, that in a diverse and pluralistic society we should not expect to completely eliminate predictive inconsistency. Instead, to bolster the legal, political, and scientific legitimacy of algorithmic risk prediction tools, we propose identifying and documenting relevant and reasonable "forking paths" to enable quantifiable, reproducible multiverse and specification curve analyses of predictive inconsistency at the individual level.
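The multiverse-style analysis the abstract proposes can be illustrated with a minimal sketch: enumerate equally justifiable "forking paths" (feature sets, variable codings, coefficient choices), score the same individual under every resulting specification, and measure the spread of the predicted risks. All data, feature names, and model choices below are illustrative assumptions, not the authors' actual tool.

```python
# Hypothetical sketch of a multiverse / specification-curve analysis of
# predictive inconsistency for a single individual. Features, codings,
# and weights are invented for illustration only.
import itertools
import math

# One individual's (synthetic) attributes.
individual = {"prior_arrests": 3, "age": 24, "employment": 1}

# Forking paths: different, equally justifiable development choices.
feature_sets = [
    ("prior_arrests", "age"),
    ("prior_arrests", "age", "employment"),
]
age_codings = ["raw", "inverted"]  # e.g. age vs. a "youthfulness" recoding
weights = [0.5, 1.0]               # strength given to the priors feature

def risk_score(ind, features, age_coding, w):
    """Toy logistic risk model for one specification."""
    z = 0.0
    for f in features:
        x = ind[f]
        if f == "age" and age_coding == "inverted":
            x = 50 - x             # youthfulness instead of raw age
        coef = w if f == "prior_arrests" else 0.05
        z += coef * x
    return 1.0 / (1.0 + math.exp(-(z - 2.0)))  # centered logistic link

# Enumerate the multiverse: one risk score per specification.
scores = {}
for spec in itertools.product(feature_sets, age_codings, weights):
    scores[spec] = risk_score(individual, *spec)

# The score range across specifications is one simple, quantifiable
# measure of individual-level predictive inconsistency.
spread = max(scores.values()) - min(scores.values())
print(f"{len(scores)} specifications, "
      f"risk range {min(scores.values()):.2f}-{max(scores.values()):.2f}, "
      f"spread {spread:.2f}")
```

Sorting the scores before plotting them against their specifications would give the familiar specification-curve view; the point of the sketch is only that the same person receives different predicted risks under defensible alternative choices.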