Paper Title
Analyzing the Machine Learning Conference Review Process
Paper Authors
Abstract
Mainstream machine learning conferences have seen a dramatic increase in the number of participants, along with a growing range of perspectives, in recent years. Members of the machine learning community are likely to overhear allegations ranging from the randomness of acceptance decisions to institutional bias. In this work, we critically analyze the review process through a comprehensive study of papers submitted to ICLR between 2017 and 2020. We quantify reproducibility/randomness in review scores and acceptance decisions, and examine whether scores correlate with paper impact. Our findings suggest strong institutional bias in accept/reject decisions, even after controlling for paper quality. Furthermore, we find evidence for a gender gap, with female authors receiving lower scores, lower acceptance rates, and fewer citations per paper than their male counterparts. We conclude our work with recommendations for future conference organizers.