Paper Title
Fast and Accurate Importance Weighting for Correcting Sample Bias
Paper Authors
Paper Abstract
Bias in datasets can be very detrimental to proper statistical estimation. In response to this problem, importance weighting methods have been developed to match any biased distribution to its corresponding target unbiased distribution. The seminal Kernel Mean Matching (KMM) method is still considered the state of the art in this research field. However, one of its main drawbacks is its computational burden on large datasets. Building on previous work by Huang et al. (2007) and de Mathelin et al. (2021), we derive a novel importance weighting algorithm which scales to large datasets by using a neural network to predict the instance weights. We show, on multiple public datasets and under various sample biases, that our proposed approach drastically reduces the computational time on large datasets while maintaining sample bias correction performance similar to other importance weighting methods. The proposed approach appears to be the only one able to provide relevant reweighting in a reasonable time for large datasets of up to two million data points.
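
The following is a minimal sketch, in PyTorch, of the general idea stated in the abstract: a small neural network predicts a positive weight for each biased (source) instance and is trained so that the reweighted biased sample matches the unbiased (target) sample under a kernel-based Maximum Mean Discrepancy (MMD) criterion. This is not the authors' implementation; the `WeightNet` architecture, the Gaussian-kernel bandwidth, the optimizer settings, and the full-batch MMD objective are illustrative assumptions (a scalable variant would presumably train on mini-batches rather than full kernel matrices).

```python
# Illustrative sketch only: not the method from the paper.
import torch
import torch.nn as nn

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of a and b.
    d2 = torch.cdist(a, b) ** 2
    return torch.exp(-d2 / (2.0 * sigma ** 2))

class WeightNet(nn.Module):
    # Maps an input instance to a positive importance weight.
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return torch.nn.functional.softplus(self.net(x)).squeeze(-1)

def fit_weights(x_biased, x_target, epochs=200, sigma=1.0, lr=1e-2):
    # Train the weight network to minimize the weighted MMD^2 between the
    # reweighted biased sample and the target (unbiased) sample.
    model = WeightNet(x_biased.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    n = x_biased.shape[0]
    for _ in range(epochs):
        w = model(x_biased)
        w = w / w.mean()  # keep weights on a comparable scale (mean 1)
        k_ss = gaussian_kernel(x_biased, x_biased, sigma)
        k_st = gaussian_kernel(x_biased, x_target, sigma)
        k_tt = gaussian_kernel(x_target, x_target, sigma)
        mmd2 = (w @ k_ss @ w) / n ** 2 \
             - 2.0 * (w @ k_st).mean() / n \
             + k_tt.mean()
        opt.zero_grad()
        mmd2.backward()
        opt.step()
    return model

if __name__ == "__main__":
    torch.manual_seed(0)
    x_target = torch.randn(500, 2)              # unbiased sample
    x_biased = torch.randn(500, 2) * 0.5 + 1.0  # biased sample
    model = fit_weights(x_biased, x_target)
    print(model(x_biased).detach()[:5])
```

The design choice worth noting is that the weights are parameterized by a network rather than obtained by solving KMM's quadratic program over all instances, so the model can in principle be trained on mini-batches instead of requiring an n-by-n kernel matrix over the whole dataset (the full-batch loss above is kept only for brevity).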