Paper Title

Learning Individual Models for Imputation (Technical Report)

Paper Authors

Aoqian Zhang, Shaoxu Song, Yu Sun, Jianmin Wang

Paper Abstract

Missing numerical values are prevalent, e.g., owing to unreliable sensor readings, or collection and transmission among heterogeneous sources. Unlike categorical data imputation over a limited domain, numerical values suffer from two issues: (1) the sparsity problem, where an incomplete tuple may not have sufficient complete neighbors sharing the same/similar values for imputation, owing to the (almost) infinite domain; (2) the heterogeneity problem, where different tuples may not fit the same (regression) model. In this study, enlightened by conditional dependencies that hold over certain tuples rather than the whole relation, we propose to learn a regression model individually for each complete tuple together with its neighbors. Our IIM, Imputation via Individual Models, thus no longer relies on sharing similar values among the k complete neighbors for imputation, but utilizes their regression results by the aforesaid learned individual (not necessarily the same) models. Remarkably, we show that some existing methods are indeed special cases of our IIM, under the extreme settings of the number l of learning neighbors considered in individual learning. In this sense, a proper number l of neighbors is essential for learning the individual models (avoiding over-fitting or under-fitting). We propose to adaptively learn individual models over various numbers l of neighbors for different complete tuples. By devising efficient incremental computation, the time complexity of learning a model reduces from linear to constant. Experiments on real data demonstrate that our IIM with adaptive learning achieves higher imputation accuracy than the existing approaches.
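The imputation idea sketched in the abstract can be illustrated in a few lines. The following is a minimal sketch, not the paper's exact algorithm: the choice of least-squares linear regression, the distance metric, the parameter defaults, and the plain average used to combine the k regression results are all assumptions made here for illustration.

```python
import numpy as np

def knn_indices(X, x, k):
    # Indices of the k nearest rows of X to x (Euclidean distance).
    d = np.linalg.norm(X - x, axis=1)
    return np.argsort(d)[:k]

def learn_individual_model(X_nb, y_nb):
    # Least-squares linear model y ≈ w[0] + x @ w[1:], learned from
    # one complete tuple's own neighbors (its "individual" model).
    A = np.hstack([np.ones((len(X_nb), 1)), X_nb])
    w, *_ = np.linalg.lstsq(A, y_nb, rcond=None)
    return w

def iim_impute(X_complete, y_complete, x_incomplete, k=3, l=5):
    # Impute the missing attribute of x_incomplete from its complete attributes.
    # For each of the k complete neighbors of the incomplete tuple, a regression
    # model is learned individually over that neighbor's own l nearest complete
    # neighbors; the k regression results are then averaged (a simple stand-in
    # for the paper's combination step).
    preds = []
    for i in knn_indices(X_complete, x_incomplete, k):
        nb = knn_indices(X_complete, X_complete[i], l)
        w = learn_individual_model(X_complete[nb], y_complete[nb])
        preds.append(w[0] + x_incomplete @ w[1:])
    return float(np.mean(preds))
```

For example, with complete tuples whose missing attribute follows y = 2x, `iim_impute` recovers the linear relationship for an incomplete tuple regardless of whether any neighbor shares a similar y value, which is the point of using the neighbors' regression models rather than their raw values.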
