Paper Title

Causality-aware counterfactual confounding adjustment for feature representations learned by deep models

Authors

Neto, Elias Chaibub

Abstract

Causal modeling has been recognized as a potential solution to many challenging problems in machine learning (ML). Here, we describe how a recently proposed counterfactual approach developed to deconfound linear structural causal models can still be used to deconfound the feature representations learned by deep neural network (DNN) models. The key insight is that by training an accurate DNN using softmax activation at the classification layer, and then adopting the representation learned by the last layer prior to the output layer as our features, we have that, by construction, the learned features will fit well a (multi-class) logistic regression model, and will be linearly associated with the labels. As a consequence, deconfounding approaches based on simple linear models can be used to deconfound the feature representations learned by DNNs. We validate the proposed methodology using colored versions of the MNIST dataset. Our results illustrate how the approach can effectively combat confounding and improve model stability in the context of dataset shifts generated by selection biases.
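The recipe summarized in the abstract can be illustrated with a short sketch: use the representation learned by the last layer before the softmax output as the feature matrix, apply a linear-model-based confounder adjustment to those features, and then refit a (multi-class) logistic regression on the adjusted features. The sketch below is a hypothetical illustration under stated assumptions, not the paper's code: the network `SmallCNN`, the helper `linear_confounder_adjustment`, and the choice of a plain OLS residualization for the adjustment step are illustrative, and the paper's actual counterfactual adjustment may differ in its details.

```python
import numpy as np
import torch
import torch.nn as nn


class SmallCNN(nn.Module):
    """Hypothetical classifier for 28x28 RGB images (e.g. a colored MNIST variant).
    Softmax is applied implicitly through nn.CrossEntropyLoss during training."""

    def __init__(self, n_classes: int = 10, feat_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, feat_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        return self.classifier(self.backbone(x))

    def features(self, x):
        # Representation of the last layer before the output (classification) layer.
        return self.backbone(x)


def linear_confounder_adjustment(feats: np.ndarray, confounder: np.ndarray) -> np.ndarray:
    """Regress each learned feature on the centered confounder codes with ordinary
    least squares and subtract the fitted confounder contribution. This is a simple
    linear adjustment used here for illustration only; it is not presented as the
    paper's exact counterfactual estimator."""
    c = confounder - confounder.mean(axis=0)          # (n, k) centered confounder codes
    x = feats - feats.mean(axis=0)                    # (n, d) centered features
    beta, *_ = np.linalg.lstsq(c, x, rcond=None)      # (k, d) OLS coefficients
    return feats - c @ beta                           # confounder-adjusted features
```

In a typical workflow, one would train the network with a cross-entropy (softmax) loss, call `model.features(images).detach().numpy()` to extract the learned representation, one-hot encode the observed confounder (for example, digit color) as the `confounder` matrix, adjust the features, and then fit a multinomial logistic regression (e.g. `sklearn.linear_model.LogisticRegression`) on the adjusted features to obtain a classifier that is less sensitive to the confounded dataset shift.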
