Paper Title

Invariant Structure Learning for Better Generalization and Causal Explainability

Authors

Yunhao Ge, Sercan Ö. Arik, Jinsung Yoon, Ao Xu, Laurent Itti, Tomas Pfister

Abstract

Learning the causal structure behind data is invaluable for improving generalization and obtaining high-quality explanations. We propose a novel framework, Invariant Structure Learning (ISL), that is designed to improve causal structure discovery by utilizing generalization as an indication. ISL splits the data into different environments, and learns a structure that is invariant to the target across different environments by imposing a consistency constraint. An aggregation mechanism then selects the optimal classifier based on a graph structure that reflects the causal mechanisms in the data more accurately compared to the structures learnt from individual environments. Furthermore, we extend ISL to a self-supervised learning setting where accurate causal structure discovery does not rely on any labels. This self-supervised ISL utilizes invariant causality proposals by iteratively setting different nodes as targets. On synthetic and real-world datasets, we demonstrate that ISL accurately discovers the causal structure, outperforms alternative methods, and yields superior generalization for datasets with significant distribution shifts.
