Paper Title
Loss as the Inconsistency of a Probabilistic Dependency Graph: Choose Your Model, Not Your Loss Function
Paper Authors
Paper Abstract
In a world blessed with a great diversity of loss functions, we argue that the choice between them is not a matter of taste or pragmatics, but of model. Probabilistic dependency graphs (PDGs) are probabilistic models that come equipped with a measure of "inconsistency". We prove that many standard loss functions arise as the inconsistency of a natural PDG describing the appropriate scenario, and use the same approach to justify a well-known connection between regularizers and priors. We also show that the PDG inconsistency captures a large class of statistical divergences, and detail benefits of thinking of them in this way, including an intuitive visual language for deriving inequalities between them. In variational inference, we find that the ELBO, a somewhat opaque objective for latent variable models, and variants of it arise for free out of uncontroversial modeling assumptions -- as do simple graphical proofs of their corresponding bounds. Finally, we observe that inconsistency becomes the log partition function (free energy) in the setting where PDGs are factor graphs.
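For reference, the ELBO and the bound mentioned above are the standard ones from variational inference; the identity below is textbook background, not text taken from the paper, and the symbols $p_\theta(x,z)$ (generative model) and $q_\phi(z\mid x)$ (approximate posterior) follow the usual conventions rather than the paper's notation:

\[
  \log p_\theta(x)
  \;=\;
  \underbrace{\mathbb{E}_{q_\phi(z\mid x)}\!\left[\log \frac{p_\theta(x,z)}{q_\phi(z\mid x)}\right]}_{\mathrm{ELBO}(\theta,\phi;x)}
  \;+\;
  D_{\mathrm{KL}}\!\bigl(q_\phi(z\mid x)\,\big\|\,p_\theta(z\mid x)\bigr)
  \;\ge\;
  \mathrm{ELBO}(\theta,\phi;x),
\]

where the inequality follows because the KL divergence is nonnegative. The paper's claim is that this objective, and its corresponding bound, fall out of the PDG inconsistency measure rather than having to be posited separately.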