Paper Title

Learning Generative Models for Active Inference using Tensor Networks

Paper Authors

Wauthier, Samuel T., Vanhecke, Bram, Verbelen, Tim, Dhoedt, Bart

Paper Abstract

Active inference provides a general framework for behavior and learning in autonomous agents. It states that an agent will attempt to minimize its variational free energy, defined in terms of beliefs over observations, internal states, and policies. Traditionally, every aspect of a discrete active inference model must be specified by hand, i.e. by manually defining the hidden state space structure as well as the required distributions, such as likelihood and transition probabilities. Recently, efforts have been made to learn state space representations automatically from observations using deep neural networks. In this paper, we present a novel approach to learning state spaces using quantum physics-inspired tensor networks. The ability of tensor networks to represent the probabilistic nature of quantum states, as well as to reduce large state spaces, makes tensor networks a natural candidate for active inference. We show how tensor networks can be used as a generative model for sequential data. Furthermore, we show how one can obtain beliefs from such a generative model and how an active inference agent can use these to compute the expected free energy. Finally, we demonstrate our method on the classic T-maze environment.
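
For context (not taken from the paper itself), the variational free energy and the expected free energy referred to in the abstract are conventionally defined in the active inference literature as follows; the notation below is the standard one and may differ from the authors'.

$$
F = \mathbb{E}_{Q(s)}\big[\ln Q(s) - \ln P(o, s)\big]
  = D_{\mathrm{KL}}\big[Q(s)\,\|\,P(s)\big] - \mathbb{E}_{Q(s)}\big[\ln P(o \mid s)\big]
$$

$$
G(\pi) = \sum_{\tau} \mathbb{E}_{Q(o_\tau, s_\tau \mid \pi)}\big[\ln Q(s_\tau \mid \pi) - \ln P(o_\tau, s_\tau \mid \pi)\big]
$$

Here $Q$ denotes the agent's approximate posterior beliefs, $P$ the generative model, $o$ observations, $s$ hidden states, and $\pi$ a policy; the agent selects actions so as to minimize $G(\pi)$.

Likewise, as a rough illustration of how a tensor network in matrix product state (MPS) form can serve as a generative model over discrete sequences (the idea mentioned in the abstract), here is a minimal sketch. It is not the authors' implementation; the function name, toy dimensions, and random tensors are assumptions for illustration only.

```python
import numpy as np

def mps_prob(tensors, sequence):
    """Return the (unnormalized) Born-rule score of one discrete sequence.

    tensors[i] has shape (D_left, d, D_right); sequence[i] is a symbol in range(d).
    """
    v = np.ones(1)                      # left boundary vector (first D_left is 1)
    for A, x in zip(tensors, sequence):
        v = v @ A[:, x, :]              # select the observed symbol, contract the bond index
    amp = v.item()                      # last D_right is 1, so a scalar amplitude remains
    return amp ** 2                     # Born rule: probability proportional to amplitude squared

# Hypothetical toy MPS: 3 time steps, binary observations, bond dimension 2.
rng = np.random.default_rng(0)
tensors = [rng.normal(size=(1, 2, 2)),
           rng.normal(size=(2, 2, 2)),
           rng.normal(size=(2, 2, 1))]

# Unnormalized score for one observation sequence; normalizing over all 2**3
# sequences (or keeping the MPS in canonical form) gives a proper distribution.
print(mps_prob(tensors, [0, 1, 1]))
```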
