Paper Title

Collective Learning by Ensembles of Altruistic Diversifying Neural Networks

Authors

Benjamin Brazowski, Elad Schneidman

Abstract

Combining the predictions of collections of neural networks often outperforms the best single network. Such ensembles are typically trained independently, and their superior `wisdom of the crowd' originates from the differences between networks. Collective foraging and decision making in socially interacting animal groups is often improved or even optimal thanks to local information sharing between conspecifics. We therefore present a model for co-learning by ensembles of interacting neural networks that aim to maximize their own performance but also their functional relations to other networks. We show that ensembles of interacting networks outperform independent ones, and that optimal ensemble performance is reached when the coupling between networks increases diversity and degrades the performance of individual networks. Thus, even without a global goal for the ensemble, optimal collective behavior emerges from local interactions between networks. We show the scaling of optimal coupling strength with ensemble size, and that networks in these ensembles specialize functionally and become more `confident' in their assessments. Moreover, optimal co-learning networks differ structurally, relying on sparser activity, a wider range of synaptic weights, and higher firing rates - compared to independently trained networks. Finally, we explore interactions-based co-learning as a framework for expanding and boosting ensembles.
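The abstract describes networks that each maximize their own performance while also being coupled to the other networks' outputs, with a coupling strength that trades individual accuracy for ensemble diversity. The paper's exact loss and coupling form are not given here, so the following is only a minimal toy sketch of the idea: an ensemble of logistic units trained on a shared task, where a hypothetical `coupling` term pushes each unit's predictions away from the ensemble mean (promoting diversity at some cost to each individual learner).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable binary classification data
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def co_train(n_nets=4, coupling=0.1, lr=0.5, steps=300):
    """Train an ensemble of logistic units where each unit's gradient
    mixes its own cross-entropy loss with a diversity-promoting coupling
    to the other units' predictions. The coupling form is an assumption
    of this sketch, not the paper's actual objective."""
    W = rng.normal(size=(n_nets, X.shape[1])) * 0.1
    for _ in range(steps):
        P = sigmoid(X @ W.T)                       # (samples, nets) predictions
        mean_p = P.mean(axis=1)                    # ensemble mean prediction
        for k in range(n_nets):
            # Gradient of the unit's own cross-entropy loss
            g_task = X.T @ (P[:, k] - y) / len(y)
            # Gradient that pushes unit k's output away from the ensemble
            # mean (we ignore the mean's dependence on unit k for simplicity)
            g_couple = X.T @ ((P[:, k] - mean_p) * P[:, k] * (1 - P[:, k])) / len(y)
            W[k] -= lr * (g_task - coupling * g_couple)
    return W

W = co_train()
# Ensemble decision: average the units' predicted probabilities
ensemble_pred = (sigmoid(X @ W.T).mean(axis=1) > 0.5).astype(float)
acc = (ensemble_pred == y).mean()
```

Sweeping the hypothetical `coupling` parameter in such a sketch is one way to probe the abstract's central claim: that an intermediate coupling strength, which increases diversity while degrading each unit individually, maximizes ensemble accuracy.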
