Paper Title

Interpretable, calibrated neural networks for analysis and understanding of inelastic neutron scattering data

Paper Authors

Butler, Keith T., Le, Manh Duc, Thiyagalingam, Jeyarajan, Perring, Toby G.

Paper Abstract

Deep neural networks provide flexible frameworks for learning data representations and functions relating data to other properties and are often claimed to achieve 'super-human' performance in inferring relationships between input data and desired property. In the context of inelastic neutron scattering experiments, however, as in many other scientific scenarios, a number of issues arise: (i) scarcity of labelled experimental data, (ii) lack of uncertainty quantification on results, and (iii) lack of interpretability of the deep neural networks. In this work we examine approaches to all three issues. We use simulated data to train a deep neural network to distinguish between two possible magnetic exchange models of a half-doped manganite. We apply the recently developed deterministic uncertainty quantification method to provide error estimates for the classification, demonstrating in the process how important realistic representations of instrument resolution in the training data are for reliable estimates on experimental data. Finally we use class activation maps to determine which regions of the spectra are most important for the final classification result reached by the network.
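The abstract's interpretability step uses class activation maps (CAMs): each feature map of the network's last convolutional layer is weighted by its contribution to one class logit and the weighted maps are summed, producing a spatial map of which input regions drove the classification. A minimal sketch of that computation is below; the shapes, names, and toy data are illustrative assumptions, not the paper's actual model or spectra.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Compute a class activation map (CAM).

    feature_maps:  (C, H, W) activations from the last conv layer
                   (here H x W would index the spectrum's axes).
    class_weights: (C,) final dense-layer weights for one class.
    Returns an (H, W) importance map normalised to [0, 1].
    """
    # Weighted sum over the channel axis: CAM = sum_c w_c * F_c
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0.0)   # keep only positively contributing regions
    if cam.max() > 0:
        cam /= cam.max()         # normalise for display as a heat map
    return cam

# Toy example with random "feature maps" standing in for a trained network.
rng = np.random.default_rng(0)
fmaps = rng.random((8, 16, 16))
weights = rng.random(8)
cam = class_activation_map(fmaps, weights)
print(cam.shape)  # (16, 16)
```

In the paper's setting, the resulting map would be overlaid on the inelastic neutron scattering spectrum to highlight which regions mattered most for choosing between the two magnetic exchange models.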
