Paper Title

BraggNN: Fast X-ray Bragg Peak Analysis Using Deep Learning

Paper Authors

Zhengchun Liu, Hemant Sharma, Jun-Sang Park, Peter Kenesei, Antonino Miceli, Jonathan Almer, Rajkumar Kettimuthu, Ian Foster

Paper Abstract

X-ray diffraction based microscopy techniques such as High Energy Diffraction Microscopy rely on knowledge of the position of diffraction peaks with high precision. These positions are typically computed by fitting the observed intensities in area detector data to a theoretical peak shape such as pseudo-Voigt. As experiments become more complex and detector technologies evolve, the computational cost of such peak detection and shape fitting becomes the biggest hurdle to the rapid analysis required for real-time feedback during in-situ experiments. To this end, we propose BraggNN, a deep learning-based method that can determine peak positions much more rapidly than conventional pseudo-Voigt peak fitting. When applied to a test dataset, BraggNN gives errors of less than 0.29 and 0.57 pixels, relative to the conventional method, for 75% and 95% of the peaks, respectively. When applied to a real experimental dataset, a 3D reconstruction that used peak positions computed by BraggNN yields 15% better results on average as compared to a reconstruction obtained using peak positions determined using conventional 2D pseudo-Voigt fitting. Recent advances in deep learning method implementations and special-purpose model inference accelerators allow BraggNN to deliver enormous performance improvements relative to the conventional method, running, for example, more than 200 times faster than a conventional method on a consumer-class GPU card with out-of-the-box software.
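
For readers unfamiliar with the conventional approach the abstract refers to, the sketch below fits a simplified, isotropic 2D pseudo-Voigt (a weighted sum of a Gaussian and a Lorentzian sharing one center, plus a constant background) to a small detector patch with scipy.optimize.curve_fit. The function names, the seven-parameter form, and the initial guesses are illustrative assumptions, not the exact fitting model used in HEDM pipelines.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt_2d(coords, amp, x0, y0, sigma, gamma, eta, bg):
    """Simplified isotropic 2D pseudo-Voigt: eta * Lorentzian + (1 - eta) * Gaussian,
    both sharing the center (x0, y0), plus a constant background bg."""
    x, y = coords
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    gauss = np.exp(-r2 / (2.0 * sigma ** 2))
    lorentz = gamma ** 2 / (r2 + gamma ** 2)
    return amp * (eta * lorentz + (1.0 - eta) * gauss) + bg

def fit_peak_center(patch):
    """Fit one detector patch (2D numpy array) and return the refined (row, col) center."""
    ny, nx = patch.shape
    rows, cols = np.mgrid[0:ny, 0:nx]
    coords = np.vstack((cols.ravel(), rows.ravel()))
    # Rough initial guesses: peak near the middle of the patch, modest widths.
    p0 = [patch.max() - patch.min(), nx / 2.0, ny / 2.0, 1.5, 1.5, 0.5, float(patch.min())]
    popt, _ = curve_fit(pseudo_voigt_2d, coords, patch.ravel().astype(float),
                        p0=p0, maxfev=5000)
    return popt[2], popt[1]  # (row, col) = (y0, x0)
```

By contrast, a deep-learning approach replaces the per-peak nonlinear optimization with a single forward pass through a trained network. The PyTorch sketch below is a minimal stand-in, not the BraggNN architecture from the paper: a small CNN that regresses a sub-pixel (row, col) center from a cropped peak patch. The 11x11 patch size, layer widths, and the idea of supervising with an MSE loss against pseudo-Voigt-derived centers are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class PeakCenterNet(nn.Module):
    """Toy CNN regressor: maps a 1-channel 11x11 peak patch to a (row, col) center.
    A simplified stand-in for illustration only, not the BraggNN model from the paper."""
    def __init__(self, psz=11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3), nn.ReLU(),   # 11x11 -> 9x9
            nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(),  # 9x9  -> 7x7
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (psz - 4) ** 2, 64), nn.ReLU(),
            nn.Linear(64, 2),  # predicted (row, col), e.g. normalized to [0, 1]
        )

    def forward(self, x):
        return self.head(self.features(x))

# Inference on a batch of patches is a single batched forward pass, which is what
# makes this kind of model fast on GPUs/accelerators compared to per-peak fitting.
model = PeakCenterNet()
patches = torch.rand(8, 1, 11, 11)  # batch of 8 hypothetical peak patches
centers = model(patches)            # shape (8, 2)
```

In this kind of setup, centers obtained from conventional pseudo-Voigt fits on a labeled subset would typically serve as training targets, so the network learns to reproduce the fit results at a fraction of the inference cost.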
