Paper Title
Recurrent neural network-based volumetric fluorescence microscopy
Paper Authors
Paper Abstract
Volumetric imaging of samples using fluorescence microscopy plays an important role in various fields including the physical, medical, and life sciences. Here we report a deep learning-based volumetric image inference framework that uses 2D images sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network, which we term Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to increase the depth-of-field of a 63×/1.4NA objective lens by approximately 50-fold, while also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. We further illustrate the generalization of this recurrent network for 3D imaging by showing its resilience to varying imaging conditions, including, e.g., different sequences of input images, covering various axial permutations and unknown axial positioning errors. Recurrent-MZ demonstrates the first application of recurrent neural networks in microscopic image reconstruction and provides a flexible and rapid volumetric imaging framework, overcoming the limitations of current 3D scanning microscopy tools.
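To make the abstract's core idea concrete, the sketch below illustrates one way a convolutional recurrent network can fuse a sparse sequence of 2D wide-field images, each tagged with its axial position, into a stack of reconstructed volume slices. This is a minimal illustrative sketch only: the ConvGRU cell, layer widths, the channel-wise encoding of the axial coordinate, and the class names (RecurrentVolumeNet, ConvGRUCell) are assumptions, not the authors' actual Recurrent-MZ architecture.

```python
# Minimal PyTorch sketch of recurrent fusion of sparse axial planes into a volume.
# Architecture details are illustrative assumptions, not the published Recurrent-MZ model.
import torch
import torch.nn as nn


class ConvGRUCell(nn.Module):
    """A simple convolutional GRU cell operating on 2D feature maps."""
    def __init__(self, in_ch, hidden_ch, k=3):
        super().__init__()
        p = k // 2
        self.gates = nn.Conv2d(in_ch + hidden_ch, 2 * hidden_ch, k, padding=p)
        self.cand = nn.Conv2d(in_ch + hidden_ch, hidden_ch, k, padding=p)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde


class RecurrentVolumeNet(nn.Module):
    """Fuses N input planes (image + axial-position channel) into out_planes volume slices."""
    def __init__(self, hidden_ch=32, out_planes=64):
        super().__init__()
        self.hidden_ch = hidden_ch
        self.encode = nn.Sequential(
            nn.Conv2d(2, hidden_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.rnn = ConvGRUCell(hidden_ch, hidden_ch)
        self.decode = nn.Conv2d(hidden_ch, out_planes, 3, padding=1)

    def forward(self, images, z_positions):
        # images: (B, N, H, W); z_positions: (B, N) normalized axial coordinates.
        B, N, H, W = images.shape
        h = images.new_zeros(B, self.hidden_ch, H, W)
        for i in range(N):  # the recurrent state accumulates information plane by plane
            z_map = z_positions[:, i].view(B, 1, 1, 1).expand(B, 1, H, W)
            x = self.encode(torch.cat([images[:, i:i + 1], z_map], dim=1))
            h = self.rnn(x, h)
        return self.decode(h)  # (B, out_planes, H, W): reconstructed volume slices


# Example: three sparsely sampled input planes in, a 64-slice volume out.
net = RecurrentVolumeNet()
imgs = torch.rand(1, 3, 256, 256)
zs = torch.tensor([[0.1, 0.5, 0.9]])
vol = net(imgs, zs)
print(vol.shape)  # torch.Size([1, 64, 256, 256])
```

Because the input planes enter through a recurrent state rather than a fixed-size input tensor, the same network can, in principle, accept different numbers and orderings of axial planes, which is consistent with the robustness to input-sequence permutations described in the abstract.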