Paper title
Deep machine learning-assisted multiphoton microscopy to reduce light exposure and expedite imaging
Paper authors
Paper abstract
Two-photon excitation fluorescence (2PEF) allows imaging of tissue up to about one millimeter in thickness. Reducing fluorescence excitation exposure typically degrades image quality, but deep learning super-resolution techniques can convert these low-resolution images into high-resolution images. This work explores improving human tissue imaging by applying deep learning to maximize image quality while reducing fluorescence excitation exposure. We analyze two methods: a U-Net-based method and a patch-based regression method. Both methods are evaluated on a skin dataset and an eye dataset. The eye dataset includes 1200 paired high-power and low-power images of retinal organoids. The skin dataset contains multiple frames of each human skin sample: high-resolution images were formed by averaging 70 frames per sample, while low-resolution images were formed by averaging only the first 7 or 15 frames per sample, yielding 550 images at each resolution level. We track two performance measures for both methods: mean squared error (MSE) and structural similarity index measure (SSIM). On the eye dataset, the patch-based method achieves an average MSE of 27,611 versus 146,855 for the U-Net method, and an average SSIM of 0.636 versus 0.607. On the skin dataset, the patch-based method achieves an average MSE of 3.768 versus 4.032 for the U-Net method, and an average SSIM of 0.824 versus 0.783. Despite its better image quality, the patch-based method is far slower at prediction, taking 303 seconds per image compared with under one second for the U-Net method.
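As a concrete illustration of the frame-averaging protocol described in the abstract, the sketch below builds low/high-exposure training pairs from one raw frame stack. The function name and array layout are assumptions for illustration; the abstract specifies only the averaging counts (70 frames for the target, the first 7 or 15 frames for the inputs).

```python
import numpy as np

def make_paired_images(frames, low_counts=(7, 15), high_count=70):
    """Build low/high-exposure training pairs by frame averaging.

    `frames` is a (num_frames, H, W) stack of raw 2PEF frames for one
    sample. The high-SNR target averages `high_count` frames; each
    low-SNR input averages only the first few frames, mimicking
    reduced light exposure.
    """
    high = frames[:high_count].mean(axis=0)
    lows = {n: frames[:n].mean(axis=0) for n in low_counts}
    return lows, high
```

Here `lows[7]` and `lows[15]` would serve as two exposure levels paired with the same high-SNR target.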
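The abstract does not detail the U-Net variant used, so the following is a minimal two-level U-Net-style encoder-decoder in PyTorch, sketched under the assumption of single-channel images; the depth, channel counts, and training setup are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Illustrative U-Net-style network for image-to-image restoration."""

    def __init__(self, ch=16):
        super().__init__()
        self.enc1 = self._block(1, ch)
        self.enc2 = self._block(ch, ch * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(ch * 2, ch, 2, stride=2)
        self.dec1 = self._block(ch * 2, ch)
        self.out = nn.Conv2d(ch, 1, 1)

    @staticmethod
    def _block(cin, cout):
        return nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        e1 = self.enc1(x)                     # full-resolution features
        e2 = self.enc2(self.pool(e1))         # downsampled features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.out(d1)
```

A single forward pass through such a network processes the whole image at once, consistent with the sub-second prediction time reported for the U-Net method.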
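Likewise, the patch-based regression method is only named, not specified. A minimal sketch of the general idea follows: extract overlapping patches from the low-exposure image, map each through a learned regressor, and reassemble by averaging the overlaps. Ridge regression and the 8x8 patch / stride-4 grid are placeholder assumptions; the loop over thousands of overlapping windows also suggests why this style of prediction can take minutes per image.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical patch geometry; the abstract does not specify it.
PATCH, STRIDE = 8, 4

def extract_patches(img):
    """Slide a PATCH x PATCH window over the image with step STRIDE."""
    H, W = img.shape
    coords = [(i, j)
              for i in range(0, H - PATCH + 1, STRIDE)
              for j in range(0, W - PATCH + 1, STRIDE)]
    patches = np.stack([img[i:i + PATCH, j:j + PATCH].ravel()
                        for i, j in coords])
    return patches, coords

def train_patch_regressor(low_imgs, high_imgs):
    """Fit one regression from low-exposure to high-exposure patches."""
    X = np.concatenate([extract_patches(im)[0] for im in low_imgs])
    y = np.concatenate([extract_patches(im)[0] for im in high_imgs])
    return Ridge(alpha=1.0).fit(X, y)

def predict_image(model, low_img):
    """Restore one image patch by patch, averaging overlapping outputs."""
    patches, coords = extract_patches(low_img)
    preds = model.predict(patches)
    out = np.zeros_like(low_img, dtype=float)
    weight = np.zeros_like(low_img, dtype=float)
    for p, (i, j) in zip(preds, coords):
        out[i:i + PATCH, j:j + PATCH] += p.reshape(PATCH, PATCH)
        weight[i:i + PATCH, j:j + PATCH] += 1.0
    return out / np.maximum(weight, 1.0)
```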
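The two reported metrics can be computed with scikit-image; the `data_range` below is assumed to be the target image's own intensity range, which the abstract does not specify.

```python
from skimage.metrics import mean_squared_error, structural_similarity

def evaluate(pred, target):
    """Return the two metrics tracked in the paper: MSE and SSIM."""
    mse = mean_squared_error(target, pred)
    ssim = structural_similarity(
        target, pred, data_range=float(target.max() - target.min()))
    return mse, ssim
```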