Paper Title
Deep Learning-Based Super-Resolution and De-Noising for XMM-Newton Images
Paper Authors
Paper Abstract
The field of artificial intelligence-based image enhancement has been evolving rapidly over the last few years and is able to produce impressive results on non-astronomical images. In this work we present the first application of machine learning-based super-resolution (SR) and de-noising (DN) to enhance X-ray images from the European Space Agency's XMM-Newton telescope. Using XMM-Newton images in the [0.5, 2] keV band from the European Photon Imaging Camera pn detector (EPIC-pn), we develop XMM-SuperRes and XMM-DeNoise, deep learning-based models that can generate enhanced SR and DN images from real observations. The models are trained on realistic XMM-Newton simulations such that XMM-SuperRes outputs images with a two times smaller point-spread function and improved noise characteristics. The XMM-DeNoise model is trained to produce images with 2.5 times the input exposure time, i.e., from 20 to 50 ks. When tested on real images, DN improves the image quality by 8.2%, as quantified by the global peak-signal-to-noise ratio. These enhanced images allow the identification of features that are otherwise hard or impossible to perceive in the original images or in images filtered/smoothed with traditional methods. We demonstrate the feasibility of using our deep learning models to enhance XMM-Newton X-ray images to increase their scientific value in a way that could benefit the legacy of the XMM-Newton archive.
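The abstract quantifies the de-noising improvement with the global peak-signal-to-noise ratio (PSNR). As a minimal sketch of that metric only (not the authors' actual evaluation pipeline), the following NumPy snippet computes PSNR between a reference image and a test image; the array names `img_50ks`, `img_20ks`, and `img_dn` are hypothetical placeholders.

```python
import numpy as np

def global_psnr(reference, test):
    """Global peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE),
    where MAX is the peak value of the reference image and MSE is the
    mean squared error over all pixels."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return np.inf  # identical images
    peak = reference.max()
    return 10.0 * np.log10(peak ** 2 / mse)

# Example usage (hypothetical arrays): compare a de-noised 20 ks image
# against a 50 ks reference and measure the PSNR gain over the raw input.
# gain = global_psnr(img_50ks, img_dn) - global_psnr(img_50ks, img_20ks)
```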