Paper Title
Image Reconstruction of Static and Dynamic Scenes through Anisoplanatic Turbulence
Paper Authors
Abstract
Ground-based long-range passive imaging systems often suffer from degraded image quality due to a turbulent atmosphere. While methods exist for removing such turbulent distortions, many are limited to static sequences and cannot be extended to dynamic scenes. In addition, the physics of the turbulence is often not integrated into the image reconstruction algorithms, weakening the physical foundation of these methods. In this paper, we present a unified method for atmospheric turbulence mitigation in both static and dynamic sequences. We achieve better results than existing methods by utilizing (i) a novel space-time non-local averaging method to construct a reliable reference frame, (ii) geometric consistency and sharpness metrics to generate the lucky frame, and (iii) a physics-constrained prior model of the point spread function for blind deconvolution. Experimental results on synthetic and real long-range turbulence sequences validate the performance of the proposed method.
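To make the lucky-frame idea concrete, here is a minimal sketch of selecting the sharpest frame from an image stack. This is an illustrative stand-in, not the paper's algorithm: the paper combines a geometric consistency term with a sharpness metric, while the sketch below uses only a simple gradient-energy sharpness score, and the function names are our own.

```python
import numpy as np

def sharpness(frame):
    """Gradient-energy sharpness score: sum of squared spatial gradients.
    Turbulence-induced blur suppresses high frequencies, lowering this score."""
    gy, gx = np.gradient(frame.astype(np.float64))
    return float(np.sum(gx**2 + gy**2))

def lucky_frame(frames):
    """Return the index of the sharpest frame in a (T, H, W) stack."""
    scores = [sharpness(f) for f in frames]
    return int(np.argmax(scores))

# Toy example: a smoothed copy of a random image scores lower than the original.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = (sharp + np.roll(sharp, 1, axis=0) + np.roll(sharp, 1, axis=1)) / 3.0
stack = np.stack([blurred, sharp])
print(lucky_frame(stack))  # picks the unsmoothed frame
```

In practice, a sharpness score alone is sensitive to geometric (tilt) distortion, which is why the paper pairs it with a geometric consistency check against the reference frame.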