Paper Title

Augmenting Colonoscopy using Extended and Directional CycleGAN for Lossy Image Translation

Paper Authors

Shawn Mathew, Saad Nadeem, Sruti Kumari, Arie Kaufman

Paper Abstract

Colorectal cancer screening modalities, such as optical colonoscopy (OC) and virtual colonoscopy (VC), are critical for diagnosing and ultimately removing polyps (precursors of colon cancer). The non-invasive VC is normally used to inspect a 3D reconstructed colon (from CT scans) for polyps and, if found, the OC procedure is performed to physically traverse the colon via endoscope and remove these polyps. In this paper, we present a deep learning framework, Extended and Directional CycleGAN, for lossy unpaired image-to-image translation between OC and VC to augment OC video sequences with scale-consistent depth information from VC, and augment VC with patient-specific textures, color and specular highlights from OC (e.g., for realistic polyp synthesis). Both OC and VC contain structural information, but it is obscured in OC by additional patient-specific texture and specular highlights, hence making the translation from OC to VC lossy. The existing CycleGAN approaches do not handle lossy transformations. To address this shortcoming, we introduce an extended cycle consistency loss, which compares the geometric structures from OC in the VC domain. This loss removes the need for the CycleGAN to embed OC information in the VC domain. To handle a stronger removal of the textures and lighting, a Directional Discriminator is introduced to differentiate the direction of translation (by creating paired information for the discriminator), as opposed to the standard CycleGAN, which is direction-agnostic. Combining the extended cycle consistency loss and the Directional Discriminator, we show state-of-the-art results on scale-consistent depth inference for phantom, textured VC and for real polyp and normal colon video sequences. We also present results for realistic pedunculated and flat polyp synthesis from bumps introduced in 3D VC models. Code/models: https://github.com/nadeemlab/CEP.
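
The abstract names two mechanisms: an extended cycle consistency loss that checks the cycle in the VC domain (because the OC-to-VC direction is lossy), and a Directional Discriminator that receives paired inputs so it can judge the direction of translation. The sketch below is a minimal PyTorch-style illustration of these two ideas as read from the abstract; the module names (G_oc2vc, G_vc2oc, D_dir) and the binary adversarial target are assumptions for illustration, not the authors' released implementation (see the linked repository for that).

```python
# Minimal sketch (assumptions, not the authors' code) of the two ideas named
# in the abstract: extended cycle consistency and a directional discriminator.
# G_oc2vc, G_vc2oc, and D_dir are hypothetical nn.Modules.
import torch
import torch.nn.functional as F

def extended_cycle_loss(G_oc2vc, G_vc2oc, oc_image):
    """OC -> VC is lossy (texture/lighting are discarded), so the cycle is
    compared in the VC domain rather than reconstructing the OC image."""
    vc_fake = G_oc2vc(oc_image)         # OC -> VC: keep geometry, drop texture
    oc_back = G_vc2oc(vc_fake)          # VC -> OC: re-synthesize appearance
    vc_back = G_oc2vc(oc_back)          # OC -> VC again: geometry should match
    return F.l1_loss(vc_back, vc_fake)  # structures compared in the VC domain

def directional_pair(src_image, translated_image):
    """The Directional Discriminator sees (source, translation) pairs stacked
    along channels, so it can judge the *direction* of translation, unlike a
    standard CycleGAN discriminator that scores single unpaired images."""
    return torch.cat([src_image, translated_image], dim=1)

def directional_adv_loss(D_dir, oc_image, vc_fake):
    """Example adversarial term for the OC -> VC generator against D_dir;
    the real/fake BCE target here is an assumption, not the paper's choice."""
    pred = D_dir(directional_pair(oc_image, vc_fake))
    return F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))
```

On this reading, the cycle never has to reproduce the OC appearance exactly, which is why the generator no longer needs to hide OC texture information inside the VC image.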
