Title
YOLO2U-Net: Detection-Guided 3D Instance Segmentation for Microscopy
Authors
Abstract
Microscopy imaging techniques are instrumental for the characterization and analysis of biological structures. As these techniques typically render 3D visualizations of cells by stacking 2D projections, issues such as out-of-plane excitation and low resolution along the $z$-axis may pose challenges (even for human experts) in detecting individual cells in 3D volumes, since non-overlapping cells may appear to overlap. In this work, we introduce a comprehensive method for accurate 3D instance segmentation of cells in brain tissue. The proposed method combines the 2D YOLO detection method with a multi-view fusion algorithm to construct 3D localizations of the cells. Next, the 3D bounding boxes, along with the data volume, are input to a 3D U-Net network that is designed to segment the primary cell in each 3D bounding box and, in turn, to carry out instance segmentation of cells in the entire volume. The promising performance of the proposed method is demonstrated in comparison with current deep-learning-based 3D instance segmentation methods.
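The multi-view fusion step described above can be illustrated with a minimal sketch. This is not the paper's exact algorithm: it assumes each cell has been detected as an axis-aligned 2D box in three orthogonal maximum-intensity-projection views (XY, XZ, YZ), and fuses them into one 3D box by intersecting the extents that each pair of views constrains; the function name and box conventions are hypothetical.

```python
# Hypothetical sketch of multi-view fusion: three orthogonal 2D boxes
# constrain each spatial axis twice, and the 3D box is obtained by
# intersecting those per-axis ranges. Illustrative only, not the
# paper's actual fusion algorithm.

def fuse_views(xy_box, xz_box, yz_box):
    """Each 2D box is (min_a, min_b, max_a, max_b) in its view's axes:
    XY -> (x0, y0, x1, y1), XZ -> (x0, z0, x1, z1), YZ -> (y0, z0, y1, z1).
    Returns a 3D box (x0, y0, z0, x1, y1, z1)."""
    # x-range is constrained by the XY and XZ views.
    x0 = max(xy_box[0], xz_box[0])
    x1 = min(xy_box[2], xz_box[2])
    # y-range is constrained by the XY and YZ views.
    y0 = max(xy_box[1], yz_box[0])
    y1 = min(xy_box[3], yz_box[2])
    # z-range is constrained by the XZ and YZ views.
    z0 = max(xz_box[1], yz_box[1])
    z1 = min(xz_box[3], yz_box[3])
    return (x0, y0, z0, x1, y1, z1)

box3d = fuse_views((10, 12, 30, 28), (11, 5, 29, 9), (12, 5, 27, 9))
print(box3d)  # (11, 12, 5, 29, 27, 9)
```

In practice a fusion algorithm must also match detections across views (e.g. by projection overlap) before intersecting them; the sketch assumes that association has already been made for a single cell.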