Paper Title
Extracting quantitative biological information from brightfield cell images using deep learning
Paper Authors
Paper Abstract
Quantitative analysis of cell structures is essential for biomedical and pharmaceutical research. The standard imaging approach relies on fluorescence microscopy, where cell structures of interest are labeled by chemical staining techniques. However, these techniques are often invasive and sometimes even toxic to the cells, in addition to being time-consuming, labor-intensive, and expensive. Here, we introduce an alternative deep-learning-powered approach based on the analysis of brightfield images by a conditional generative adversarial neural network (cGAN). We show that this approach can extract information from the brightfield images to generate virtually-stained images, which can be used in subsequent downstream quantitative analyses of cell structures. Specifically, we train a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using brightfield images of human stem-cell-derived fat cells (adipocytes), which are of particular interest for nanomedicine and vaccine development. Subsequently, we use these virtually-stained images to extract quantitative measures about these cell structures. Generating virtually-stained fluorescence images is less invasive, less expensive, and more reproducible than standard chemical staining; furthermore, it frees up the fluorescence microscopy channels for other analytical probes, thus increasing the amount of information that can be extracted from each cell.
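To make the image-to-image translation described in the abstract concrete, below is a minimal sketch of a pix2pix-style cGAN training step in PyTorch. It is only an illustration of the general technique: the toy Generator/Discriminator modules, the single-channel brightfield input, the three output channels (lipid droplets, cytoplasm, nuclei), the loss weights, and the learning rates are assumptions, not the authors' actual architecture or hyperparameters.

```python
# Sketch of a pix2pix-style conditional GAN for brightfield -> virtual stain.
# Assumed setup for illustration only; not the paper's actual implementation.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder standing in for a full U-Net-style generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        # 3 output channels: lipid droplets, cytoplasm, nuclei (assumed layout)
        return self.net(x)

class Discriminator(nn.Module):
    """Patch-style discriminator scoring (brightfield, stained) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, brightfield, stained):
        return self.net(torch.cat([brightfield, stained], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def training_step(brightfield, stained_target):
    """One adversarial update on a paired (brightfield, chemically stained) batch."""
    fake = G(brightfield)

    # Discriminator update: real pairs scored toward 1, generated pairs toward 0.
    opt_d.zero_grad()
    d_real = D(brightfield, stained_target)
    d_fake = D(brightfield, fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()

    # Generator update: fool the discriminator while staying close (L1) to the target stain.
    opt_g.zero_grad()
    d_fake = D(brightfield, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, stained_target)
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

In this kind of setup, the L1 term ties each generated image pixel-wise to the chemically stained target, while the adversarial term pushes the output toward realistic stain texture; the resulting virtually-stained channels can then feed downstream quantitative analysis in place of the chemical stain.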