Paper Title
RESA: Recurrent Feature-Shift Aggregator for Lane Detection
Paper Authors
Paper Abstract
Lane detection is one of the most important tasks in self-driving. Due to various complex scenarios (e.g., severe occlusion, ambiguous lanes, etc.) and the sparse supervisory signals inherent in lane annotations, the lane detection task remains challenging. Thus, it is difficult for an ordinary convolutional neural network (CNN) trained on general scenes to capture subtle lane features from raw images. In this paper, we present a novel module named REcurrent Feature-Shift Aggregator (RESA) to enrich lane features after preliminary feature extraction with an ordinary CNN. RESA takes advantage of the strong shape priors of lanes and captures spatial relationships between pixels across rows and columns. It shifts sliced feature maps recurrently in the vertical and horizontal directions, enabling each pixel to gather global information. By aggregating the sliced feature maps, RESA can infer lanes accurately even in challenging scenarios with weak appearance clues. Moreover, we propose a Bilateral Up-Sampling Decoder that combines coarse-grained and fine-detailed features in the up-sampling stage, recovering low-resolution feature maps into precise pixel-wise predictions. Our method achieves state-of-the-art results on two popular lane detection benchmarks (CULane and TuSimple). Code is available at: https://github.com/ZJULearning/resa.
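The recurrent feature-shift idea described above can be sketched as follows. This is a minimal, dependency-light illustration (not the authors' implementation): it shifts the whole feature map by exponentially decreasing strides in the four directions and aggregates the shifted features through a ReLU, so that after roughly log2(H) iterations every pixel has gathered information from the entire map. The function name `resa_aggregate`, the circular shifting, and the omission of the learned 1-D convolutions that RESA applies to each shifted slice are all simplifications made here for clarity.

```python
import numpy as np

def resa_aggregate(x, iters=4):
    """Simplified sketch of RESA-style recurrent feature shifting.

    x: feature map of shape (C, H, W). At iteration k the map is
    shifted by H // 2**(k+1) rows (and W // 2**(k+1) columns) in each
    of the four directions, and the shifted features are added back
    through a ReLU. The real module applies a learned 1-D convolution
    to each shifted slice before the addition; that is omitted here to
    keep the sketch self-contained.
    """
    _, H, W = x.shape
    out = x.astype(np.float64).copy()
    for k in range(iters):
        sv = H // 2 ** (k + 1)  # vertical shift stride
        sh = W // 2 ** (k + 1)  # horizontal shift stride
        for shift, axis in ((sv, 1), (-sv, 1), (sh, 2), (-sh, 2)):
            if shift == 0:
                continue
            # circular shift so boundary rows/columns also receive
            # information from the opposite side of the map
            out = out + np.maximum(np.roll(out, shift, axis=axis), 0.0)
    return out
```

Because the strides halve each iteration (H/2, H/4, ...), a single activation can reach any row or column after the final stride-1 pass, which is how each pixel ends up with a global receptive field along both axes.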