Paper Title
Pyramid Transformer for Traffic Sign Detection
Authors
Abstract
Traffic sign detection is a vital task in the visual system of self-driving cars and automated driving systems. Recently, novel Transformer-based models have achieved encouraging results for various computer vision tasks. However, we observed that the vanilla ViT could not yield satisfactory results in traffic sign detection, because the overall size of the datasets is very small and the class distribution of traffic signs is extremely unbalanced. To overcome this problem, a novel Pyramid Transformer with locality mechanisms is proposed in this paper. Specifically, the Pyramid Transformer has several spatial pyramid reduction layers that shrink and embed the input image into tokens with rich multi-scale context by using atrous convolutions. Moreover, it inherits an intrinsic scale-invariance inductive bias and is able to learn local feature representations for objects at various scales, thereby enhancing the network's robustness against the size discrepancy of traffic signs. Experiments are conducted on the German Traffic Sign Detection Benchmark (GTSDB). The results demonstrate the superiority of the proposed model in the traffic sign detection task. More specifically, the Pyramid Transformer achieves 77.8% mAP on GTSDB when used as the backbone of Cascade R-CNN, surpassing most well-known and widely used state-of-the-art models.
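To make the abstract's description of the spatial pyramid reduction layer concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: it assumes a set of parallel atrous (dilated) convolution branches whose outputs are fused and flattened into a multi-scale token map. The class name, channel sizes, stride, and dilation rates are illustrative assumptions.

```python
# Hypothetical sketch of a spatial pyramid reduction patch-embedding layer.
# Parallel atrous (dilated) convolutions shrink the input image and their
# multi-scale responses are fused into a single sequence of tokens.
import torch
import torch.nn as nn


class SpatialPyramidReduction(nn.Module):
    """Embed an image into tokens carrying multi-scale context (illustrative)."""

    def __init__(self, in_ch=3, embed_dim=96, stride=4, dilations=(1, 2, 3)):
        super().__init__()
        # One dilated 3x3 conv branch per rate; each downsamples by `stride`.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, embed_dim, kernel_size=3, stride=stride,
                      padding=d, dilation=d)  # padding=d keeps sizes aligned
            for d in dilations
        )
        # 1x1 conv fuses the concatenated multi-scale features back to embed_dim.
        self.fuse = nn.Conv2d(embed_dim * len(dilations), embed_dim, kernel_size=1)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):
        # x: (B, 3, H, W) -> tokens: (B, N, embed_dim), N = (H/stride) * (W/stride)
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        feats = self.fuse(feats)
        B, C, H, W = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H*W, C)
        return self.norm(tokens), (H, W)


if __name__ == "__main__":
    layer = SpatialPyramidReduction()
    tokens, (h, w) = layer(torch.randn(1, 3, 224, 224))
    print(tokens.shape, h, w)  # torch.Size([1, 3136, 96]) 56 56
```

In this sketch, the varying dilation rates give each token a receptive field spanning several scales before any attention is applied, which is one plausible way to realize the multi-scale context and locality bias the abstract attributes to the model.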