Title


Semantic Self-adaptation: Enhancing Generalization with a Single Sample

Authors

Bahmani, Sherwin, Hahn, Oliver, Zamfir, Eduard, Araslanov, Nikita, Cremers, Daniel, Roth, Stefan

Abstract

The lack of out-of-domain generalization is a critical weakness of deep networks for semantic segmentation. Previous studies relied on the assumption of a static model, i.e., once the training process is complete, model parameters remain fixed at test time. In this work, we challenge this premise with a self-adaptive approach for semantic segmentation that adjusts the inference process to each input sample. Self-adaptation operates on two levels. First, it fine-tunes the parameters of convolutional layers to the input image using consistency regularization. Second, in Batch Normalization layers, self-adaptation interpolates between the training and the reference distribution derived from a single test sample. Despite both techniques being well known in the literature, their combination sets new state-of-the-art accuracy on synthetic-to-real generalization benchmarks. Our empirical study suggests that self-adaptation may complement the established practice of model regularization at training time for improving deep network generalization to out-of-domain data. Our code and pre-trained models are available at https://github.com/visinf/self-adaptive.
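The Batch Normalization step described in the abstract can be illustrated with a minimal sketch: blend the statistics stored during training with statistics estimated from a single test sample, then normalize with the mixed statistics. The function names and the blending weight `alpha` here are illustrative assumptions for exposition, not the authors' actual implementation (see the linked repository for that).

```python
# Sketch: interpolate BN statistics between training-time values and
# values computed from one test sample, then normalize with the mix.

def interpolate_bn_stats(train_mean, train_var, sample_mean, sample_var, alpha=0.5):
    """Convex combination of per-channel training and test-sample statistics.

    alpha = 1.0 recovers standard inference (training statistics only);
    alpha = 0.0 uses only the single test sample's statistics.
    """
    mixed_mean = [alpha * m_tr + (1 - alpha) * m_s
                  for m_tr, m_s in zip(train_mean, sample_mean)]
    mixed_var = [alpha * v_tr + (1 - alpha) * v_s
                 for v_tr, v_s in zip(train_var, sample_var)]
    return mixed_mean, mixed_var


def batch_norm(x, mean, var, eps=1e-5):
    """Normalize per-channel activations with the given statistics."""
    return [(xi - m) / (v + eps) ** 0.5 for xi, m, v in zip(x, mean, var)]
```

A toy usage: with one channel whose training statistics are mean 0, variance 1, and a test sample yielding mean 2, variance 3, `alpha=0.5` gives mixed mean 1 and variance 2, which then normalize the activation.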
