Paper Title
Self-Organized Operational Neural Networks for Severe Image Restoration Problems
Paper Authors
Paper Abstract
Discriminative learning based on convolutional neural networks (CNNs) aims to perform image restoration by learning from training examples of noisy-clean image pairs. It has become the go-to methodology for tackling image restoration and has outperformed the traditional non-local class of methods. However, the top-performing networks are generally composed of many convolutional layers and hundreds of neurons, with trainable parameters in excess of several million. We claim that this is due to the inherent linear nature of convolution-based transformation, which is inadequate for handling severe restoration problems. Recently, a non-linear generalization of CNNs, called operational neural networks (ONNs), has been shown to outperform CNNs on AWGN denoising. However, its formulation is burdened by a fixed collection of well-known nonlinear operators and an exhaustive search to find the best possible configuration for a given architecture, whose efficacy is further limited by a fixed output layer operator assignment. In this study, we leverage Taylor series-based function approximation to propose a self-organizing variant of ONNs, Self-ONNs, for image restoration, which synthesizes novel nodal transformations on-the-fly as part of the learning process, thus eliminating the need for redundant training runs for operator search. In addition, it enables a finer level of operator heterogeneity by diversifying individual connections of the receptive fields and weights. We perform a series of extensive ablation experiments across three severe image restoration tasks. Even when a strict equivalence of learnable parameters is imposed, Self-ONNs surpass CNNs by a considerable margin across all problems, improving the generalization performance by up to 3 dB in terms of PSNR.
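The Taylor series-based nodal transformation described in the abstract can be illustrated with a minimal sketch. Assuming a truncated Maclaurin expansion with learnable per-connection coefficients `w` (the function name and the Q-term truncation shown here are illustrative, not the authors' implementation), each connection computes a learned polynomial of its input rather than a fixed linear product:

```python
import numpy as np

def generative_nodal_transform(x, w):
    """Illustrative Self-ONN-style nodal operator (assumption: bias is
    handled elsewhere): psi(x) = sum_q w[q] * x**(q+1), a Q-term
    truncated Maclaurin series whose coefficients w are learned."""
    return sum(w_q * x ** (q + 1) for q, w_q in enumerate(w))

x = np.array([0.5, -0.2])
# With a single coefficient, the operator reduces to the linear
# (convolution-style) transformation of an ordinary CNN connection.
linear = generative_nodal_transform(x, [1.0])
# Additional coefficients let each connection learn its own non-linearity.
nonlin = generative_nodal_transform(x, [1.0, 0.5, 0.1])
```

Because the polynomial coefficients are trained by backpropagation like any other weights, the "operator search" over a fixed library of nonlinear functions is replaced by ordinary gradient descent, which is the source of the efficiency claim in the abstract.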