Paper Title

Adma: A Flexible Loss Function for Neural Networks

Paper Authors

Shrivastava, Aditya

Paper Abstract

Highly increased interest in Artificial Neural Networks (ANNs) has resulted in impressively wide-ranging improvements to their structure. In this work, we put forward the idea that, instead of being the static plugins that currently available loss functions are, loss functions should be flexible by default. A flexible loss function can be a more insightful navigator for neural networks, leading to higher convergence rates and therefore reaching the optimum accuracy more quickly. The insights needed to decide the degree of flexibility can be derived from the complexity of the ANN, the data distribution, the choice of hyper-parameters, and so on. Following this, we introduce a novel flexible loss function for neural networks. The function is shown to exhibit a range of fundamentally unique properties, of which the properties of other loss functions are only a subset, and varying the flexibility parameter allows it to emulate the loss curves and the learning behavior of prevalent static loss functions. Extensive experimentation with the loss function demonstrates that it achieves state-of-the-art performance on the selected data sets. Thus, both the idea of flexibility itself and the proposed function built upon it carry the potential to open a new and interesting chapter in deep learning research.
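The Adma formula itself is not reproduced on this page, so the Python sketch below only illustrates the general idea of a loss with a flexibility parameter; it is not the paper's method. It uses the well-known pseudo-Huber loss, whose parameter delta interpolates between MSE-like behavior (quadratic for residuals much smaller than delta) and MAE-like behavior (roughly linear for residuals much larger than delta). The function name and default value are our own assumptions.

import numpy as np

def flexible_loss(y_true, y_pred, delta=1.0):
    """Pseudo-Huber loss as a stand-in for a flexible loss function.

    `delta` acts as the flexibility parameter: the curve is quadratic
    (MSE-like) for residuals much smaller than `delta` and close to
    linear (MAE-like) for residuals much larger than `delta`.
    NOTE: this is NOT the Adma function from the paper, only a sketch
    of how a single parameter can reshape a loss curve.
    """
    residual = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    return np.mean(delta**2 * (np.sqrt(1.0 + (residual / delta)**2) - 1.0))

# Varying the flexibility parameter changes the learning signal for the
# same errors: a large delta penalizes the outlier quadratically, while
# a small delta penalizes it roughly linearly.
y_true = np.array([0.0, 0.0, 0.0])
y_pred = np.array([0.1, 0.5, 3.0])
for delta in (0.1, 1.0, 10.0):
    print(delta, flexible_loss(y_true, y_pred, delta))

In the same spirit as the abstract, sweeping such a parameter over a range of values lets one loss function reproduce the loss curves of several static ones, which is the flexibility property the paper argues for.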
