Paper Title
Learnability and Robustness of Shallow Neural Networks Learned With a Performance-Driven BP and a Variant PSO For Edge Decision-Making
Paper Authors
Paper Abstract
In many cases, computing resources are limited and no GPU is available, especially on the edge devices of IoT-enabled systems, so it may not be easy to deploy complex AI models on such devices. The Universal Approximation Theorem states that a shallow neural network (SNN) can represent any nonlinear function. However, how wide must an SNN be to solve a nonlinear decision-making problem on edge devices? In this paper, we focus on the learnability and robustness of SNNs obtained by a greedy tight-force heuristic algorithm (performance-driven BP) and a loose-force meta-heuristic algorithm (a variant of PSO). Two groups of experiments are conducted to examine the learnability and robustness of SNNs with Sigmoid activation, learned/optimised by KPI-PDBPs and KPI-VPSOs, where the KPIs (key performance indicators: error (ERR), accuracy (ACC), and $F_1$ score) are the objectives driving the search process. An incremental approach is applied to examine the impact of the number of hidden neurons on the performance of SNNs learned/optimised by KPI-PDBPs and KPI-VPSOs. From an engineering perspective, all sensors are well justified for a specific task; hence, all sensor readings should be strongly correlated with the target, and the structure of an SNN should therefore depend on the dimension of the problem space. The experimental results show that a number of hidden neurons up to the dimension of the problem space is sufficient; the learnability of SNNs produced by KPI-PDBPs is better than that of SNNs optimised by KPI-VPSOs with respect to both performance and learning time on the training data sets; the robustness of SNNs learned by KPI-PDBPs and KPI-VPSOs depends on the data sets; and, compared with other classic machine learning models, ACC-PDBPs win on almost all tested data sets.
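The setup described above can be illustrated with a minimal sketch: a one-hidden-layer sigmoid network trained with plain batch backpropagation, with training accuracy (the ACC KPI) as the reported objective, and the number of hidden neurons grown incrementally up to the input dimension. This is an illustrative assumption, not the authors' KPI-PDBP or VPSO implementation; the function `train_snn` and the toy AND task are hypothetical examples.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_snn(X, y, n_hidden, epochs=2000, lr=1.0, seed=0):
    """Train a one-hidden-layer sigmoid SNN with plain batch BP (MSE loss).

    Returns the training accuracy, standing in for the ACC KPI.
    """
    rng = np.random.default_rng(seed)
    n, n_in = X.shape
    W1 = rng.normal(0.0, 1.0, (n_in, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 1.0, (n_hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)                 # hidden activations, (n, n_hidden)
        out = sigmoid(H @ W2 + b2).ravel()       # network output, (n,)
        # backpropagate MSE error through the sigmoid output and hidden layer
        d_out = ((out - y) * out * (1.0 - out))[:, None]   # (n, 1)
        d_hid = (d_out @ W2.T) * H * (1.0 - H)             # (n, n_hidden)
        W2 -= lr * (H.T @ d_out) / n
        b2 -= lr * d_out.sum(axis=0) / n
        W1 -= lr * (X.T @ d_hid) / n
        b1 -= lr * d_hid.sum(axis=0) / n
    preds = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel() > 0.5
    return float((preds == y).mean())

# Toy 2-dimensional task (logical AND); incremental search grows the
# hidden layer from 1 neuron up to the dimension of the problem space.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 0., 0., 1.])
accs = [train_snn(X, y, n_hidden=h) for h in range(1, X.shape[1] + 1)]
print(accs)
```

The loop mirrors the incremental approach in the abstract: stop growing the hidden layer once the accuracy KPI no longer improves, which in the paper's experiments happens by the time the hidden-neuron count reaches the input dimension.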