Paper Title


A Survey on Impact of Transient Faults on BNN Inference Accelerators

Authors

Navid Khoshavi, Connor Broyles, Yu Bi

Abstract


Over the past years, the philosophy for designing artificial intelligence algorithms has shifted significantly towards automatically extracting composable systems from massive data volumes. This paradigm shift has been expedited by the big data boom, which enables us to easily access and analyze very large data sets. The most well-known class of big data analysis techniques is called deep learning. These models require significant computation power and extremely high numbers of memory accesses, which necessitates the design of novel approaches to reduce memory accesses and improve power efficiency, while taking into account the development of domain-specific hardware accelerators to support current and future data sizes and model structures. The current trends for designing application-specific integrated circuits barely consider the essential requirement that complex neural network computation remain resilient in the presence of soft errors. Soft errors might strike either memory storage or combinational logic in the hardware accelerator and can affect the architectural behavior such that the precision of the results falls below the minimum allowable correctness. In this study, we demonstrate that the impact of soft errors on a customized deep learning algorithm called a Binarized Neural Network might cause drastic image misclassification. Our experimental results show that the accuracy of the image classifier can drop drastically, by 76.70% and 19.25% in the lfcW1A1 and cnvW1A1 networks respectively, across the CIFAR-10 and MNIST datasets during fault injection for the worst-case scenarios.
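To make the kind of experiment described in the abstract concrete, the sketch below illustrates weight-memory bit-flip fault injection in a binarized layer: flipping a stored 1-bit weight inverts its sign, and the effect on predictions grows with the number of flipped bits. This is a minimal illustration on random data, not the paper's fault-injection framework or the actual lfcW1A1/cnvW1A1 FINN models; the helper names (`binarize`, `bnn_matmul`, `inject_bit_flips`) and the layer sizes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(x):
    # Sign binarization used in BNNs: map real values to {-1, +1}.
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bnn_matmul(x, W):
    # {-1, +1} matrix product accumulated in int32, i.e. the result a
    # hardware XNOR-popcount dot product would produce.
    return x.astype(np.int32) @ W.astype(np.int32)

def inject_bit_flips(W, n_flips, rng):
    # Model a soft error in weight memory: a flipped storage bit
    # inverts the sign of the corresponding binarized weight.
    W_faulty = W.copy()
    idx = rng.choice(W_faulty.size, size=n_flips, replace=False)
    flat = W_faulty.reshape(-1)
    flat[idx] = -flat[idx]
    return W_faulty

# Toy stand-in for one binarized output layer (random, untrained weights).
W = binarize(rng.standard_normal((256, 10)))   # 256 inputs, 10 classes
x = binarize(rng.standard_normal((1000, 256))) # 1000 binarized activations

clean_pred = bnn_matmul(x, W).argmax(axis=1)
for n_flips in (1, 16, 256):
    faulty_pred = bnn_matmul(x, inject_bit_flips(W, n_flips, rng)).argmax(axis=1)
    changed = np.mean(faulty_pred != clean_pred)
    print(f"{n_flips:4d} flipped weight bits -> {changed:.1%} changed predictions")
```

In a real accelerator study, the same idea would be applied to trained networks and extended to faults in combinational logic and activation buffers; the sketch only shows the weight-memory case.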
