Paper Title
TinySpeech: Attention Condensers for Deep Speech Recognition Neural Networks on Edge Devices
Paper Authors
Paper Abstract
Advances in deep learning have led to state-of-the-art performance across a multitude of speech recognition tasks. Nevertheless, the widespread deployment of deep neural networks for on-device speech recognition remains a challenge, particularly in edge scenarios where the memory and computing resources are highly constrained (e.g., low-power embedded devices) or where the memory and computing budget dedicated to speech recognition is low (e.g., mobile devices performing numerous tasks besides speech recognition). In this study, we introduce the concept of attention condensers for building low-footprint, highly-efficient deep neural networks for on-device speech recognition on the edge. An attention condenser is a self-attention mechanism that learns and produces a condensed embedding characterizing joint local and cross-channel activation relationships, and performs selective attention accordingly. To illustrate its efficacy, we introduce TinySpeech, low-precision deep neural networks comprised largely of attention condensers tailored for on-device speech recognition using a machine-driven design exploration strategy, with one tailored specifically for microcontroller operation constraints. Experimental results on the Google Speech Commands benchmark dataset for limited-vocabulary speech recognition showed that TinySpeech networks achieved significantly lower architectural complexity (as much as $507\times$ fewer parameters), lower computational complexity (as much as $48\times$ fewer multiply-add operations), and lower storage requirements (as much as $2028\times$ lower weight memory requirements) when compared to previous work. These results not only demonstrate the efficacy of attention condensers for building highly efficient networks for on-device speech recognition, but also illuminate their potential for accelerating deep learning on the edge and empowering TinyML applications.
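To make the described mechanism more concrete, below is a minimal, hedged sketch of an attention-condenser-style block in PyTorch. It follows only the high-level description in the abstract (condense activations, learn a compact embedding over local and cross-channel relationships, expand back to attention values, and apply selective attention); the specific layer choices, kernel sizes, class name `AttentionCondenserSketch`, and the final attention formula are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn


class AttentionCondenserSketch(nn.Module):
    """Illustrative attention-condenser-style block.

    Assumed structure (not the paper's exact architecture):
    condensation -> condensed embedding -> expansion -> selective attention.
    """

    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        # Condensation: reduce spatial resolution to a compact representation.
        self.condense = nn.MaxPool2d(kernel_size=reduction, stride=reduction)
        # Embedding: small grouped convolution over the condensed activations,
        # cheaply modeling joint local and cross-channel relationships.
        self.embed = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=2),
            nn.ReLU(inplace=True),
        )
        # Expansion: bring the condensed embedding back to input resolution
        # and project it to per-activation attention values.
        self.expand = nn.Upsample(scale_factor=reduction, mode="nearest")
        self.to_attention = nn.Conv2d(channels, channels, kernel_size=1)
        # Learnable per-channel scale used in the selective-attention step
        # (assumed form).
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        q = self.embed(self.condense(v))                        # condensed embedding
        a = torch.sigmoid(self.to_attention(self.expand(q)))    # attention values
        return v * a * self.scale                               # selective attention


if __name__ == "__main__":
    # Dummy mel-spectrogram-shaped input: (batch, channels, freq, time).
    x = torch.randn(1, 8, 40, 40)
    block = AttentionCondenserSketch(channels=8)
    print(block(x).shape)  # torch.Size([1, 8, 40, 40])
```

The intent of such a block, as motivated in the abstract, is that attending via a condensed embedding is far cheaper in parameters and multiply-add operations than full self-attention at the original resolution, which is what enables the reported footprint reductions on edge hardware.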