Paper Title
MDLdroid: a ChainSGD-reduce Approach to Mobile Deep Learning for Personal Mobile Sensing
Paper Authors
Paper Abstract
Personal mobile sensing is fast permeating our daily lives to enable activity monitoring, healthcare and rehabilitation. Combined with deep learning, these applications have achieved significant success in recent years. Different from conventional cloud-based paradigms, running deep learning on devices offers several advantages, including data privacy preservation and low-latency response for both model inference and update. Since data collection is costly in practice, Google's Federated Learning offers not only complete data privacy but also better model robustness based on data from multiple users. However, personal mobile sensing applications are mostly user-specific and highly affected by the environment. As a result, continuous local changes may seriously degrade the performance of a global model generated by Federated Learning. In addition, deploying Federated Learning on a local server, e.g., an edge server, may quickly hit a bottleneck due to resource constraints and is exposed to serious failures under attack. To push deep learning onto devices, we present MDLdroid, a novel decentralized mobile deep learning framework that enables resource-aware on-device collaborative learning for personal mobile sensing applications. To address resource limitations, we propose a ChainSGD-reduce approach, which includes a novel chain-directed Synchronous Stochastic Gradient Descent (SGD) algorithm that effectively reduces overhead among multiple devices. We also design an agent-based multi-goal reinforcement learning mechanism to balance resources in a fair and efficient manner. Our evaluations show that model training on off-the-shelf mobile devices runs 2x to 3.5x faster than single-device training, and 1.5x faster than the master-slave approach.
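The chain-directed synchronous SGD idea described in the abstract can be pictured with a minimal sketch: each device computes a local gradient, the partial gradient sum is handed along a chain of devices rather than collected by a single master, and the averaged update flows back along the same chain. The sketch below is not MDLdroid's implementation; the device count K, parameter dimension DIM, learning rate LR, and the toy local_gradient function are all illustrative assumptions.

```python
# Minimal sketch of chain-style synchronous SGD aggregation (NumPy only).
# Not MDLdroid's implementation: K, DIM, LR and local_gradient are assumptions.
import numpy as np

K = 4      # number of devices in the chain (assumed)
DIM = 10   # model parameter dimension (toy example)
LR = 0.1   # learning rate (assumed)

rng = np.random.default_rng(0)
weights = rng.normal(size=DIM)  # shared model replicated on every device

def local_gradient(device_id, w):
    """Stand-in for a device's on-device back-propagation step."""
    # Toy quadratic loss per device: 0.5 * ||w - target||^2
    target = np.full(DIM, float(device_id))
    return w - target

# Chain-directed aggregation: each device adds its gradient to the running
# partial sum and forwards it to the next device in the chain, so no single
# master has to receive K separate gradient messages.
partial_sum = np.zeros(DIM)
for device_id in range(K):
    partial_sum += local_gradient(device_id, weights)

avg_grad = partial_sum / K     # tail of the chain averages the gradients,
weights -= LR * avg_grad       # applies the synchronous update, and the new
                               # weights propagate back along the chain.
print("updated weights (first 3 dims):", weights[:3])
```

Compared with a master-slave layout, each device in such a chain exchanges data with at most one neighbour per step, which is the intuition behind the reduced per-device communication overhead claimed in the abstract.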