Paper Title
Privacy-Preserving Object Detection & Localization Using Distributed Machine Learning: A Case Study of Infant Eyeblink Conditioning
Paper Authors
Paper Abstract
Distributed machine learning is becoming a popular model-training method due to privacy, computational scalability, and bandwidth constraints. In this work, we explore scalable distributed-training versions of two algorithms commonly used in object detection. A novel distributed training algorithm using Mean Weight Matrix Aggregation (MWMA) is proposed for Linear Support Vector Machine (L-SVM) object detection based on Histogram of Oriented Gradients (HOG). In addition, a novel Weighted Bin Aggregation (WBA) algorithm is proposed for distributed training of Ensemble of Regression Trees (ERT) landmark localization. Neither algorithm restricts the location of model aggregation, and both allow custom architectures for model distribution. For this work, a Pool-Based Local Training and Aggregation (PBLTA) architecture for both algorithms is explored. The application of both algorithms in the medical field is examined using a paradigm from the fields of psychology and neuroscience, eyeblink conditioning with infants, where models need to be trained on facial images while protecting participant privacy. Using distributed learning, models can be trained without sending image data to other nodes. The custom software has been made available for public use on GitHub: https://github.com/SLWZwaard/DMT. Results show that aggregating models for the HOG algorithm using MWMA not only preserves model accuracy but also allows for distributed learning with an accuracy increase of 0.9% compared with traditional learning. Furthermore, WBA allows for ERT model aggregation with an accuracy increase of 8% when compared to single-node models.
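As the name Mean Weight Matrix Aggregation suggests, the core idea of combining per-node L-SVM detectors can be illustrated as an element-wise mean over the nodes' weight matrices. The sketch below is a minimal illustration of that idea, not the paper's implementation (see the linked GitHub repository for the actual software); the function name `mwma_aggregate` and the toy matrix shapes are hypothetical.

```python
import numpy as np

def mwma_aggregate(weight_matrices):
    """Combine per-node L-SVM weight matrices into one global model
    by taking the element-wise mean across nodes (illustrative sketch)."""
    return np.mean(np.stack(weight_matrices), axis=0)

# Toy example: three nodes, each with a locally trained weight matrix
# shaped like a flattened HOG feature template (shape chosen arbitrarily).
node_weights = [np.full((4, 4), float(i)) for i in (1, 2, 3)]
global_w = mwma_aggregate(node_weights)
# Every entry of global_w is the mean of 1.0, 2.0, and 3.0, i.e. 2.0.
```

Because only the weight matrices leave each node, the training images themselves never need to be shared, which is the privacy property the abstract emphasizes.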