Paper Title
Communication-Efficient Multimodal Split Learning for mmWave Received Power Prediction
Paper Authors
Paper Abstract
The goal of this study is to improve the accuracy of millimeter wave received power prediction by utilizing camera images and radio frequency (RF) signals, while gathering image inputs in a communication-efficient and privacy-preserving manner. To this end, we propose a distributed multimodal machine learning (ML) framework, coined multimodal split learning (MultSL), in which a large neural network (NN) is split into two wirelessly connected segments. The upper segment combines images and received powers for future received power prediction, whereas the lower segment extracts features from camera images and compresses its output to reduce communication costs and privacy leakage. Experimental evaluation corroborates that MultSL achieves higher accuracy than the baselines utilizing either images or RF signals. Remarkably, without compromising accuracy, compressing the lower segment output by 16x yields 16x lower communication latency and 2.8% less privacy leakage compared to the case without compression.
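The split architecture described in the abstract can be sketched as follows. This is a minimal illustrative sketch using NumPy, not the paper's actual model: the layer sizes, the random weights, the 8-step power history, and applying the 16x compression to a hypothetical 256-dimensional feature vector are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# --- Lower segment (camera-side device) ---
# Extracts features from an image and compresses them 16x
# (256 -> 16 dims) before wireless transmission to the upper segment.
W_feat = rng.standard_normal((256, 64 * 64)) * 0.01  # hypothetical image encoder
W_comp = rng.standard_normal((16, 256)) * 0.01       # 16x compression layer

def lower_segment(image):
    features = relu(W_feat @ image.ravel())
    return W_comp @ features  # compact vector sent over the wireless link

# --- Upper segment (server side) ---
# Fuses the compressed image features with past RF received powers
# to predict a future received power value.
W_fuse = rng.standard_normal((32, 16 + 8)) * 0.01
w_out = rng.standard_normal(32)

def upper_segment(compressed_features, past_powers):
    fused = relu(W_fuse @ np.concatenate([compressed_features, past_powers]))
    return float(w_out @ fused)  # scalar received-power prediction

# Example forward pass
image = rng.random((64, 64))
past_powers = rng.random(8)      # hypothetical history of 8 power samples
z = lower_segment(image)
pred = upper_segment(z, past_powers)
print(z.shape)
```

The point of the split is visible in `z`: only the 16-dimensional compressed vector crosses the wireless link instead of the 256-dimensional feature vector, which is the 16x reduction in transmitted payload that the abstract credits for the lower communication latency and reduced privacy leakage.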