Paper Title

EQ-Net: A Unified Deep Learning Framework for Log-Likelihood Ratio Estimation and Quantization

Paper Authors

Marius Arvinte, Ahmed H. Tewfik, Sriram Vishwanath

Paper Abstract

In this work, we introduce EQ-Net: the first holistic framework that solves both the tasks of log-likelihood ratio (LLR) estimation and quantization using a data-driven method. We motivate our approach with theoretical insights on two practical estimation algorithms at the ends of the complexity spectrum and reveal a connection between the complexity of an algorithm and the information bottleneck method: simpler algorithms admit smaller bottlenecks when representing their solution. This motivates us to propose a two-stage algorithm that uses LLR compression as a pretext task for estimation and is focused on low-latency, high-performance implementations via deep neural networks. We carry out extensive experimental evaluation and demonstrate that our single architecture achieves state-of-the-art results on both tasks when compared to previous methods, with gains in quantization efficiency as high as $20\%$ and reduced estimation latency by up to $60\%$ when measured on general purpose and graphical processing units (GPU). In particular, our approach reduces the GPU inference latency by more than two times in several multiple-input multiple-output (MIMO) configurations. Finally, we demonstrate that our scheme is robust to distributional shifts and retains a significant part of its performance when evaluated on 5G channel models, as well as channel estimation errors.
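
The two-stage scheme the abstract describes (LLR compression as a pretext task, then estimation mapped into the learned latent space) can be sketched roughly as below. This is a minimal PyTorch illustration under stated assumptions, not the authors' implementation: the dimensions (`LLR_DIM`, `LATENT_DIM`, `OBS_DIM`), layer sizes, and training details are all hypothetical.

```python
# Minimal sketch of the two-stage scheme described in the abstract.
# All names, dimensions, and architectural choices are illustrative
# assumptions, not the paper's implementation.
import torch
import torch.nn as nn

LLR_DIM = 8      # assumed: number of LLRs per received symbol (e.g., 256-QAM)
LATENT_DIM = 3   # assumed: size of the compressed code (the "bottleneck")
OBS_DIM = 4      # assumed: real-valued features of one channel observation


class LLRAutoencoder(nn.Module):
    """Stage 1 (pretext task): compress LLR vectors through a small bottleneck."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(LLR_DIM, 64), nn.ReLU(),
            nn.Linear(64, LATENT_DIM), nn.Tanh(),  # bounded latents ease quantization
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, LLR_DIM),
        )

    def forward(self, llr):
        return self.decoder(self.encoder(llr))


class LLREstimator(nn.Module):
    """Stage 2: map channel observations straight into the learned latent space
    and reuse the frozen stage-1 decoder, so both tasks share one architecture."""

    def __init__(self, decoder):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM), nn.Tanh(),
        )
        self.decoder = decoder
        for p in self.decoder.parameters():
            p.requires_grad = False  # the pretext task fixes the output mapping

    def forward(self, obs):
        return self.decoder(self.net(obs))


# Stage 1: train the autoencoder on reference LLRs (random placeholders here;
# in practice these would come from an exact demapper).
ae = LLRAutoencoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
llrs = torch.randn(256, LLR_DIM)
opt.zero_grad()
nn.functional.mse_loss(ae(llrs), llrs).backward()
opt.step()

# Stage 2: train the estimator to reproduce the same LLRs from observations.
est = LLREstimator(ae.decoder)
opt2 = torch.optim.Adam(est.net.parameters(), lr=1e-3)
obs = torch.randn(256, OBS_DIM)
opt2.zero_grad()
nn.functional.mse_loss(est(obs), llrs).backward()
opt2.step()
```

In this reading, quantization only needs to store the small latent code from stage 1, while estimation runs the stage-2 network plus the shared decoder, which is consistent with the abstract's claim of a single architecture serving both tasks.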
