Paper Title

Generic Error Bounds for the Generalized Lasso with Sub-Exponential Data

Paper Authors

Martin Genzel, Christian Kipp

Paper Abstract

This work performs a non-asymptotic analysis of the generalized Lasso under the assumption of sub-exponential data. Our main results continue recent research on the benchmark case of (sub-)Gaussian sample distributions and thereby explore what conclusions are still valid when going beyond. While many statistical features remain unaffected (e.g., consistency and error decay rates), the key difference becomes manifested in how the complexity of the hypothesis set is measured. It turns out that the estimation error can be controlled by means of two complexity parameters that arise naturally from a generic-chaining-based proof strategy. The output model can be non-realizable, while the only requirement for the input vector is a generic concentration inequality of Bernstein-type, which can be implemented for a variety of sub-exponential distributions. This abstract approach allows us to reproduce, unify, and extend previously known guarantees for the generalized Lasso. In particular, we present applications to semi-parametric output models and phase retrieval via the lifted Lasso. Moreover, our findings are discussed in the context of sparse recovery and high-dimensional estimation problems.
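For orientation only, and not quoted from the paper itself: in this line of work (Plan–Vershynin-type analyses and their extensions), the generalized Lasso, also called the K-Lasso, typically refers to a constrained least-squares estimator over a hypothesis set. A minimal sketch of the setup, with the notation (K, a_i, y_i, m) chosen here for illustration rather than taken from the paper:

\[
\hat{x} \;\in\; \operatorname*{argmin}_{x \in K} \; \frac{1}{m} \sum_{i=1}^{m} \bigl( y_i - \langle a_i, x \rangle \bigr)^2,
\]

where \(K \subseteq \mathbb{R}^n\) is the hypothesis set, \(a_1, \dots, a_m\) are the (sub-exponential) input vectors, and \(y_1, \dots, y_m\) are observations that need not follow any exact linear model (the non-realizable case mentioned in the abstract). The "generic concentration inequality of Bernstein-type" can be thought of along the lines of the classical Bernstein inequality for sub-exponential random variables: for independent, mean-zero \(X_1, \dots, X_m\) with sub-exponential norms \(\|X_i\|_{\psi_1} \le L\) and an absolute constant \(c > 0\),

\[
\mathbb{P}\Bigl( \Bigl| \tfrac{1}{m} \textstyle\sum_{i=1}^{m} X_i \Bigr| \ge t \Bigr) \;\le\; 2 \exp\Bigl( -c\, m \min\Bigl( \tfrac{t^2}{L^2}, \tfrac{t}{L} \Bigr) \Bigr), \qquad t \ge 0,
\]

whose mixed sub-Gaussian/sub-exponential tail is what separates this regime from the purely (sub-)Gaussian benchmark case discussed above.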
