Paper Title

Towards optimal sampling for learning sparse approximation in high dimensions

Paper Authors

Ben Adcock, Juan M. Cardenas, Nick Dexter, Sebastian Moraga

Paper Abstract

In this chapter, we discuss recent work on learning sparse approximations to high-dimensional functions from data, where the target functions may be scalar-, vector- or even Hilbert space-valued. Our main objective is to study how the sampling strategy affects the sample complexity -- that is, the number of samples that suffice for accurate and stable recovery -- and to use this insight to obtain optimal or near-optimal sampling procedures. We consider two settings. First, the case where a target sparse representation is known; here we present a near-complete answer based on drawing independent random samples from carefully designed probability measures. Second, the more challenging scenario where such a representation is unknown. In this case, while not giving a full answer, we describe a general construction of sampling measures that improves over standard Monte Carlo sampling. We present examples using algebraic and trigonometric polynomials, and for the former, we also introduce a new procedure for function approximation on irregular (i.e., nontensorial) domains. The effectiveness of this procedure is shown through numerical examples. Finally, we discuss a number of structured sparsity models, and how they may lead to better approximations.
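To illustrate the general idea of improving over standard Monte Carlo sampling, the sketch below compares uniform random sampling with sampling from the Chebyshev (arcsine) measure for least-squares polynomial approximation on [-1, 1]. This is a minimal one-dimensional illustration of the broader principle, not the paper's actual construction; the target function, sample budget, and polynomial degree are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def chebyshev_samples(m, rng):
    # Draw m points in [-1, 1] from the Chebyshev (arcsine) measure,
    # whose density is proportional to 1/sqrt(1 - x^2). Sampling is
    # easy via the inverse transform: x = cos(pi * u), u ~ U[0, 1].
    return np.cos(np.pi * rng.random(m))

def ls_poly_fit(x, y, degree):
    # Least-squares fit in the Legendre basis (any polynomial basis
    # would do for this illustration).
    V = np.polynomial.legendre.legvander(x, degree)
    coef, *_ = np.linalg.lstsq(V, y, rcond=None)
    return coef

# Hypothetical target function (Runge function) and sample budget.
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
m, degree = 40, 20

x_cheb = chebyshev_samples(m, rng)
x_unif = rng.uniform(-1.0, 1.0, m)

# Compare the resulting approximation errors on a fine grid. Uniform
# samples tend to under-resolve the endpoints, where polynomial
# least squares is least stable; Chebyshev samples cluster there.
grid = np.linspace(-1.0, 1.0, 1000)
errors = {}
for name, x in [("uniform", x_unif), ("chebyshev", x_cheb)]:
    c = ls_poly_fit(x, f(x), degree)
    approx = np.polynomial.legendre.legval(grid, c)
    errors[name] = np.max(np.abs(approx - f(grid)))
    print(f"{name}: max error {errors[name]:.2e}")
```

The choice of sampling measure changes the conditioning of the least-squares system, which is precisely the mechanism through which sampling strategy affects sample complexity.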
