Paper Title

Computationally and Statistically Efficient Truncated Regression

Paper Authors

Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis

Paper Abstract

We provide a computationally and statistically efficient estimator for the classical problem of truncated linear regression, where the dependent variable $y = w^T x + ε$ and its corresponding vector of covariates $x \in R^k$ are only revealed if the dependent variable falls in some subset $S \subseteq R$; otherwise the existence of the pair $(x, y)$ is hidden. This problem has remained a challenge since the early works of [Tobin 1958, Amemiya 1973, Hausman and Wise 1977], its applications are abundant, and its history dates back even further to the work of Galton, Pearson, Lee, and Fisher. While consistent estimators of the regression coefficients have been identified, the error rates are not well understood, especially in high dimensions. Under a thickness assumption about the covariance matrix of the covariates in the revealed sample, we provide a computationally efficient estimator for the coefficient vector $w$ from $n$ revealed samples that attains $l_2$ error $\tilde{O}(\sqrt{k/n})$. Our estimator uses Projected Stochastic Gradient Descent (PSGD) without replacement on the negative log-likelihood of the truncated sample. For statistically efficient estimation, we only need oracle access to the set $S$. To achieve computational efficiency, we additionally assume that $S$ is a union of a finite number of intervals, though it can otherwise be complicated. PSGD without replacement must be restricted to an appropriately defined convex cone to guarantee that the negative log-likelihood is strongly convex, which in turn is established using matrix concentration for variables with sub-exponential tails. We perform experiments on simulated data to illustrate the accuracy of our estimator. As a corollary, we show that SGD learns the parameters of single-layer neural networks with noisy activation functions.
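
The abstract describes the estimator only at a high level. As a rough illustration of the PSGD-without-replacement idea, and not the authors' actual algorithm, the sketch below assumes unit noise variance, a fixed step size, and a simple $l_2$-ball projection around an ordinary-least-squares initializer as a stand-in for the paper's convex cone; the names `psgd_truncated_regression` and `in_S` are hypothetical. The per-sample gradient of the truncated negative log-likelihood, $x_i\,(\mathbb{E}[z \mid z \in S,\ z \sim N(w^T x_i, 1)] - y_i)$, is estimated with a single rejection sample drawn using only membership-oracle access to $S$.

```python
import numpy as np

def psgd_truncated_regression(X, y, in_S, n_epochs=10, step=0.1,
                              radius=5.0, rng=None):
    """Sketch of projected SGD without replacement for truncated linear
    regression with unit noise variance.

    X      : (n, k) covariates of the revealed samples
    y      : (n,) revealed responses, each lying in the survival set S
    in_S   : membership oracle, in_S(z) -> bool (the only access to S)
    radius : radius of a simple l2 projection ball around the OLS
             initializer (simplified placeholder for the paper's cone)
    """
    rng = np.random.default_rng(rng)
    n, k = X.shape

    # OLS on the truncated sample serves as initializer and projection center.
    w0, *_ = np.linalg.lstsq(X, y, rcond=None)
    w = w0.copy()

    for _ in range(n_epochs):
        for i in rng.permutation(n):          # one without-replacement pass
            x_i, y_i = X[i], y[i]
            mu = x_i @ w

            # Rejection-sample z ~ N(mu, 1) conditioned on z in S,
            # using only the membership oracle.
            for _ in range(1000):
                z = rng.normal(mu, 1.0)
                if in_S(z):
                    break
            else:
                continue  # survival mass too small at current w; skip

            # Single-sample estimate of the per-sample negative
            # log-likelihood gradient: (E[z | z in S] - y_i) * x_i.
            grad = (z - y_i) * x_i
            w -= step * grad

            # Project back onto the l2 ball around the OLS solution.
            diff = w - w0
            norm = np.linalg.norm(diff)
            if norm > radius:
                w = w0 + radius * diff / norm

    return w
```

For example, with left truncation at zero one would pass `in_S = lambda z: z > 0`. The step size and projection radius here are arbitrary placeholders; the paper prescribes a specific convex cone and step schedule to obtain the stated $\tilde{O}(\sqrt{k/n})$ error guarantee.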
