Title
Refined Convergence Rates for Maximum Likelihood Estimation under Finite Mixture Models
Authors
Abstract
We revisit the classical problem of deriving convergence rates for the maximum likelihood estimator (MLE) in finite mixture models. The Wasserstein distance has become a standard loss function for the analysis of parameter estimation in these models, due in part to its ability to circumvent label switching and to accurately characterize the behaviour of fitted mixture components with vanishing weights. However, the Wasserstein distance is only able to capture the worst-case convergence rate among the remaining fitted mixture components. We demonstrate that when the log-likelihood function is penalized to discourage vanishing mixing weights, stronger loss functions can be derived to resolve this shortcoming of the Wasserstein distance. These new loss functions accurately capture the heterogeneity in convergence rates of fitted mixture components, and we use them to sharpen existing pointwise and uniform convergence rates in various classes of mixture models. In particular, these results imply that a subset of the components of the penalized MLE typically converge significantly faster than could have been anticipated from past work. We further show that some of these conclusions extend to the traditional MLE. Our theoretical findings are supported by a simulation study illustrating these improved convergence rates.
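The Wasserstein loss the abstract refers to is a distance between mixing measures, i.e. the discrete measures a mixture places on its component parameters. For scalar atoms, the W1 distance reduces to the integral of the difference of the two CDFs. The sketch below is a minimal illustration of this loss under assumed, illustrative weights and atoms (the function name `w1_discrete` and all numbers are hypothetical, not from the paper); note how a fitted component with a misplaced atom dominates the distance, which is the worst-case behaviour the paper's new loss functions are designed to refine.

```python
def w1_discrete(atoms_p, weights_p, atoms_q, weights_q):
    """W1 distance between two discrete measures on the real line.

    Computed as the integral of |F_p - F_q| over the merged atom grid,
    which equals the optimal-transport cost for the absolute-value cost
    function in one dimension.
    """
    points = sorted(set(atoms_p) | set(atoms_q))

    def cdf(x, atoms, weights):
        # CDF of a discrete measure: total weight of atoms at or below x.
        return sum(w for a, w in zip(atoms, weights) if a <= x)

    total = 0.0
    for left, right in zip(points, points[1:]):
        # |F_p - F_q| is constant on each interval between consecutive atoms.
        total += abs(cdf(left, atoms_p, weights_p)
                     - cdf(left, atoms_q, weights_q)) * (right - left)
    return total


# Illustrative mixing measures: atoms play the role of component means.
true_means, true_weights = [-2.0, 0.0, 3.0], [0.5, 0.3, 0.2]
# A fitted mixture whose third component sits far from any true atom.
fit_means, fit_weights = [-1.9, 0.1, 6.0], [0.49, 0.41, 0.10]

print(w1_discrete(true_means, true_weights, fit_means, fit_weights))
```

The first two fitted components are close to true atoms and contribute little; the third, misplaced component contributes most of the distance, so the single W1 number obscures how fast the well-behaved components converge.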