Paper Title

Alpha-NML Universal Predictors

Paper Authors

Marco Bondaschi, Michael Gastpar

Paper Abstract

Inspired by the connection between classical regret measures employed in universal prediction and Rényi divergence, we introduce a new class of universal predictors that depend on a real parameter $\alpha \geq 1$. This class interpolates two well-known predictors, the mixture estimators, that include the Laplace and the Krichevsky-Trofimov predictors, and the Normalized Maximum Likelihood (NML) estimator. We point out some advantages of this new class of predictors and study its benefits from two complementary viewpoints: (1) we prove its optimality when the maximal Rényi divergence is considered as a regret measure, which can be interpreted operationally as a middle ground between the standard average and worst-case regret measures; (2) we discuss how it can be employed when NML is not a viable option, as an alternative to other predictors such as Luckiness NML. Finally, we apply the $\alpha$-NML predictor to the class of discrete memoryless sources (DMS), where we derive simple formulas to compute the predictor and analyze its asymptotic performance in terms of worst-case regret.
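
To make the interpolation concrete, here is a minimal numerical sketch for the Bernoulli (binary DMS) class. It assumes the $\alpha$-NML assignment takes the form $q_\alpha(x^n) \propto \big(\int w(\theta)\, p_\theta(x^n)^\alpha \, d\theta\big)^{1/\alpha}$ with a uniform prior $w$; this form, the function names, and the parameter choices are illustrative assumptions for the sketch, not formulas quoted from the paper. Under that assumption, $\alpha = 1$ reduces to the Laplace (uniform-prior mixture) predictor and large $\alpha$ numerically approaches NML.

# Illustrative sketch (assumption-based, not the paper's exact formulas):
# an alpha-NML-style predictor for the Bernoulli class, scoring a sequence
# by ( \int_0^1 (t^k (1-t)^(n-k))^alpha dt )^(1/alpha) under a uniform prior.
from math import comb

import numpy as np
from scipy.integrate import quad

def alpha_score(k: int, n: int, alpha: float) -> float:
    """Unnormalized score of one binary sequence with k ones out of n."""
    integrand = lambda t: (t**k * (1.0 - t)**(n - k))**alpha
    val, _ = quad(integrand, 0.0, 1.0)
    return val**(1.0 / alpha)

def alpha_nml_bernoulli(n: int, alpha: float) -> np.ndarray:
    """Probability assigned to a length-n sequence containing k ones, for k = 0..n."""
    scores = np.array([alpha_score(k, n, alpha) for k in range(n + 1)])
    normalizer = sum(comb(n, k) * s for k, s in enumerate(scores))
    return scores / normalizer

if __name__ == "__main__":
    n = 5
    for a in (1.0, 2.0, 50.0):
        print(f"alpha = {a:5.1f}:", np.round(alpha_nml_bernoulli(n, a), 4))
    # alpha = 1 matches the uniform-prior (Laplace) mixture assignment
    # k!(n-k)!/(n+1)!, while alpha = 50 is numerically close to NML, whose
    # unnormalized score for k ones is max_t t^k (1-t)^(n-k) = (k/n)^k ((n-k)/n)^(n-k).

The normalization over all $2^n$ sequences is done by grouping sequences with the same number of ones, which is what makes the computation tractable for this toy class.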
