Title

How linear reinforcement affects Donsker's Theorem for empirical processes

Authors

Bertoin, Jean

Abstract

A reinforcement algorithm introduced by H.A. Simon \cite{Simon} produces a sequence $\hat U_1, \hat U_2, \ldots$ of uniform random variables with memory as follows. At each step, with a fixed probability $p\in(0,1)$, $\hat U_{n+1}$ is sampled uniformly from $\hat U_1, \ldots, \hat U_n$, and with complementary probability $1-p$, $\hat U_{n+1}$ is a new independent uniform variable. The Glivenko-Cantelli theorem remains valid for the reinforced empirical measure, but the Donsker theorem does not. Specifically, we show that the sequence of empirical processes converges in law to a Brownian bridge, only up to a constant factor, when $p<1/2$, and that a further rescaling is needed when $p>1/2$, in which case the limit is a bridge with exchangeable increments and discontinuous paths. This is related to earlier limit theorems for correlated Bernoulli processes, the so-called elephant random walk, and more generally step-reinforced random walks.
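The reinforcement dynamics are simple enough to simulate directly. Below is a minimal sketch (the function names, NumPy usage, and parameter choices are ours, not from the paper) that generates the reinforced sequence $\hat U_1, \ldots, \hat U_n$ and evaluates the sup-norm of the classically normalized empirical process, illustrating the dichotomy at $p = 1/2$ described in the abstract.

```python
import numpy as np

def reinforced_uniforms(n, p, seed=None):
    """Simon's reinforcement scheme: with probability p, the next value
    repeats one of the previous values chosen uniformly at random;
    otherwise it is a fresh independent Uniform(0,1) draw."""
    rng = np.random.default_rng(seed)
    u = np.empty(n)
    u[0] = rng.uniform()              # the first variable is always fresh
    for k in range(1, n):
        if rng.uniform() < p:         # reinforcement: copy a past value
            u[k] = u[rng.integers(k)]
        else:                         # innovation: new independent uniform
            u[k] = rng.uniform()
    return u

def sup_empirical_process(u, grid):
    """Sup over the grid of |sqrt(n) * (F_n(x) - x)|, i.e. the classical
    Donsker normalization of the empirical process."""
    n = len(u)
    fn = np.searchsorted(np.sort(u), grid, side="right") / n
    return np.abs(np.sqrt(n) * (fn - grid)).max()

grid = np.linspace(0.0, 1.0, 201)
for p in (0.3, 0.7):
    sups = [sup_empirical_process(reinforced_uniforms(10_000, p, seed), grid)
            for seed in range(20)]
    print(f"p = {p}: mean sup-norm over 20 runs = {np.mean(sups):.2f}")
```

Under the classical $\sqrt{n}$ normalization, the sup-norm should stay of order one for $p < 1/2$ (Brownian bridge limit up to a constant factor) but grow with $n$ for $p > 1/2$, reflecting the further rescaling the paper establishes in that regime.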
