#1
Convergence in probability vs. almost-sure convergence challenge
A probability theory challenge, for those who know their set-theoretic probability foundations.
If a sequence of random variables converges almost surely, then it also converges in probability. The converse is not true. The challenge: define a sequence of RVs that converges in probability but does not converge almost surely.
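For reference, the two modes of convergence at stake, in standard notation (a summary added here, not part of the original post):

```latex
% X_n -> X in probability: the probability of a deviation of any
% fixed size eps vanishes in the limit.
\lim_{n\to\infty} \Pr\!\bigl(|X_n - X| > \varepsilon\bigr) = 0
\quad \text{for every } \varepsilon > 0.

% X_n -> X almost surely: the set of sample points omega on which
% the numerical sequence X_n(omega) converges to X(omega) has
% probability 1.
\Pr\!\bigl(\{\omega : \lim_{n\to\infty} X_n(\omega) = X(\omega)\}\bigr) = 1.
```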
#2
Re: Convergence in probability vs. almost-sure convergence challenge
I have no idea what you're talking about.
#3
Re: Convergence in probability vs. almost-sure convergence challenge
Y_1 = 1_[0,1]
Y_2 = 1_[0,1/2], Y_3 = 1_[1/2,1]
Y_4 = 1_[0,1/4], Y_5 = 1_[1/4,2/4], Y_6 = 1_[2/4,3/4], Y_7 = 1_[3/4,1]
...
Obviously Y_k converges in probability but not P-almost surely, since Y_k(\omega) won't converge for any \omega.
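The in-probability half of the claim can be checked numerically. A minimal sketch, assuming the indexing k = 2^n + m (0 <= m < 2^n) that matches the list above; the helper name `bar_width` is my own:

```python
def bar_width(k):
    # With the indexing Y_{2**n + m} = 1 on [m/2**n, (m+1)/2**n]
    # (0 <= m < 2**n), the "bar" carrying Y_k has width 2**-n.
    n = k.bit_length() - 1          # largest n with 2**n <= k
    return 2.0 ** -n

# For 0 < eps < 1 the event {|Y_k - 0| > eps} is exactly the bar,
# so P(|Y_k - 0| > eps) = bar_width(k) -> 0: convergence in
# probability to the constant 0.
for k in (1, 3, 10, 100, 10_000):
    print(k, bar_width(k))
```

The widths halve at every power of two, which is the whole content of "the intervals get thinner as k grows."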
#4
Re: Convergence in probability vs. almost-sure convergence challenge
[ QUOTE ]
Y_1 = 1_[0,1], Y_2 = 1_[0,1/2], Y_3 = 1_[1/2,1], Y_4 = 1_[0,1/4], Y_5 = 1_[1/4,2/4], Y_6 = 1_[2/4,3/4], Y_7 = 1_[3/4,1] ... Obviously Y_k converges in probability but not P-almost surely, since Y_k(\omega) won't converge for any \omega.
[/ QUOTE ]
Sixhigh, I'm not sure I understand your notation (UBB has no math mode, heh). What I read for, say, Y_1 is "1 subscript closed-interval 0 to 1". Does this mean that Y_1 is a uniform RV with sample space [0,1]? If so, I don't see how the sequence converges in probability. I'm sure I'm misunderstanding here, can you please clarify? Thanks a bunch.
#5
Re: Convergence in probability vs. almost-sure convergence challenge
Every Y_k is defined on the probability space \Omega = [0,1] with the uniform measure.
1_[x,y] is the function that is 1 on the interval [x,y] and zero everywhere else. For Y_k those intervals get thinner as k grows, thus P(|Y_k - 0| > \epsilon) --> 0.

We can even define Y_k for all k: Y_{2^n + m} = 1_[m 2^{-n}, (m+1) 2^{-n}] (with 0 <= m < 2^n). It's basically a series of wandering bars that get thinner every 2^n steps.
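The explicit formula can be sanity-checked: at each level n the 2^n bars tile [0,1] exactly, so every sample point is covered once per level while each individual bar shrinks. A sketch (the helper name `bar` is my own; I use half-open endpoints so the tiling is exact):

```python
from fractions import Fraction

def bar(n, m):
    """Bar for Y_{2**n + m}: the interval from m/2**n to (m+1)/2**n."""
    return Fraction(m, 2**n), Fraction(m + 1, 2**n)

# The 2**n bars at level n partition [0, 1]: they start at 0, end at 1,
# and each bar's right end is the next bar's left end. Each bar has
# width 2**-n, which -> 0 as n grows.
for n in range(5):
    ends = [bar(n, m) for m in range(2**n)]
    assert ends[0][0] == 0 and ends[-1][1] == 1
    assert all(ends[i][1] == ends[i + 1][0] for i in range(len(ends) - 1))
    print(n, float(ends[0][1] - ends[0][0]))   # width of each bar at level n
```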
#6
Re: Convergence in probability vs. almost-sure convergence challenge
[ QUOTE ]
Every Y_k is defined on the probability space [0,1] with the uniform measure. 1_[x,y] is the function that is 1 on the interval [x,y] and zero everywhere else. For Y_k those intervals get thinner as k grows, thus P(|Y_k - 0| > \epsilon) --> 0. We can even define Y_k for all k: Y_{2^n + m} = 1_[m 2^{-n}, (m+1) 2^{-n}] (with 0 <= m < 2^n). It's basically a series of wandering bars that get thinner every 2^n steps.
[/ QUOTE ]
I believe I understand, but let me try to paraphrase to make sure:
- Convergence in probability means that the measure of Y_k converges.
- Convergence almost surely means that the expectation of Y_k converges.
Correct? Thanks for the help.
#7
Re: Convergence in probability vs. almost-sure convergence challenge
[ QUOTE ]
[ QUOTE ] Every Y_k is defined on the probability space [0,1] with the uniform measure. 1_[x,y] is the function that is 1 on the interval [x,y] and zero everywhere else. For Y_k those intervals get thinner as k grows, thus P(|Y_k - 0| > \epsilon) --> 0. We can even define Y_k for all k: Y_{2^n + m} = 1_[m 2^{-n}, (m+1) 2^{-n}] (with 0 <= m < 2^n). It's basically a series of wandering bars that get thinner every 2^n steps. [/ QUOTE ] I believe I understand, but let me try to paraphrase to make sure: - Convergence in probability means that the measure of Y_k converges. - Convergence almost surely means that the expectation of Y_k converges. Correct? Thanks for the help.
[/ QUOTE ]
Not quite.

Convergence in probability (here, to the constant 0) means that for every \epsilon > 0, P(|Y_k - 0| > \epsilon) --> 0. In the Y_k example \Omega is [0,1], and the distribution of each Y_k sits on the two-point set {0,1}. We have P(Y_k = 1) -> 0 and P(Y_k = 0) -> 1 as the bars get thinner and thinner, so Y_k converges to 0 in probability.

Convergence a.s. means that for almost all \omega \in \Omega, Y_k(\omega) converges (thought of as a plain old sequence of real numbers). This isn't true in our example, since for any \omega \in [0,1], Y_k(\omega) will sometimes be 1 (when the thin bar defining Y_k happens to contain \omega) and sometimes 0. As k -> infinity, Y_k takes the value 1 on a thinner and thinner bar, but the bar still hits every \omega \in [0,1] for arbitrarily large k as it wanders around. Thus Y_k doesn't converge a.s. - it'll keep on hitting 1 every now and then as k gets large.

Marv
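The pointwise failure described above can be watched directly: fix one \omega and list the indices k at which Y_k(\omega) = 1. A sketch under the same k = 2^n + m indexing (half-open bars so each level tiles [0,1) cleanly; the function name `Y` is my own):

```python
def Y(k, omega):
    # Y_{2**n + m} is 1 on the bar [m/2**n, (m+1)/2**n) and 0 elsewhere.
    n = k.bit_length() - 1          # largest n with 2**n <= k
    m = k - 2**n
    return 1 if m / 2**n <= omega < (m + 1) / 2**n else 0

# For a fixed omega the sequence Y_1(omega), Y_2(omega), ... returns
# to 1 exactly once per dyadic level, so it has no limit even though
# the 1s become ever sparser.
omega = 0.3
hits = [k for k in range(1, 257) if Y(k, omega) == 1]
print(hits)
```

The printed list contains one index in each block 2^n <= k < 2^(n+1): the wandering bar sweeps past \omega = 0.3 once per level, which is exactly why Y_k(0.3) oscillates between 0 and 1 forever.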
#8
Re: Convergence in probability vs. almost-sure convergence challenge
Doesn't like every graduate-level mathematical statistics text have an example of this? I know mine did, I just don't remember what it was.
#9
Re: Convergence in probability vs. almost-sure convergence challenge
[ QUOTE ]
Doesn't like every graduate-level mathematical statistics text have an example of this? I know mine did, I just don't remember what it was.
[/ QUOTE ]
I think the standard example might be independent {0,1} (Bernoulli) random variables X_k where X_k = 1 with probability 1/k. Since the X_k are independent and \sum_k 1/k diverges, the second Borel-Cantelli lemma gives infinitely many 1s (and infinitely many 0s) almost surely, so the sequence almost surely diverges. But P(|X_k - 0| > \epsilon) = 1/k --> 0, so it converges to the constant 0 in probability.
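This textbook example simulates in a few lines. A sketch (seed and sample size are arbitrary choices of mine):

```python
import random

# Independent X_k with P(X_k = 1) = 1/k, P(X_k = 0) = 1 - 1/k.
# In probability: P(|X_k - 0| > eps) = 1/k -> 0.
# Almost surely: the X_k are independent and sum_k 1/k diverges, so by
# the second Borel-Cantelli lemma X_k = 1 infinitely often with
# probability 1 - the sample path never settles at 0.
random.seed(0)
ones = [k for k in range(1, 10_001) if random.random() < 1 / k]
print(len(ones))    # 1s keep appearing, though ever more rarely
print(ones[-3:])    # large indices where a 1 still occurred

harmonic = sum(1 / k for k in range(1, 10_001))
print(harmonic)     # ~9.79 here; the partial sums grow without bound
```

Note that k = 1 is always a "hit" (probability 1/1), and the expected number of 1s up to N is the harmonic sum H_N, which diverges as N grows: that divergence is what Borel-Cantelli turns into "infinitely many 1s almost surely."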
#10
Re: Convergence in probability vs. almost-sure convergence challenge
I understand perfectly now.
Thanks sixhigh and marv. To filsteal: which text did you have?