04-11-2007, 11:24 AM
jason1990
Re: April 2007 IBM Ponder This Challenge

I get a different answer. Let S_n be the position of the frog after n jumps, with S_0 = 0; each jump is +2 with probability 1/2 or -1 with probability 1/2 (this is the jump law used in the recurrences below). Let A(k) be the event that the frog eventually visits site k, and let I(k) = 1 if A(k) occurs and 0 otherwise. Define

R_L = (2L)^{-1}\sum_{k=-L}^L I(k).
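
Before doing anything exact, here is a minimal Monte Carlo sketch (Python) of this quantity. It assumes the jump law above and truncates each walk at a finite horizon; since the drift is +1/2, taking the horizon much larger than L makes the truncation bias negligible.

[code]
import random

def sample_R(L, horizon):
    """One sample of R_L: the fraction of sites in [-L, L] visited."""
    s = 0
    seen = {0}
    for _ in range(horizon):
        s += 2 if random.random() < 0.5 else -1
        if -L <= s <= L:
            seen.add(s)
    return len(seen) / (2 * L)

L, trials = 200, 500
print(sum(sample_R(L, 50 * L) for _ in range(trials)) / trials)
[/code]

For L = 200 this should come out around 0.43, consistent with the limit derived below.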

Technically, we must prove that this converges almost surely as L goes to infinity, and calculate the limit. Let us first take expectations:

ER_L = (2L)^{-1}\sum_{k=-L}^L P(A(k)).

For k > 0, let t(k) be the first time S_n hits -k, and let s(k) be the first time S_n hits or passes k. (Since downward jumps have size 1, the walk cannot jump over -k on the way down, so visiting any site at or below -k entails visiting -k itself.) Because of the upward drift, S_n goes to infinity almost surely. Hence, s(k) is finite with probability one, but there is a positive probability that t(k) is infinite. Define

q_k := P(A(-k)) = P(t(k) finite)
p_k := P(S_{s(k)} = k + 1).

Note that p_k is the probability that the frog passes k on the way up. The frog visits k either by landing on it exactly (probability 1 - p_k), or by overshooting to k + 1 and later returning to k, which by the Markov property and translation invariance has probability q_1. Hence,

P(A(k)) = (1 - p_k) + p_k q_1.

We therefore have, grouping the k = 0, k < 0, and k > 0 terms,

(*) ER_L = (2L)^{-1}[1 + \sum_{k=1}^L q_k + \sum_{k=1}^L (1 - p_k + q_1 p_k)].

Now to compute q_k and p_k. Let f = (-1 + \sqrt{5})/2 ≈ 0.618, and define M_n = f^{S_n}. It is easy to check that M_n is a martingale: a jump multiplies M_n by f^2 or f^{-1} with equal probability, and since f^2 + f = 1 we have (f^2 + f^{-1})/2 = ((1 - f) + (1 + f))/2 = 1. By optional sampling and dominated convergence (on {t(k) < ∞} we have M_{t(k)} = f^{-k}, while M_n → 0 on {t(k) = ∞}), we get 1 = q_k f^{-k}, so q_k = f^k.
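
Both facts are easy to check numerically; a quick Python sketch (the early-stopping cutoff is my own shortcut, costing only about f^cutoff in bias):

[code]
import random
from math import sqrt

f = (sqrt(5) - 1) / 2

# Martingale identity: a jump multiplies f^{S_n} by f^2 or f^{-1},
# so M_n is a martingale iff (f^2 + 1/f)/2 == 1.
print((f**2 + 1 / f) / 2)

def hits_minus_one(cutoff=60):
    """Does the walk ever reach -1?  From height s the remaining
    hitting probability is f**(s+1), so stopping once s > cutoff
    introduces only a negligible bias."""
    s = 0
    while -1 < s <= cutoff:
        s += 2 if random.random() < 0.5 else -1
    return s == -1

trials = 100000
print(sum(hits_minus_one() for _ in range(trials)) / trials, "vs f =", f)
[/code]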

For p_k, condition on the first jump: starting at distance n below the target, a +2 jump leaves distance n - 2 and a -1 jump leaves distance n + 1. This gives

p_1 = 1/2 + (1/2)p_2
p_2 = (1/2)p_3
p_n = (1/2)p_{n-2} + (1/2)p_{n+1}, for n > 2.

If we define p_0 = 0 (the frog has landed exactly) and p_{-1} = 1 (it has overshot by one), then the three equations combine into

p_{n+1} = 2p_n - p_{n-2}, for n > 0.

The characteristic polynomial of this recurrence is x^3 - 2x^2 + 1 = (x - 1)(x^2 - x - 1), with roots 1, f^{-1}, and -f, so the general solution is

p_n = c_1 + c_2 f^{-n} + c_3 (-f)^n.
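
A one-liner confirms the three roots numerically:

[code]
from math import sqrt

f = (sqrt(5) - 1) / 2

# x^3 - 2x^2 + 1 = (x - 1)(x^2 - x - 1) vanishes at 1, 1/f, -f.
for x in (1.0, 1 / f, -f):
    print(x, x**3 - 2 * x**2 + 1)   # residuals ~ 0
[/code]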

Since each p_n is a probability (and thus between 0 and 1), and f^{-1} > 1, we must have c_2 = 0. The boundary values p_0 = 0 and p_{-1} = 1 give c_1 + c_3 = 0 and c_1 - c_3/f = 1; using f^2 + f = 1, this yields c_1 = f^2 and c_3 = -f^2. Hence,

p_n = f^2 + (-1)^{n+1} f^{n+2}.
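
One can check this closed form directly against the boundary values and the recurrence (residuals at floating-point zero):

[code]
from math import sqrt

f = (sqrt(5) - 1) / 2
p = lambda n: f**2 + (-1)**(n + 1) * f**(n + 2)

print(p(-1), p(0))   # boundary values: 1.0 and 0.0
# p_{n+1} = 2 p_n - p_{n-2} for n > 0:
print(max(abs(p(n + 1) - 2 * p(n) + p(n - 2)) for n in range(1, 40)))
[/code]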

(Note that this implies the limiting passing probability is f^2 ≈ 0.382.) Finally, use (*): after dividing by 2L, the leading 1 and the convergent geometric series \sum q_k = \sum f^k vanish in the limit, while p_k → f^2 makes the Cesàro average of 1 - p_k + q_1 p_k tend to 1 - f^2 + q_1 f^2. Hence

lim ER_L = (1/2)(1 - f^2 + q_1 f^2)
= (1/2)(1 - f^2 + f^3)
= (-5 + 3\sqrt{5})/4 ≈ 0.427.
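
Plugging the derived q_k and p_k back into (*) shows the convergence numerically; this sketch just evaluates the finite-L expression:

[code]
from math import sqrt

f = (sqrt(5) - 1) / 2
q = lambda k: f**k
p = lambda n: f**2 + (-1)**(n + 1) * f**(n + 2)

def ER(L):
    """ER_L from (*), using the closed forms for q_k and p_k."""
    return (1 + sum(q(k) for k in range(1, L + 1))
              + sum(1 - p(k) + q(1) * p(k) for k in range(1, L + 1))) / (2 * L)

for L in (10, 100, 1000, 10000):
    print(L, ER(L))
print("limit:", (3 * sqrt(5) - 5) / 4)
[/code]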

Some additional argument is now required to show that R_L converges almost surely to this limit. I suspect this can be done using straightforward estimates on the covariances of the I(k)'s.