# Kelly Betting and the Ergodic Theorem

In the last Kelly post we considered the simplest case, where the outcomes of the bets are i.i.d. Bernoulli random variables. How do we compute the optimal $f$’s if the outcome of a bet depends on the outcome of the previous bet? Our main goal for this post is to understand what the optimal Kelly fractions are in the following example: given that the last flip of a coin was “heads”, the next flip will be “heads” with probability $0.8$ and “tails” with probability $0.2$; after “tails”, the next flip will be “tails” with probability $0.6$ and “heads” with probability $0.4.$

The example above is a typical example of a Markov chain. A Markov chain consists of a state space $S$ and a transition probability matrix $P$: for every pair of states $i, j\in S$ the probability of transition from $i$ to $j$ is given by $P(j \mid i) = p_{ij}$.
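As a concrete illustration, here is a minimal sketch (assuming NumPy, and assuming we order the states as heads first, tails second) encoding the coin example as a row-stochastic transition matrix:

```python
import numpy as np

# Transition matrix for the coin example.
# Assumed ordering: index 0 = "heads", index 1 = "tails".
# Entry P[i, j] is the probability of moving from state i to state j.
P = np.array([
    [0.8, 0.2],  # after heads: heads with 0.8, tails with 0.2
    [0.4, 0.6],  # after tails: heads with 0.4, tails with 0.6
])

# Sanity check: each row of a transition matrix must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)
```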

If $f = (f_0, f_1, f_2, \ldots, f_t, \ldots)$ is any sequence of bet sizes chosen from some bounded set (in practice it is always bounded because of margin requirements), the growth rate of this betting policy is defined as $g(f) = \lim_{T\to\infty}\frac{1}{T} \sum_{t=0}^{T-1} \log(1+f_t \xi_t)$, where $\xi_t$ is the result of the $t$-th bet. Here we always assume we bet on the favorable outcome.
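The time average above can be estimated by simulation. The following sketch (NumPy assumed; `simulated_growth` is a hypothetical helper, not from the original post) runs the chain for many steps, betting the state-dependent fraction on the favorable outcome, and averages the log returns:

```python
import numpy as np

rng = np.random.default_rng(0)

# State 0 = heads, state 1 = tails; P[i, j] = probability of i -> j.
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])

def simulated_growth(f_heads, f_tails, T=200_000):
    """Estimate g(f) = (1/T) * sum_t log(1 + f_t * xi_t) by simulation.

    In each state we bet on the favorable outcome (heads after heads,
    tails after tails), so xi_t = +1 on a win and -1 on a loss.
    """
    fracs = (f_heads, f_tails)
    state = 0
    total = 0.0
    for _ in range(T):
        nxt = rng.choice(2, p=P[state])
        xi = 1.0 if nxt == state else -1.0  # win iff the flip repeats
        total += np.log(1.0 + fracs[state] * xi)
        state = nxt
    return total / T
```

For example, `simulated_growth(0.6, 0.2)` gives an estimate of the long-run growth per bet for the fractions derived later in the post.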

We would like to find a sequence $f$ that maximizes $g(f)$. Without going too deep into technical details, we can state the following theorems.

Theorem 1: There exists an optimal policy $f$ that maximizes the growth. This optimal policy may be chosen to be stationary, i.e. the bet size depends only on the current state.

Theorem 2: Denote by $a_i = \mathbb{E}[\log(1+f_i \xi) \mid i]$ the expected log return of a bet placed in state $i$. If the limit $g(f) = \lim_{T\to\infty}\frac{1}{T} \sum_{t=0}^{T-1} \log(1+f_t \xi_t)$ exists, then by the ergodic theorem it equals $g(f) = \mathbb{E}_\pi a = \sum_{i\in S} \pi_i a_i$, where $\pi$ is the stationary distribution of the chain.

Let’s see how the theorems work for our example. There are two states, which we will denote H and T. The stationary distribution of this Markov chain (the solution of $\pi = \pi P$) is given by the vector $\pi = (\frac{2}{3}, \frac{1}{3})$. By Theorem 1 the optimal policy is defined by two optimal fractions – one for H and one for T. Denote them by $f$ and $h$.
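The stationary distribution can be checked numerically. A minimal sketch (NumPy assumed): $\pi$ is the left eigenvector of $P$ for eigenvalue $1$, normalized to sum to one.

```python
import numpy as np

# State 0 = H, state 1 = T.
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])

# The stationary distribution solves pi = pi @ P, i.e. pi is a left
# eigenvector of P with eigenvalue 1 (a right eigenvector of P.T).
eigvals, eigvecs = np.linalg.eig(P.T)
i = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, i])
pi = pi / pi.sum()  # normalize to a probability vector
# pi should come out close to (2/3, 1/3)
```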

By Theorem 2 the log growth is given by $g(f, h) = \frac{2}{3} \left(0.8 \log(1+f) + 0.2 \log(1-f)\right) + \frac{1}{3} \left(0.6 \log(1+h) + 0.4 \log(1-h)\right)$. To find its maximum we differentiate with respect to $f$ and $h$ and set the partial derivatives to zero. Since the first term depends only on $f$ and the second only on $h$, maximizing the growth amounts to maximizing each state’s Kelly fraction separately – one for H and one for T. From the i.i.d. case we already know the answer: $f = 0.8 - 0.2 = 0.6$ and $h = 0.6 - 0.4 = 0.2$.
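The separate maximization can be verified numerically. A minimal sketch (NumPy assumed; `g`, `f_star`, `h_star` are illustrative names, not from the post) evaluates $g(f, h)$ on a grid and confirms that each fraction is optimized independently:

```python
import numpy as np

def g(f, h):
    """Log growth g(f, h) for the two-state coin example."""
    return (2/3) * (0.8 * np.log(1 + f) + 0.2 * np.log(1 - f)) \
         + (1/3) * (0.6 * np.log(1 + h) + 0.4 * np.log(1 - h))

# g separates into a term in f and a term in h, so each state's Kelly
# fraction can be found on its own.  A grid search over [0, 0.99]
# should recover the closed-form answers f* = 0.6 and h* = 0.2.
grid = np.linspace(0.0, 0.99, 991)   # step 0.001
f_star = grid[np.argmax(g(grid, 0.0))]   # h fixed: only the f term varies
h_star = grid[np.argmax(g(0.0, grid))]   # f fixed: only the h term varies
```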