In the last Kelly post we considered the simplest case, where the outcomes of the bets are i.i.d. Bernoulli random variables. How do we compute the optimal $f$'s if the outcome of a bet depends on the outcome of the previous bet? Our main goal for this post is to understand the optimal Kelly fractions in the following example: given that the last flip of a coin was "heads", the next flip will be "heads" with probability 0.6 and "tails" with probability 0.4, while the probabilities of "tails" and "heads" after "tails" are 0.6 and 0.4 correspondingly.
The example above is a typical example of what is called a Markov chain. A Markov chain consists of a state space $S$ and a transition probability matrix $P$. For every pair of states $i, j \in S$, the probability of a transition from $i$ to $j$ is given by the entry $P_{ij}$.
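As a concrete illustration, the two-state chain of our example can be written as a transition matrix. This is a minimal sketch, assuming the "heads" row mirrors the "tails" row stated in the example (the chain stays in its current state with probability 0.6):

```python
import numpy as np

# States: index 0 = "heads" (H), index 1 = "tails" (T).
# P[i, j] is the probability of moving from state i to state j.
P = np.array([
    [0.6, 0.4],  # after H: heads with 0.6, tails with 0.4 (assumed symmetric)
    [0.4, 0.6],  # after T: heads with 0.4, tails with 0.6
])

# Each row of a valid transition matrix must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)
```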
If $f_1, f_2, \ldots$ is any sequence of bet sizes chosen from some bounded set (in reality it is always bounded because of margins), the growth rate with respect to this betting policy is defined as $G = \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} \log(1 + f_k X_k)$, where $X_k \in \{+1, -1\}$ are the results of the bets. Here we always assume we bet on the favorable outcome.
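To make the definition concrete, here is a small simulation sketch that estimates the growth rate as the average of $\log(1 + f_k X_k)$. It assumes a stationary policy (one fixed fraction per state) and the symmetric chain where the coin repeats its last outcome with probability 0.6; both assumptions are illustrative, not part of the general definition:

```python
import math
import random

def simulate_growth(f_H, f_T, n=200_000, seed=0):
    """Estimate (1/n) * sum_k log(1 + f_k * X_k) along one sample path.

    We always bet on the favorable outcome: on heads after H, on tails
    after T. Assumption: the chain stays in its state with probability 0.6.
    """
    rng = random.Random(seed)
    state = "H"
    total = 0.0
    for _ in range(n):
        stay = rng.random() < 0.6          # the favorable outcome occurs
        f = f_H if state == "H" else f_T   # stationary betting policy
        x = 1.0 if stay else -1.0          # result of the bet: win or lose
        total += math.log(1.0 + f * x)
        if not stay:                       # the chain switches state
            state = "T" if state == "H" else "H"
    return total / n

# Estimated log growth when betting f = 0.2 in both states:
g = simulate_growth(0.2, 0.2)
```

For large `n` the estimate settles near $0.6\log(1.2) + 0.4\log(0.8) \approx 0.020$, which is the expected log return of a single 0.6-probability bet at $f = 0.2$.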
We would like to find a sequence $f_1, f_2, \ldots$ that maximizes $G$. Without going too much into technical detail, we can prove the following theorems.
Theorem 1: There exists an optimal policy that maximizes the growth rate. This optimal policy may be chosen to be stationary, i.e. the bet size depends only on the current state.
Theorem 2: Denote by $m_i$ the expected value of $\log(1 + f_i X)$ in state $i$. If the limit $G$ exists, it is equal to $\langle \pi, m \rangle = \sum_i \pi_i m_i$, where $m$ is the vector of the means $m_i$ and $\pi$ is a stationary distribution of the chain.
Let’s see how the theorems work in our example. There are two states, which we will denote H and T. The stationary distribution of this Markov chain is given by the vector $\pi = (1/2, 1/2)$. By Theorem 1 the optimal policy is defined by two optimal $f$'s, one for H and one for T. Denote them by $f_H$ and $f_T$.
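The stationary distribution can be checked numerically: it is the distribution left unchanged by the transition matrix, and power iteration converges to it from any starting point. A sketch, again assuming the symmetric matrix where each state persists with probability 0.6:

```python
import numpy as np

# Assumed transition matrix: states 0 = H, 1 = T.
P = np.array([[0.6, 0.4],
              [0.4, 0.6]])

# A stationary distribution pi satisfies pi @ P = pi with sum(pi) = 1.
# Power iteration: repeatedly apply P to any starting distribution.
pi = np.array([1.0, 0.0])
for _ in range(100):
    pi = pi @ P

# For this chain pi converges to (1/2, 1/2).
```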
By Theorem 2 the log growth rate is given by $G(f_H, f_T) = \frac{1}{2}\left[0.6 \log(1 + f_H) + 0.4 \log(1 - f_H)\right] + \frac{1}{2}\left[0.6 \log(1 + f_T) + 0.4 \log(1 - f_T)\right]$. To find its maximum we differentiate with respect to $f_H$ and $f_T$ and set the partial derivatives to zero. Since $G$ splits into a sum of terms, each depending on only one of the variables, maximizing the growth amounts to maximizing each of the Kelly fractions separately, one for H and one for T. From the Bernoulli case we already know that $f_H = 2 \cdot 0.6 - 1 = 0.2$ and $f_T = 0.2$.
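The claimed optimum is easy to verify with a grid search over the per-state term $g(f) = 0.6 \log(1 + f) + 0.4 \log(1 - f)$, which is what each state contributes to $G$ under our symmetric-chain assumption:

```python
import math

def per_state_growth(f, p=0.6):
    # Expected log growth of one bet of fraction f with win probability p.
    return p * math.log(1.0 + f) + (1.0 - p) * math.log(1.0 - f)

# Grid search over admissible fractions 0 <= f < 1.
fs = [i / 1000.0 for i in range(1000)]
best_f = max(fs, key=per_state_growth)

# The maximizer matches the Bernoulli Kelly formula f* = 2p - 1 = 0.2.
```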