# Latest content was relocated to https://bintanvictor.wordpress.com. This old blog will be shut down soon.

## Friday, October 21, 2011

### Markov chain - dice problem in Xinfeng Zhou's book

I find it instructive to give distinct colors to the 11 distinct outcomes {2,3,4,5,6,7,8,9,10,11,12}. Simplifying the notation this way cuts down the number of symbols we have to track.

How do we choose the colors? Since we only care about 7 and 12, I color 7 Tomato, 12 Blue, and every other outcome White. From now on we work with 3 concrete, memorable colors Tomato/Blue/White instead of {2,3,4,5,6,7,8,9,10,11,12}. Much simpler.

Each toss produces one of the 3 colors with a fixed probability. At this initial stage it's not important to compute those 3 probabilities exactly, but I don't like working with 3 abstract variables like p(tomato), p(blue), p(white). I find it extremely convenient to use placeholder numbers such as p(getting a tomato)=5%, p(blue)=11%, p(white)=1-p(T)-p(B)=84%. (For the record, the true values for two fair dice are p(sum 7)=6/36, p(sum 12)=1/36, p(anything else)=29/36.)
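If you do want the exact numbers rather than placeholders, a quick enumeration over the 36 equally likely rolls of two fair dice gives them. A minimal sketch in Python; the color names just follow the convention above:

```python
from fractions import Fraction

# Bucket each of the 36 equally likely two-dice outcomes into a color:
# sum 7 -> tomato, sum 12 -> blue, anything else -> white.
counts = {"tomato": 0, "blue": 0, "white": 0}
for d1 in range(1, 7):
    for d2 in range(1, 7):
        s = d1 + d2
        if s == 7:
            counts["tomato"] += 1
        elif s == 12:
            counts["blue"] += 1
        else:
            counts["white"] += 1

# counts: tomato=6, blue=1, white=29 out of 36
probs = {color: Fraction(n, 36) for color, n in counts.items()}
print(probs)  # tomato 1/6, blue 1/36, white 29/36
```

The placeholder 5%/11%/84% are fine for setting up the equations; these exact fractions are what you plug in at the end.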

Now we construct a Markov chain diagram. Imagine a robotic ant moving between stations; the probability of choosing each "exit" is programmed into the ant, and from any station the probabilities on its exits add up to 100%. On P109, a_t means prob(ultimately reaching the absorbing Station_b | ant is currently at Station_t).
a_tt = 0, because once the ant is at Station_tt it can't escape, so prob(reaching Station_b) = 0%.

a_w = p(ant taking exit to Station_b) * 100%
    + p(ant taking exit to Station_t) * a_t
    + p(ant taking exit to Station_w) * a_w

a_t = p(ant taking exit to Station_b) * 100%
    + p(ant taking exit to Station_tt) * 0%
    + p(ant taking exit to Station_w) * a_w
Now with these 2 equations, the 2 unknowns a_t and a_w can be solved. The equations are rather abstract, but that's part of the learning curve on Markov chains.
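As a sanity check on the algebra, here is a sketch that plugs in the exact two-dice exit probabilities (p(blue exit)=1/36, p(tomato exit)=6/36, p(white exit)=29/36, the same from either non-absorbing station) and solves the 2-unknown system by substitution; the names a_w and a_t mirror the notation above:

```python
from fractions import Fraction

pB = Fraction(1, 36)   # exit to Station_b (roll a 12)
pT = Fraction(6, 36)   # exit to Station_t (roll a 7)
pW = Fraction(29, 36)  # exit back to Station_w (any other sum)

# The two equations:
#   a_w = pB + pT*a_t + pW*a_w
#   a_t = pB +    0   + pW*a_w
# Substituting the second into the first and solving for a_w:
#   (1 - pW)*a_w = pB + pT*(pB + pW*a_w)
a_w = (pB + pT * pB) / (1 - pW - pT * pW)
a_t = pB + pW * a_w

print(a_w)  # 7/13
print(a_t)  # 6/13
```

So starting from a White state, the ant reaches Station_b with probability 7/13, the well-known answer to this puzzle.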