1. Show an example of a Markov chain where the limiting distribution reached via repeated applications of equation (12.20) depends on the initial distribution P(0).
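As a hint toward the kind of chain being asked for, here is a minimal sketch (my own illustrative example, not from the text): a two-state chain in which both states are absorbing. Iterating P(t+1) = P(t) T, as in equation (12.20), converges trivially, but the limit is the initial distribution itself, so it depends on P(0).

```python
import numpy as np

# Illustrative chain: two absorbing states (each state transitions to
# itself with probability 1). The transition matrix is the identity.
T = np.array([[1.0, 0.0],
              [0.0, 1.0]])

def limit(p0, steps=50):
    """Apply P(t+1) = P(t) T repeatedly and return the resulting
    distribution (assumed converged after `steps` iterations)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        p = p @ T
    return p

# Two different initial distributions yield two different limits:
# starting at (1, 0) stays at (1, 0); starting at (0.3, 0.7) stays
# at (0.3, 0.7). Hence the limiting distribution depends on P(0).
print(limit([1.0, 0.0]))
print(limit([0.3, 0.7]))
```

Any reducible chain with more than one closed communicating class exhibits the same behavior; the identity matrix is simply the smallest such example.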
2. Consider the following two conditions on a Markov chain T:
(1) It is possible to get from any state to any state via a path of positive probability in the state graph.
(2) For each state x, there is a positive probability of transitioning directly from x to x (a self-loop).
a. Show that, for a finite-state Markov chain, these two conditions together imply that T is regular.
b. Show that regularity of the Markov chain implies condition (1).
c. Show an example of a regular Markov chain that does not satisfy condition (2).
d. Now let us weaken condition (2), requiring only that there exists some state x with a positive probability of transitioning directly from x to x. Show that this weaker condition and condition (1) together still suffice to ensure regularity.
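The claim in part (d) can be checked numerically for a concrete chain. The sketch below (my own example, under the assumption that "regular" means some power of T has all entries strictly positive) uses a three-state directed cycle, which satisfies condition (1), with a single self-loop at state 0, the weakened condition of part (d), and searches for the smallest such power.

```python
import numpy as np

# Hypothetical 3-state chain: directed cycle 0 -> 1 -> 2 -> 0
# (condition (1) holds), plus a self-loop only at state 0
# (the weakened form of condition (2) from part d).
T = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

def regularity_index(T, max_k=100):
    """Return the smallest k such that every entry of T^k is strictly
    positive, or None if no such k <= max_k is found."""
    P = np.eye(len(T))
    for k in range(1, max_k + 1):
        P = P @ T
        if np.all(P > 0):
            return k
    return None

# A finite index exists, so this chain is regular.
print(regularity_index(T))
```

Without the self-loop the cycle is periodic with period 3, every power of T contains zeros, and the search fails; the single self-loop is what breaks the periodicity.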