
We assume an infinite deck of cards, with equal distributions of each of the $m$ face values. Suppose we know the dealer's and the player's sequences of special cards, $D_1, D_2, \ldots$ and $P_1, P_2, \ldots$ respectively. Define $d_k$ to be the distance between the dealer's $k$th secret card and the player's next closest secret card into the deck. Clearly $0 \le d_k \le m-1$, and we define the states of our Markov process to be these $m$ distances. The absorbing state is the distance 0 (for a good treatment of Markov processes, see [1]).
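As a small illustration with hypothetical numbers (measuring the distance simply as the difference of the two positions in the deck): if, say, $m = 10$, the dealer's $k$th special card is the 37th card of the deck, and the player's next special card at or beyond it is the 41st card, then
\[
d_k = 41 - 37 = 4,
\]
whereas $d_k = 0$ means that the dealer's $k$th special card is itself one of the player's special cards.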
Let $M = (M_{ij})$ be the transition matrix. If $m$ is the number of states (i.e., face values), we let $i$ and $j$, with $0 \le i, j \le m-1$, denote the indices of the entries of $M$, in contrast with the standard convention. This enumeration of $M$'s entries is more natural for us because it clarifies the fact that $M_{ij}$ is the probability that if for some $k$ the distance $d_k = i$, then $d_{k+1} = j$.
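Since the distance 0 is absorbing, the row of $M$ indexed by 0 is simply $(1, 0, \ldots, 0)$. For the remaining rows, here is a minimal first-step sketch, under the assumption, used purely for illustration, that each chain advances from one special card to the next by the face value of the current card, each of the $m$ values being equally likely in the infinite deck. If $d_k = i > 0$ and the dealer's current special card shows the value $v$, then $d_{k+1} = i - v$ whenever $v \le i$, while for $v > i$ the dealer overshoots the player's card by $s = v - i$ places and $d_{k+1}$ is determined by the player's chain alone. Writing $p_s(j)$ for the probability that the player's next special card lies exactly $j$ places beyond a point $s$ places past one of the player's special cards (a notation used only in this sketch), the first-step decomposition reads
\[
M_{ij} \;=\;
\begin{cases}
\dfrac{1}{m} + \dfrac{1}{m}\displaystyle\sum_{s=1}^{m-i} p_s(j), & j < i,\\[2ex]
\dfrac{1}{m}\displaystyle\sum_{s=1}^{m-i} p_s(j), & j \ge i,
\end{cases}
\qquad 0 < i \le m-1,
\]
and conditioning on the player's first jump past that point gives the recursion
\[
p_s(j) \;=\; \frac{1}{m}\,[\,s+j \le m\,] \;+\; \frac{1}{m}\sum_{t=1}^{s-1} p_t(j),
\qquad 1 \le s \le m-1,
\]
where $[\,\cdot\,]$ is 1 when the condition holds and 0 otherwise.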
Theorem 1
For all $0 < i < m-1$, we have
(a) .
(b) for .
(c) and for $j > 0$.
(d) , for , and .
[Proof]
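For orientation only, the recursion in the sketch above can be solved explicitly (still under the same illustrative stepping assumption): induction on $s$ gives
\[
p_s(j)=\frac{1}{m}\Bigl(1+\frac{1}{m}\Bigr)^{s-1} \quad\text{for } s+j\le m,
\qquad
p_s(j)=\frac{1}{m}\Bigl[\Bigl(1+\frac{1}{m}\Bigr)^{s-1}-\Bigl(1+\frac{1}{m}\Bigr)^{s-1-(m-j)}\Bigr] \quad\text{for } s+j>m,
\]
so each $p_s(\cdot)$ is a short geometric expression, and the sums over $s$ in the first-step decomposition are geometric series.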
The closed form of the transition matrix M now follows easily.
Corollary 1
(a) for $i > j$.
(b) .
(c) for $i < j$.
(d) The initial vector $I$ is given by

[Proof]
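Continuing the same illustrative sketch (for the rows with $0 < i \le m-1$ only; the row $i = 0$ is $(1, 0, \ldots, 0)$), substituting the closed form of $p_s(j)$ into the first-step decomposition and summing the geometric series yields entries of the shape
\[
M_{ij}=\frac{1}{m}\Bigl(1+\frac{1}{m}\Bigr)^{m-i} \quad (i>j),\qquad
M_{ii}=\frac{1}{m}\Bigl[\Bigl(1+\frac{1}{m}\Bigr)^{m-i}-1\Bigr],\qquad
M_{ij}=\frac{1}{m}\Bigl[\Bigl(1+\frac{1}{m}\Bigr)^{m-i}-\Bigl(1+\frac{1}{m}\Bigr)^{j-i}\Bigr] \quad (i<j).
\]
In particular, within each such row the entries below the diagonal are all equal, and each row sums to 1, as a transition matrix must. Moreover, if $I$ denotes the row vector of starting probabilities $\Pr(d_1 = j)$, then the distribution of $d_{n+1}$ is $I M^{n}$, and its entry at index 0 is the probability that the two chains have met by the dealer's $(n+1)$st special card.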
Corollary 2
The characteristic polynomial of M is

[Proof]
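Whatever its explicit form, one factor of the characteristic polynomial is immediate: because the distance 0 is absorbing, the row of $M$ indexed by 0 is $(1, 0, \ldots, 0)$, and expanding $\det(xI - M)$ along that row gives
\[
\det(xI - M) = (x-1)\,\det(xI - Q),
\]
where $Q$ is the $(m-1)\times(m-1)$ block of $M$ on the transient states $1, \ldots, m-1$ (a notation used only here). The remaining roots, those of $\det(xI - Q)$, govern the geometric rate at which the probability of not yet having reached distance 0 decays.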
