Simulating a Continuous Markov Chain
Some students asked me about this question today, which reminds me of a similar question raised on this board a while ago.
Basically, a continuous Markov chain is usually driven by one or more stochastic processes (e.g., Poisson processes), as opposed to a discrete Markov chain, which is driven by random generators at certain discrete times. Therefore, if you know how to simulate a stochastic process, it becomes trivial to simulate a continuous Markov chain: simply sampling those processes gives you the state of the chain.
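To make this concrete, here is a minimal sketch (my own illustration, not from any particular textbook) of simulating a continuous-time Markov chain directly from its generator matrix Q: the chain holds in state i for an exponential time with rate -Q[i][i], then jumps to state j with probability Q[i][j] / (-Q[i][i]). The names Q and simulate_ctmc are just for illustration.

import random

def simulate_ctmc(Q, initial_state, t_end):
    """Return a sample path [(time, state), ...] up to time t_end."""
    t, state = 0.0, initial_state
    path = [(t, state)]
    while True:
        rate_out = -Q[state][state]          # total rate of leaving `state`
        if rate_out <= 0:                    # absorbing state: stay forever
            break
        t += random.expovariate(rate_out)    # exponential holding time
        if t >= t_end:
            break
        # choose the next state proportionally to the off-diagonal rates
        r = random.uniform(0, rate_out)
        acc = 0.0
        for j, q in enumerate(Q[state]):
            if j == state:
                continue
            acc += q
            if r <= acc:
                state = j
                break
        path.append((t, state))
    return path

# Example: a two-state chain that flips 0 -> 1 at rate 1 and 1 -> 0 at rate 2.
Q = [[-1.0, 1.0],
     [2.0, -2.0]]
print(simulate_ctmc(Q, initial_state=0, t_end=10.0))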
The simulation of a stochastic process is, again, driven by a random generator: the generator gives the random time intervals between two events in the process, which in turn lead to a point process that is a sample path of the corresponding stochastic process. Therefore, somewhat amazingly, you are able to use a discrete simulator (which only keeps track of the discrete event points on a timeline) to simulate a continuous process, and hence a continuous chain.
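As an illustrative sketch of this point (assuming a plain Poisson counting process as the example), the generator produces the exponential gaps between events, the running sums of those gaps form the point process, and the continuous-time state N(t) can be read off the discrete event list for any t without ever discretizing the timeline:

import bisect
import random

def poisson_event_times(rate, t_end, seed=None):
    """Event epochs of a Poisson process of the given rate on [0, t_end]."""
    rng = random.Random(seed)
    epochs, t = [], 0.0
    while True:
        t += rng.expovariate(rate)     # random gap between consecutive events
        if t > t_end:
            return epochs
        epochs.append(t)

def state_at(epochs, t):
    """N(t): number of events up to time t, i.e. the chain's state at time t."""
    return bisect.bisect_right(epochs, t)

epochs = poisson_event_times(rate=2.0, t_end=10.0, seed=7)
print(epochs[:5])
print("N(3.5) =", state_at(epochs, 3.5), " N(10.0) =", state_at(epochs, 10.0))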
One thing that should be noted is the sampling time used to obtain the states of the Markov chain: sampling a state right at an event is not the same as sampling it at an arbitrary time, unless the driving process is Poisson (due to the so-called PASTA property). For general (renewal) processes, the link between these two samplings is the so-called Palm calculus, which requires a whole semester course to explain :)
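A small sketch of that PASTA point (again just my own illustration, using a single-server queue with exponential services at an assumed rate mu): the average queue length seen by arriving customers equals the time-averaged queue length when arrivals are Poisson, but not when the arrival process is a general renewal process with the same mean (here Erlang-2 inter-arrival times).

import random

def simulate_queue(draw_gap, mu, t_end, seed=0):
    rng = random.Random(seed)
    t, n = 0.0, 0
    next_arrival = draw_gap(rng)
    next_departure = float("inf")
    seen_by_arrivals, arrivals = 0, 0          # queue length just before each arrival
    time_weighted, last_t = 0.0, 0.0           # integral of queue length over time
    while min(next_arrival, next_departure) < t_end:
        t = min(next_arrival, next_departure)
        time_weighted += n * (t - last_t)
        last_t = t
        if next_arrival <= next_departure:     # arrival event
            seen_by_arrivals += n
            arrivals += 1
            n += 1
            if n == 1:                         # server was idle: start a service
                next_departure = t + rng.expovariate(mu)
            next_arrival = t + draw_gap(rng)
        else:                                  # departure event
            n -= 1
            next_departure = (t + rng.expovariate(mu)) if n > 0 else float("inf")
    time_weighted += n * (t_end - last_t)
    return seen_by_arrivals / arrivals, time_weighted / t_end

mu, t_end = 1.5, 200_000.0
poisson = lambda r: r.expovariate(1.0)                      # Poisson arrivals, rate 1
erlang2 = lambda r: r.expovariate(2.0) + r.expovariate(2.0) # same mean, not Poisson
print("Poisson :", simulate_queue(poisson, mu, t_end))      # the two averages agree
print("Erlang-2:", simulate_queue(erlang2, mu, t_end))      # they differ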