As with general Markov chains, there can be continuous-time absorbing Markov chains with an infinite state space.
A Markov chain is an absorbing chain if:
there is at least one absorbing state, and
it is possible to go from any state to at least one absorbing state in a finite number of steps.
Within an absorbing Markov chain, a state that is not absorbing is called transient.
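The two conditions above can be checked mechanically from a transition matrix. Below is a minimal sketch (the helper names and the example matrix are assumptions for illustration, not from the text): state i is absorbing when P[i][i] = 1, and the chain is absorbing when every state can reach some absorbing state.

```python
def is_absorbing_state(P, i):
    # State i is absorbing when P[i][i] == 1: once entered, it is never left.
    return P[i][i] == 1

def is_absorbing_chain(P):
    n = len(P)
    absorbing = {i for i in range(n) if is_absorbing_state(P, i)}
    if not absorbing:
        return False  # condition 1 fails: no absorbing state at all
    # Condition 2: every state must reach some absorbing state in finitely
    # many steps. Search backwards from the absorbing states: state i is
    # added once it has a positive-probability step into the reachable set.
    reachable = set(absorbing)
    frontier = list(absorbing)
    while frontier:
        j = frontier.pop()
        for i in range(n):
            if i not in reachable and P[i][j] > 0:
                reachable.add(i)
                frontier.append(i)
    return len(reachable) == n

# Hypothetical example: states 0 and 1 are transient, state 2 is absorbing.
P = [
    [0.5, 0.4, 0.1],
    [0.3, 0.5, 0.2],
    [0.0, 0.0, 1.0],
]
print(is_absorbing_chain(P))  # True: both conditions hold
```

Note that having an absorbing state alone is not enough: if some transient state can never reach it, the second condition fails and the chain is not absorbing.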
Quite often, mathematical models serve as a principal tool for making informed choices. Markov chains, and absorbing Markov chains in particular, are useful for building a mathematical model of a situation involving experiments with numerous outcomes, in which the result of a given trial depends only on the result of the previous trial.
When the outcome of one experiment can affect the outcome of the next, the process is known as a Markov process or Markov chain. It is essential to understand that in a Markov process, the outcome of the next experiment depends on nothing but the result of the current experiment.
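That defining property can be made concrete in a few lines of code. The sketch below (the two-state "weather" chain is a hypothetical example, not from the text) samples the next state using only the current state; no earlier history is consulted.

```python
import random

# Hypothetical two-state chain: from each state, the next state is drawn
# according to fixed probabilities that depend only on the current state.
P = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def next_state(state, rng):
    # Markov property: the argument list contains the current state and a
    # random source, and nothing else. Past states cannot influence the draw.
    r, cum = rng.random(), 0.0
    for s, p in P[state]:
        cum += p
        if r < cum:
            return s
    return s  # guard against floating-point rounding in the last interval

rng = random.Random(0)
path = ["sunny"]
for _ in range(5):
    path.append(next_state(path[-1], rng))
print(path)  # a sample trajectory of the chain
```

The design point is in the signature of `next_state`: because it receives only the current state, the simulation is memoryless by construction.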
A Markov chain, named after Andrey Markov, is a mathematical system that undergoes transitions from one state to another on a state space. It is a random process usually characterized as memoryless.
A game of snakes and ladders, or any other game whose moves are determined entirely by dice, is a Markov chain, and in fact an absorbing Markov chain. This is in contrast to card games such as blackjack, in which the cards represent a memory of past moves.
To see the difference, consider the probability of a particular event in the game. In the dice games described above, the only thing that matters is the present state of the board. The next state of the board depends on the current state and the next roll of the dice; it does not depend on how the board reached its current state.
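A toy version of such a dice game can be sketched directly. Everything here is an assumption for illustration: a short linear board, a handful of made-up snakes and ladders, and a finish square. The transition function takes only the current square and the die roll, so the game is a Markov chain, and the finish square is an absorbing state that every game eventually reaches.

```python
import random

FINISH = 20  # landing on or past this square ends the game (absorbing state)
# Hypothetical snakes and ladders: landing on a key square jumps to its value.
JUMPS = {3: 11, 6: 17, 9: 2, 14: 7}

def step(square, roll):
    # The next square depends only on (square, roll), never on how the
    # token got here: this is the Markov property of the game.
    nxt = square + roll
    if nxt >= FINISH:
        return FINISH  # once the game is over, it stays over
    return JUMPS.get(nxt, nxt)

def play(rng):
    # Simulate one game and count the turns until absorption.
    square, turns = 0, 0
    while square != FINISH:
        square = step(square, rng.randint(1, 6))
        turns += 1
    return turns

rng = random.Random(42)
print(play(rng))  # turns taken in one simulated game
```

Contrast this with blackjack: there, the cards already dealt change the composition of the deck, so the probability of the next card depends on the history of the game, not just its current visible state.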