Markov Chain

Mehmet Akif Cifci
Feb 5, 2023


A Markov chain is a mathematical model used to predict the future states of a system from its current state. It is a type of random process that satisfies the Markov property: the next state depends only on the current state, not on the sequence of states that preceded it.

Here is an example to help illustrate Markov chains:

Suppose you have a weather prediction model that uses a Markov chain. The current state of the system is the current weather (e.g., sunny, cloudy, or rainy), and the possible future states are the future weather conditions (e.g., sunny, cloudy, or rainy). The transition probabilities between the states are based on historical data of weather patterns. For example, if it’s currently sunny, there’s a 70% chance it will be sunny tomorrow, a 20% chance it will be cloudy, and a 10% chance it will be rainy.
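The weather example above can be written down directly as a transition matrix and a matrix-vector product. Note that only the "sunny" row comes from the text; the cloudy and rainy rows below are illustrative assumptions, as is the state ordering:

import numpy as np

# Rows/columns: 0 = sunny, 1 = cloudy, 2 = rainy.
# Row i holds the probabilities of tomorrow's weather given state i today.
# Only the sunny row is from the example; the other two rows are assumed.
P = np.array([
    [0.7, 0.2, 0.1],   # sunny  -> sunny / cloudy / rainy
    [0.4, 0.4, 0.2],   # cloudy -> ... (assumed)
    [0.2, 0.4, 0.4],   # rainy  -> ... (assumed)
])

# Distribution over tomorrow's weather if today is sunny:
today = np.array([1.0, 0.0, 0.0])
tomorrow = today @ P
print(tomorrow)        # [0.7 0.2 0.1]

# Two days ahead: just multiply by P again (Chapman-Kolmogorov).
day_after = tomorrow @ P
print(day_after)       # [0.59 0.26 0.15]

Multiplying by the matrix once per day is all it takes to push a forecast further into the future, which is why the transition matrix is the central object of a Markov chain.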

Another example is a model of a person’s behavior, where the current state is the activity they are currently doing (e.g., working, sleeping, or watching TV) and the future states are the activities they will do next. The transition probabilities are based on the person’s habits and routines. For example, if the person is currently working, there’s a 60% chance they will sleep next, a 30% chance they will watch TV, and a 10% chance they will continue working.

In essence, Markov chains model systems that change over time under the assumption that the next state depends only on the current state, not on how the system got there.

An algorithm for simulating a Markov chain model

1. Define the states: identify the possible states of the system and collect them into a state space.

2. Define the transition probabilities: using historical data or other information, estimate the probability of moving from each state to every other state. These probabilities form the transition matrix.

3. Initialize the current state: choose a starting state, either the actual state of the system or one selected at random from the state space.

4. Repeat the following for a specified number of steps, or until a stopping criterion is met:

a. Read off the row of the transition matrix for the current state to get the probability of moving to each state.

b. Sample the next state from these probabilities using a random number generator.

c. Update the current state to the sampled state.

5. Use the final state (or the whole trajectory) to make predictions about future states or other quantities of interest.
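Step 2 only produces a usable chain if every row of the estimated matrix is a probability distribution. A small sanity check (a sketch, assuming NumPy; the function name is my own) catches the most common estimation mistakes before any simulation is run:

import numpy as np

def is_valid_transition_matrix(P, tol=1e-9):
    """Return True if P is square, nonnegative, and each row sums to 1."""
    P = np.asarray(P, dtype=float)
    return bool(
        P.ndim == 2
        and P.shape[0] == P.shape[1]
        and np.all(P >= 0)
        and np.allclose(P.sum(axis=1), 1.0, atol=tol)
    )

print(is_valid_transition_matrix([[0.7, 0.3], [0.5, 0.5]]))  # True
print(is_valid_transition_matrix([[0.7, 0.2], [0.5, 0.5]]))  # False: first row sums to 0.9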

import numpy as np

def generate_next_state(current_state, transition_matrix):
    """Generate the next state based on the current state and transition matrix."""
    state_probabilities = transition_matrix[current_state, :]
    next_state = np.random.choice(np.arange(len(state_probabilities)), p=state_probabilities)
    return next_state

def simulate_markov_chain(initial_state, transition_matrix, num_steps):
    """Simulate a Markov chain for a specified number of steps."""
    current_state = initial_state
    states = [current_state]
    for _ in range(num_steps):
        next_state = generate_next_state(current_state, transition_matrix)
        states.append(next_state)
        current_state = next_state
    return states

# Define the transition matrix (each row sums to 1)
transition_matrix = np.array([[0.7, 0.2, 0.1],
                              [0.3, 0.5, 0.2],
                              [0.1, 0.3, 0.6]])

# Initialize the current state
initial_state = 0

# Simulate the Markov chain for 10 steps
states = simulate_markov_chain(initial_state, transition_matrix, 10)

# Print the resulting states
print(states)
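A natural follow-up question is what this chain does in the long run. For a chain like the one above, repeatedly applying the transition matrix to any starting distribution converges to the stationary distribution. A minimal sketch using power iteration:

import numpy as np

# Same transition matrix as in the simulation above.
transition_matrix = np.array([[0.7, 0.2, 0.1],
                              [0.3, 0.5, 0.2],
                              [0.1, 0.3, 0.6]])

# Power iteration: push a starting distribution through the chain
# until it stops changing.
dist = np.array([1.0, 0.0, 0.0])
for _ in range(500):
    dist = dist @ transition_matrix

print(dist)  # one more step leaves it unchanged: dist @ P == dist

The same vector can be obtained as the left eigenvector of the transition matrix with eigenvalue 1; power iteration is simply the cheaper approach when the state space is small.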

Written by Mehmet Akif Cifci

Mehmet Akif Cifci is an associate professor of computer science in Austria.
