Markov chain

Mehmet Akif Cifci
2 min read · Mar 8, 2023


A Markov chain, or Markov process, is a mathematical model of a sequence of events in which the probability of each event depends only on the current state of the system, not on any previous states. In other words, the future state of the system depends only on the present state, and not on any historical information about how the system arrived there.


Put simply, a Markov chain describes a system that evolves randomly over time, subject to one constraint: the next state depends only on the current state, not on any earlier history. This constraint is known as the Markov property, or memorylessness.

The idea of a Markov chain can be applied to many different fields, such as finance, physics, biology, and computer science. For example, a Markov chain might be used to model the behavior of a stock market, where the price of a stock at any given time is determined by the current market conditions and not by past market conditions.
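As a toy illustration of the stock-market example, here is a minimal sketch of a two-state chain; the states ("up"/"down") and transition probabilities are invented for illustration, not taken from real market data:

```python
import random

# Hypothetical two-state "market" chain: tomorrow's state depends only
# on today's state, not on the full price history (the Markov property).
# These transition probabilities are invented for illustration.
transitions = {
    "up":   {"up": 0.7, "down": 0.3},
    "down": {"up": 0.4, "down": 0.6},
}

def simulate(start, days):
    state, path = start, [start]
    for _ in range(days - 1):
        probs = transitions[state]
        # Draw the next state in proportion to its transition probability
        state = random.choices(list(probs), weights=list(probs.values()))[0]
        path.append(state)
    return path

print(simulate("up", 10))
```

Each run produces a different trajectory, but the statistics of the trajectories are fully determined by the transition probabilities.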

In general, a Markov chain is characterized by its state space, the set of all possible states the system can be in, and its transition matrix, whose entries give the probability of moving from one state to another in a single step. Together with an initial distribution over states, the transition matrix determines the probability of the system being in each state at any future time.
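To make the transition-matrix idea concrete, here is a small sketch (the two states and probabilities are invented for illustration) that pushes a probability distribution forward one step at a time; the distribution after n steps is the initial distribution multiplied by the matrix n times:

```python
# Two invented states: 0 = "sunny", 1 = "rainy".
# P[i][j] is the probability of moving from state i to state j;
# each row sums to 1.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(dist, P):
    """One step of the chain: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0]  # start in state 0 with certainty
for _ in range(50):
    dist = step(dist, P)
print(dist)  # → close to [5/6, 1/6], the stationary distribution
```

For this matrix the repeated multiplication converges to the stationary distribution [5/6, 1/6], which can be checked directly by solving pi = pi·P.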

Finally, Markov chains provide a powerful framework for understanding and modeling the behavior of complex systems, and have many applications in science, engineering, and other fields.

The following Python implementation trains a simple Markov chain of configurable order on a sequence of symbols and generates new sequences from the learned transition counts:

import random

class MarkovChain:
    def __init__(self, order=1):
        self.order = order      # how many previous symbols the next one depends on
        self.transitions = {}   # context tuple -> {symbol: count}
        self.states = []

    def train(self, sequence):
        for i in range(len(sequence)):
            if i < self.order:
                # Pad contexts at the start of the sequence with None
                context = tuple([None] * (self.order - i) + list(sequence[:i]))
            else:
                context = tuple(sequence[i - self.order:i])
            symbol = sequence[i]
            if context not in self.transitions:
                self.transitions[context] = {}
            if symbol not in self.transitions[context]:
                self.transitions[context][symbol] = 0.0
            self.transitions[context][symbol] += 1.0
        self.states = list(set(sequence))

    def generate(self, length):
        state = random.choice(self.states)
        output = [state]
        for i in range(length - 1):
            context = tuple(output[-self.order:])
            if context not in self.transitions:
                # Unseen context: fall back to a uniformly random state
                state = random.choice(self.states)
            else:
                # Sample the next symbol in proportion to its observed count
                distribution = self.transitions[context]
                total = sum(distribution.values())
                r = random.random() * total
                s = 0.0
                for symbol, count in distribution.items():
                    s += count
                    if s > r:
                        state = symbol
                        break
            output.append(state)
        return ''.join(output)

model = MarkovChain(order=2)
sequence = "ACTGACTGACTG"
model.train(sequence)
new_sequence = model.generate(10)
print(new_sequence)

Written by Mehmet Akif Cifci

Mehmet Akif Cifci holds the position of associate professor in the field of computer science in Austria.
