
Probability for AI/ML — Chapter 9: Markov Chains & Stochastic Processes

Modeling sequences and time-dependent behavior is crucial in AI/ML for prediction, planning, and reinforcement learning.

9.1 Markov Chains

A Markov chain is a stochastic process where the next state depends only on the current state, not on past states (memoryless property).

Transition Probability Matrix (P): entry P[i][j] is the probability of moving from state i to state j; each row sums to 1.
Example: For 3 states A, B, C: P = [[0.5, 0.3, 0.2], [0.1, 0.6, 0.3], [0.2, 0.3, 0.5]]

AI/ML context: Used in sequence modeling, text generation, weather prediction, and hidden Markov models (HMMs).
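To make the memoryless property concrete, here is a minimal sketch of sampling a trajectory from the 3-state matrix above (the state labels A, B, C and the seed are illustrative choices, not from the original):

```python
import numpy as np

# Transition matrix from the example above (states A, B, C)
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])
states = ["A", "B", "C"]

rng = np.random.default_rng(0)  # seeded for reproducibility
state = 0                       # start in state A
path = [states[state]]
for _ in range(10):
    # The next state depends only on the current state (Markov property):
    # draw it from row `state` of the transition matrix.
    state = rng.choice(3, p=P[state])
    path.append(states[state])
print("Sampled path:", " -> ".join(path))
```

Each draw uses only the current row of P, which is exactly the Markov property in code.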

9.2 Stationary Distributions

A stationary distribution is a probability distribution over states that is unchanged by a transition step. It satisfies: π P = π
where π is the stationary distribution (row) vector and P is the transition matrix.

Example: For a 2-state Markov chain with P = [[0.7, 0.3], [0.4, 0.6]], solving π P = π together with π₁ + π₂ = 1 gives π = [4/7, 3/7] ≈ [0.571, 0.429].

AI/ML context: Stationary distributions describe the long-term behavior of a system and are used in reinforcement learning to evaluate policies.
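The 2-state example can be checked numerically by solving π P = π together with the normalization constraint; one way (an illustrative sketch, not the only method) is to stack the equations and use least squares:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# pi @ P = pi  rearranges to  pi @ (P - I) = 0.
# Append the normalization constraint sum(pi) = 1 as an extra equation
# and solve the resulting overdetermined system by least squares.
A = np.vstack([(P - np.eye(2)).T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # approximately [0.571, 0.429]
```

Because the system is consistent, the least-squares solution is the exact stationary distribution [4/7, 3/7].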

9.3 Stochastic Processes

A stochastic process is a collection of random variables indexed by time or space. Markov chains are a special type of stochastic process.

Applications in ML: Modeling time series, reinforcement learning, sequential recommendation systems, and predictive maintenance.
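As a minimal illustration of a stochastic process indexed by time, here is a simple symmetric random walk (itself a Markov chain on the integers); the number of steps and the seed are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(42)  # seeded for reproducibility

# Symmetric random walk: X_t = X_{t-1} + step_t, with step_t in {-1, +1}
steps = rng.choice([-1, 1], size=100)
walk = np.concatenate([[0], np.cumsum(steps)])  # X_0 = 0

print("Final position after 100 steps:", walk[-1])
```

Here the random variables X_0, X_1, ..., X_100 indexed by time form the stochastic process; time-series and sequential-decision problems in ML are modeled with the same structure.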

9.4 Practical Example in Python

import numpy as np

# Define transition matrix for 3 states
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])

# Initial state distribution
pi = np.array([1, 0, 0])  # starting fully in state 0

# Simulate 5 steps
for step in range(5):
    pi = np.dot(pi, P)
    print(f"Step {step+1}: {pi}")

# Estimate the stationary distribution as the left eigenvector of P
# with eigenvalue 1 (i.e., an eigenvector of P.T)
eigvals, eigvecs = np.linalg.eig(P.T)
stat_dist = eigvecs[:, np.isclose(eigvals, 1)]
stat_dist = stat_dist / np.sum(stat_dist)  # normalize so entries sum to 1
print("Stationary distribution:", stat_dist.real.flatten())

9.5 Key Takeaways

  • Markov chains model systems where the next state depends only on the current state.
  • Transition matrices represent state-to-state probabilities.
  • Stationary distributions describe long-term behavior of the process.
  • Stochastic processes generalize sequences of random variables, critical for time-dependent AI/ML tasks.

Next chapter: Practical Probability in Python — implementing probabilistic models and simulations for AI/ML.
