Markov models and hidden Markov models are excellent tools for modeling stochastic processes such as the random-walk behavior of stock prices in a market. They give us a set of transition probabilities we can use to decide whether to enter or exit a position.
In the world of trading, we use indicators, momentum, trends, and other methods to try to infer what price action will look like in the near term. Following the work of Louis Bachelier, financial markets are often modeled as random walks. Bachelier also anticipated what is now known as the efficient market hypothesis: the current price is neither bullish nor bearish, but the net effect of those two forces, which have settled on a price.
Any day trader knows that this isn't the whole story: other factors always come into play in determining the direction of a stock, and several dependent variables influence speculation, moving the price in various directions. To determine whether a potential position has a good probability of working out in our favor, we need a model that can realistically project the future from the current state.
Enter: Markov Models and Hidden Markov Models.
Markov Models
When modeling seemingly random behavior, we do so through what is known as a stochastic process: a model with randomness and uncertainty injected into its very fabric. A stochastic process indexes states by discrete instants of time, so we can plot the states as time passes.
A Markov model is a mathematical framework, built on a stochastic process, that describes a system where the future state depends only on the current state, not on how the system arrived at that state. This is because of a property known as the Markov property, which conditions solely on the current price rather than relying on past prices. This is at odds with projecting a stock price using linear regression, which uses all prior data to predict a high-probability price at a given point in time.
To model this "limited horizon", where we can only infer with reasonable probability the price at time t based on the price at t-1, we write the Markov property as:
P(z_t | z_{t-1}, z_{t-2}, ..., z_1) = P(z_t | z_{t-1})
This assumption defines an order-1 Markov process; more generally, an order-k Markov process assumes there is no direct causation between the state z at time t and the states more than k time steps before it. Now that we've seen how only the prior time increment influences the next price, we also want the underlying statistics of the system itself to be unchanged over time. We assume this so that our predictions stay consistent; just as a machine-learning model assumes the relationships it has learned among features persist, we assume the same systemic, statistical properties are preserved. The formula below shows this by comparing two discrete time steps, for any offset k, and stating that the conditional probabilities remain unchanged:
P(z_t = j | z_{t-1} = i) = P(z_{t+k} = j | z_{t+k-1} = i)
The aim here is to simplify the natural complexity of the myriad inputs that drive price action, and instead use only the price of the asset as it currently stands, without accounting for the past. To understand the probability of certain moves happening in a one-step sequence, we use what is called a state transition matrix.
Going from state i to state j, many different prices can occur. The outcome we are looking for is a probability distribution, so we build a matrix across all transitions between the prices that can occur. Let's take for example the stock price for PLTR, currently $15.98:
i \ j | $15.98 | $15.99 | $16.00 | $16.01 | $16.02 |
--- | --- | --- | --- | --- | --- |
$15.98 | 0.25 | 0.25 | 0.25 | 0.25 | 0 |
$15.99 | 0 | 0.7 | 0.1 | 0.1 | 0.1 |
$16.00 | 0 | 0.3 | 0.5 | 0.1 | 0.1 |
$16.01 | 0 | 0.2 | 0.1 | 0.6 | 0.1 |
$16.02 | 0 | 0.1 | 0.2 | 0.2 | 0.5 |
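The matrix above can be written down directly in NumPy and sanity-checked: every row must sum to 1, and multiplying a state distribution by the matrix gives the distribution one step later. A minimal sketch, using the illustrative PLTR probabilities from the table:

```python
import numpy as np

# Transition matrix for the PLTR example; states are the five prices
# $15.98, $15.99, $16.00, $16.01, $16.02, in that order.
P = np.array([
    [0.25, 0.25, 0.25, 0.25, 0.00],
    [0.00, 0.70, 0.10, 0.10, 0.10],
    [0.00, 0.30, 0.50, 0.10, 0.10],
    [0.00, 0.20, 0.10, 0.60, 0.10],
    [0.00, 0.10, 0.20, 0.20, 0.50],
])

# Each row is a probability distribution over the next state,
# so every row must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# Starting from $15.98 (a one-hot distribution), the distribution over
# prices after one step is v0 @ P, and after two steps (v0 @ P) @ P.
v0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
one_step = v0 @ P
two_steps = one_step @ P
print("After one step:", one_step)
print("After two steps:", two_steps)
```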
Probability of Sequences
If we were to predict the likelihood of the price action being as follows:
$15.98 β $15.99 β $16.00 β $16.01 β $16.02
which would imply a bullish position that is going to rally. We can rewrite this sequence in terms of its transition probabilities, then use the chain rule of probability to multiply those one-step probabilities together:
P($15.98 → $15.99) × P($15.99 → $16.00) × P($16.00 → $16.01) × P($16.01 → $16.02) = 0.25 × 0.1 × 0.1 × 0.1 = 0.00025
So the likelihood of this exact sequence in price action is 0.025%.
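A quick way to check this arithmetic is to look the one-step probabilities up in the transition matrix and multiply them, which is exactly what the chain rule prescribes. A minimal sketch using the PLTR matrix from earlier:

```python
import numpy as np

# Transition matrix from the PLTR example; states indexed in price order:
# 0 → $15.98, 1 → $15.99, 2 → $16.00, 3 → $16.01, 4 → $16.02.
P = np.array([
    [0.25, 0.25, 0.25, 0.25, 0.00],
    [0.00, 0.70, 0.10, 0.10, 0.10],
    [0.00, 0.30, 0.50, 0.10, 0.10],
    [0.00, 0.20, 0.10, 0.60, 0.10],
    [0.00, 0.10, 0.20, 0.20, 0.50],
])

def sequence_probability(P, states):
    """Chain-rule probability of a path, given we start in states[0]."""
    prob = 1.0
    for i, j in zip(states, states[1:]):
        prob *= P[i, j]
    return prob

# $15.98 → $15.99 → $16.00 → $16.01 → $16.02
p = sequence_probability(P, [0, 1, 2, 3, 4])
print(p)  # ≈ 0.00025, i.e. 0.025%
```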
Using Markov Models in Python notebooks
Below is the code I use to initialize and generate a Markov model, using one year's historical closing prices for TSLA and generating the next 10 states.
import numpy as np
import yfinance as yf

class MarkovModel:
    def __init__(self, historical_prices, initial_state):
        """
        Initialize the Markov model.

        Parameters:
        - historical_prices: array of historical stock prices.
        - initial_state: the initial state of the system.
        """
        self.historical_prices = historical_prices
        self.current_state = initial_state
        self.num_states = len(historical_prices)

    def next_state(self):
        """
        Transition to the next state based on the historical stock prices.
        """
        price_changes = np.diff(self.historical_prices)
        # A softmax over the price changes turns them into a valid
        # probability distribution across candidate next states.
        probabilities = np.exp(price_changes) / np.sum(np.exp(price_changes))
        next_state = np.random.choice(range(self.num_states - 1), p=probabilities)
        self.current_state = next_state
        return next_state

# Download TSLA historical data from Yahoo Finance
symbol = "TSLA"
start_date = "2023-01-01"
end_date = "2024-01-01"
data = yf.download(symbol, start=start_date, end=end_date)

# Extract closing prices as a flat 1-D array
# (newer yfinance versions can return a multi-level column frame)
closing_prices = data['Close'].to_numpy().ravel()

# Normalize the prices so the softmax over price changes is well behaved
normalized_prices = (closing_prices - np.mean(closing_prices)) / np.std(closing_prices)

# Initial state based on the first observed price
initial_state = 0

# Create a Markov model
markov_model = MarkovModel(normalized_prices, initial_state)

# Generate a sequence of states
num_steps = 10
state_sequence = [markov_model.next_state() for _ in range(num_steps)]

# Display the generated sequence
print("Generated State Sequence:", state_sequence)
Hidden Markov Models
The states of the system in this case are directly observable. Its counterpart, the hidden Markov model, deals with system states that are not directly observable.
When we say observable and non-observable states, we mean that:
- Observable states can be something like stock price, interest rates, events on the economic calendar.
- Non-observable states can be something like investor sentiment or the creditworthiness of a borrower.
To break this down further:
Hidden Markov Model = Hidden Markov Chain + Observed Variables
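To make that decomposition concrete, here is a minimal hand-rolled sketch: the hidden chain is investor sentiment (bullish/bearish) and the observed variable is the daily price move (up/down). The transition, emission, and initial probabilities below are illustrative assumptions, not fitted values; the forward algorithm then scores how likely an observed sequence of moves is under the model.

```python
import numpy as np

# Hidden states: 0 = bullish sentiment, 1 = bearish sentiment.
# Observations: 0 = price up, 1 = price down.
# All numbers are illustrative assumptions, not real market estimates.

A = np.array([[0.8, 0.2],    # hidden transitions: bullish -> bullish/bearish
              [0.3, 0.7]])   # hidden transitions: bearish -> bullish/bearish
B = np.array([[0.9, 0.1],    # emissions: bullish emits up/down
              [0.2, 0.8]])   # emissions: bearish emits up/down
pi = np.array([0.5, 0.5])    # initial sentiment distribution

def forward(observations):
    """Forward algorithm: likelihood of an observed sequence of moves."""
    # alpha[i] = P(observations so far, hidden state = i)
    alpha = pi * B[:, observations[0]]
    for obs in observations[1:]:
        alpha = (alpha @ A) * B[:, obs]
    return alpha.sum()

# Likelihood of observing up, up, down under this model
print(forward([0, 0, 1]))
```

The key point is that we never observe the sentiment itself; we only score sequences of price moves, and the hidden chain's transition and emission probabilities jointly determine how likely each sequence is.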