Recurrent Neural Networks (RNNs)

RNNs excel at processing sequential data by maintaining internal memory states. In financial services, they power time series forecasting and fraud detection. In retail, they enable demand prediction and customer behavior analysis.

Mathematical Foundation

Vanilla RNN

Process sequential data with a hidden state that is updated at every time step (a minimal NumPy sketch follows the symbol definitions below):

h_t = tanh(W_hh · h_{t-1} + W_xh · x_t + b_h)
y_t = W_hy · h_t + b_y

Symbol Definitions:

  • h_t = Hidden state at time t (network memory)
  • W_hh = Hidden-to-hidden weight matrix (state transition)
  • W_xh = Input-to-hidden weight matrix (input processing)
  • W_hy = Hidden-to-output weight matrix (output generation)
  • x_t = Input at time t (current observation)
  • y_t = Output at time t (prediction)
  • b_h, b_y = Bias vectors
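
The recurrence above fits in a few lines of NumPy; the sketch below is a minimal illustration of the forward pass (the dimension sizes and random example weights are assumptions for demonstration only):

```python
import numpy as np

def rnn_forward(xs, W_hh, W_xh, W_hy, b_h, b_y):
    """Vanilla RNN forward pass over a sequence xs of shape (T, input_dim)."""
    h = np.zeros(W_hh.shape[0])                      # h_0: initial hidden state
    outputs = []
    for x_t in xs:
        h = np.tanh(W_hh @ h + W_xh @ x_t + b_h)     # hidden state update
        outputs.append(W_hy @ h + b_y)               # per-step output y_t
    return np.array(outputs), h

# Example usage: 5 time steps, 3 input features, 4 hidden units, 1 output
rng = np.random.default_rng(0)
xs = rng.normal(size=(5, 3))
ys, h_T = rnn_forward(xs,
                      W_hh=rng.normal(size=(4, 4)), W_xh=rng.normal(size=(4, 3)),
                      W_hy=rng.normal(size=(1, 4)), b_h=np.zeros(4), b_y=np.zeros(1))
```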

Vanishing Gradient Problem:

Backpropagation through time multiplies gradients by W_hh (and the tanh derivative) once per time step:

∂L/∂h_k ∝ ∏_{t=k+1}^{T} diag(tanh'(·)) · W_hh^T

When these factors have norm below 1, gradients shrink exponentially with sequence length, so vanilla RNNs struggle to learn long-range dependencies.

Long Short-Term Memory (LSTM)

LSTM Architecture

Solves the vanishing gradient problem with gating mechanisms (a NumPy sketch of a single cell step follows the symbol definitions below):

Forget Gate:

f_t = σ(W_f · [h_{t-1}, x_t] + b_f)

Input Gate:

i_t = σ(W_i · [h_{t-1}, x_t] + b_i)

Candidate Values:

C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C)

Cell State Update:

C_t = f_t ⊙ C_{t-1} + i_t ⊙ C̃_t

Output Gate:

o_t = σ(W_o · [h_{t-1}, x_t] + b_o)

Hidden State:

h_t = o_t ⊙ tanh(C_t)

Symbol Definitions:

  • f_t, i_t, o_t = Forget, input, output gates (control information flow)
  • C_t = Cell state (long-term memory)
  • C̃_t = Candidate cell state (new information)
  • ⊙ = Element-wise multiplication (Hadamard product)
  • σ = Sigmoid activation function
  • [h_{t-1}, x_t] = Concatenation of previous hidden state and current input
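
As a concrete illustration, here is a minimal NumPy sketch of one LSTM cell step using the gate equations above; the weight shapes (acting on the concatenation [h_{t-1}, x_t]) are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W_f, W_i, W_C, W_o, b_f, b_i, b_C, b_o):
    """One LSTM step: each W_* has shape (hidden_dim, hidden_dim + input_dim)."""
    z = np.concatenate([h_prev, x_t])       # [h_{t-1}, x_t]
    f_t = sigmoid(W_f @ z + b_f)            # forget gate
    i_t = sigmoid(W_i @ z + b_i)            # input gate
    C_tilde = np.tanh(W_C @ z + b_C)        # candidate cell state
    C_t = f_t * C_prev + i_t * C_tilde      # cell state update (element-wise)
    o_t = sigmoid(W_o @ z + b_o)            # output gate
    h_t = o_t * np.tanh(C_t)                # new hidden state
    return h_t, C_t
```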

Financial Services Example: Stock Price Prediction

Business Context: Investment firm uses LSTM to predict stock prices for algorithmic trading, analyzing historical price patterns and market indicators.

Input Features:

  • x_t^(1) = Stock price at time t
  • x_t^(2) = Trading volume at time t
  • x_t^(3) = Market sentiment score
  • x_t^(4) = Economic indicators
  • x_t^(5) = Technical indicators (RSI, MACD)

LSTM Architecture:

  • Input: 60-day sequences × 5 features
  • LSTM Layers: 128 → 64 → 32 units
  • Output: Next-day price prediction (see the PyTorch sketch after this list)
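
A possible realization of this stack in PyTorch is sketched below; the layer sizes, sequence length, and feature count follow the list above, while the class name, the final linear head, and the random example input are assumptions:

```python
import torch
import torch.nn as nn

class PricePredictor(nn.Module):
    """Stacked LSTM: (60-day x 5-feature) input -> 128 -> 64 -> 32 -> next-day price."""
    def __init__(self, n_features: int = 5):
        super().__init__()
        self.lstm1 = nn.LSTM(n_features, 128, batch_first=True)
        self.lstm2 = nn.LSTM(128, 64, batch_first=True)
        self.lstm3 = nn.LSTM(64, 32, batch_first=True)
        self.head = nn.Linear(32, 1)             # next-day price prediction

    def forward(self, x):                        # x: (batch, 60, 5)
        x, _ = self.lstm1(x)
        x, _ = self.lstm2(x)
        x, _ = self.lstm3(x)
        return self.head(x[:, -1, :])            # use the last time step's state

model = PricePredictor()
pred = model(torch.randn(8, 60, 5))              # -> tensor of shape (8, 1)
```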

Mathematical Model:

ŷ_{t+1} = f_LSTM(X_t),   X_t = [x_{t-59}, …, x_t]

Multi-step Prediction:

ŷ_{t+k} = f_LSTM(X_t, ŷ_{t+1}, …, ŷ_{t+k-1}),   k = 1, …, K (earlier predictions are fed back as inputs for later horizons)

Loss Function (Mean Squared Error):

L = (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)²

Risk-Adjusted Performance (Sharpe Ratio; a NumPy sketch of both metrics follows the symbol definitions):

Sharpe = (E[R_p] − R_f) / σ_p

Symbol Definitions:

  • ŷ_{t+1} = Predicted price for the next trading day
  • X_t = 60-day input sequence
  • R_p = Portfolio returns
  • R_f = Risk-free rate
  • σ_p = Standard deviation of portfolio returns
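
A minimal NumPy sketch of both evaluation quantities is given below; the annualization factor of 252 trading days and the sample-standard-deviation convention are assumptions:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error between realized and predicted prices."""
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

def sharpe_ratio(daily_returns, risk_free_daily=0.0, periods_per_year=252):
    """Annualized Sharpe ratio: mean excess return over its standard deviation."""
    excess = np.asarray(daily_returns) - risk_free_daily
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)
```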

Trading Strategy Implementation (a Python sketch follows the symbol definitions):

Signal_t = BUY if (ŷ_{t+1} − p_t) / p_t > θ_buy; SELL if (ŷ_{t+1} − p_t) / p_t < θ_sell; HOLD otherwise

Symbol Definitions:

  • θ_buy, θ_sell = Trading thresholds (typically ±2%)
  • p_t = Current stock price
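
The rule above can be expressed as a small function; the sketch below is illustrative, with the default ±2% thresholds taken from the definition above:

```python
def trading_signal(predicted_price, current_price,
                   buy_threshold=0.02, sell_threshold=-0.02):
    """Map a predicted next-day price into a BUY / SELL / HOLD decision."""
    expected_return = (predicted_price - current_price) / current_price
    if expected_return > buy_threshold:
        return "BUY"
    if expected_return < sell_threshold:
        return "SELL"
    return "HOLD"

print(trading_signal(103.5, 100.0))   # predicted move of +3.5% -> "BUY"
```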

Business Performance:

  • Prediction Accuracy: MAPE = 3.2% (vs. 5.1% baseline)
  • Annualized Return: 18.5% (vs. 12.3% market)
  • Maximum Drawdown: 8.9% (vs. 15.2% market)
  • Sharpe Ratio: 1.47 (excellent risk-adjusted performance)

Retail Example: Demand Forecasting

Business Context: Retail chain uses LSTM to predict product demand across multiple stores, optimizing inventory levels and reducing stockouts.

Input Features:

  • x_t^(1) = Historical sales quantity
  • x_t^(2) = Price and promotions
  • x_t^(3) = Weather conditions
  • x_t^(4) = Holiday/seasonal indicators
  • x_t^(5) = Economic indicators
  • x_t^(6) = Competitor pricing

Multi-variate LSTM Model:

x_t = [x_t^(1), …, x_t^(6)]   (all six feature groups stacked into one input vector per day)

Store-Level Forecasting:

d̂_{s,p,h} = f_LSTM(X_{s,p,t}),   X_{s,p,t} = [x_{t-29}, …, x_t]

Symbol Definitions:

  • d̂_{s,p,h} = Predicted demand for store s, product p, horizon h
  • X_{s,p,t} = 30-day historical feature sequence

Inventory Optimization:

Q*_{s,p} = F⁻¹_{s,p}( C_s / (C_s + C_h) )   (order up to the critical-ratio quantile of the predicted demand distribution; see the sketch after the symbol definitions)

Symbol Definitions:

  • Q*_{s,p} = Optimal order quantity for store s, product p
  • C_h = Holding cost per unit
  • C_s = Stockout cost per unit
  • F_{s,p} = Predicted demand distribution
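
Under this newsvendor-style reading, the optimal order quantity is the critical-ratio quantile of the demand forecast. The sketch below assumes a normal demand distribution parameterized by the LSTM's mean forecast and error standard deviation; the numeric example is illustrative:

```python
from scipy.stats import norm

def optimal_order_quantity(demand_mean, demand_std, holding_cost, stockout_cost):
    """Order up to the critical-ratio quantile of the forecast demand distribution."""
    critical_ratio = stockout_cost / (stockout_cost + holding_cost)
    return norm.ppf(critical_ratio, loc=demand_mean, scale=demand_std)

# Example: mean forecast 500 units, std 80, stockouts 4x as costly as holding
print(optimal_order_quantity(500, 80, holding_cost=1.0, stockout_cost=4.0))  # ~567 units
```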

Safety Stock Calculation (see the sketch after the symbol definitions):

SS = z · σ_e · √L

Symbol Definitions:

  • SS = Safety stock level
  • z = Z-score for the target service level (e.g., 1.65 for 95%)
  • σ_e = Forecast error standard deviation
  • L = Lead time in periods
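
The calculation is straightforward; a minimal sketch with an illustrative 95% service level and 4-period lead time follows:

```python
import math

def safety_stock(z_score, forecast_error_std, lead_time_periods):
    """Safety stock = z * sigma_e * sqrt(lead time)."""
    return z_score * forecast_error_std * math.sqrt(lead_time_periods)

print(safety_stock(z_score=1.65, forecast_error_std=40.0, lead_time_periods=4))  # 132.0
```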

Business Results:

Forecasting Performance:

  • MAPE: 12.8% (vs. 18.3% traditional methods)
  • Forecast Bias: -0.2% (nearly unbiased)
  • Peak Season Accuracy: 89.1% (vs. 76.4% baseline)

Inventory Optimization (Cost Impact):

  • Inventory Holding Costs: Reduced by 22%
  • Stockout Costs: Reduced by 35%
  • Total Supply Chain Costs: $2.1M in annual savings

Gated Recurrent Unit (GRU)

GRU Architecture

Simplified alternative to the LSTM with fewer parameters (a NumPy sketch of one step follows the symbol definitions):

Update Gate:

z_t = σ(W_z · [h_{t-1}, x_t] + b_z)

Reset Gate:

r_t = σ(W_r · [h_{t-1}, x_t] + b_r)

Candidate Activation:

h̃_t = tanh(W_h · [r_t ⊙ h_{t-1}, x_t] + b_h)

Hidden State Update:

h_t = (1 − z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t

Symbol Definitions:

  • z_t = Update gate (controls how much past information to keep)
  • r_t = Reset gate (controls how much past information to forget)
  • h̃_t = Candidate hidden state (new information)
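
A minimal NumPy sketch of one GRU step, following the equations above (weight shapes are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, W_z, W_r, W_h, b_z, b_r, b_h):
    """One GRU step: each W_* has shape (hidden_dim, hidden_dim + input_dim)."""
    zx = np.concatenate([h_prev, x_t])                    # [h_{t-1}, x_t]
    z_t = sigmoid(W_z @ zx + b_z)                         # update gate
    r_t = sigmoid(W_r @ zx + b_r)                         # reset gate
    h_tilde = np.tanh(W_h @ np.concatenate([r_t * h_prev, x_t]) + b_h)  # candidate
    return (1.0 - z_t) * h_prev + z_t * h_tilde           # hidden state update
```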

Advanced RNN Techniques

Bidirectional RNNs

Process the sequence in both directions and concatenate the two hidden states:

h_t^fwd = tanh(W_xh^fwd · x_t + W_hh^fwd · h_{t-1}^fwd + b^fwd)   (left to right)
h_t^bwd = tanh(W_xh^bwd · x_t + W_hh^bwd · h_{t+1}^bwd + b^bwd)   (right to left)
h_t = [h_t^fwd ; h_t^bwd]

Attention Mechanisms

Focus on the relevant parts of the input sequence when producing each output (a NumPy sketch follows the symbol definitions):

e_{t,i} = score(s_{t-1}, h_i)
α_{t,i} = exp(e_{t,i}) / Σ_j exp(e_{t,j})
c_t = Σ_i α_{t,i} · h_i

Symbol Definitions:

  • α_{t,i} = Attention weight for input position i at output time t
  • c_t = Context vector (weighted sum of the encoder hidden states h_i)
  • e_{t,i} = Attention energy/score between decoder state s_{t-1} and encoder state h_i
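
The sketch below is a minimal NumPy illustration of these steps; dot-product scoring is an assumption (additive/Bahdanau attention would instead use a small feed-forward network for the score):

```python
import numpy as np

def attention(decoder_state, encoder_states):
    """Compute attention weights and the context vector over encoder hidden states."""
    scores = encoder_states @ decoder_state            # e_{t,i}: one score per position
    weights = np.exp(scores - scores.max())            # numerically stable softmax
    weights /= weights.sum()                           # alpha_{t,i}
    context = weights @ encoder_states                 # c_t: weighted sum of states
    return context, weights

encoder_states = np.random.randn(10, 16)               # 10 input positions, 16-dim states
context, alpha = attention(np.random.randn(16), encoder_states)
```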

Implementation Considerations

Sequence-to-Sequence Models

Map input sequences to output sequences of possibly different length:

Encoder:

h_t^enc = f_enc(x_t, h_{t-1}^enc),   c = h_T^enc

Decoder:

s_t = f_dec(y_{t-1}, s_{t-1}, c),   y_t = softmax(W_out · s_t)

Symbol Definitions:

  • h_t^enc, s_t = Encoder and decoder hidden states
  • c = Context vector from the encoder (here, its final hidden state)
  • y_{t-1} = Previous output token

Training Strategies

  • Teacher Forcing: Feed the ground-truth token (rather than the decoder's own previous prediction) as the decoder input during training.
  • Gradient Clipping: Rescale gradients whose norm exceeds a threshold to prevent exploding gradients.
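
The fragment below sketches how both strategies typically appear in a PyTorch training step; the model interface, data shapes, and the clipping norm of 1.0 are assumptions:

```python
import torch

def train_step(model, optimizer, src, tgt, loss_fn, clip=1.0):
    """One seq2seq training step with teacher forcing and gradient clipping."""
    optimizer.zero_grad()
    # Teacher forcing: the decoder is fed the ground-truth sequence shifted by one step.
    logits = model(src, tgt[:, :-1])
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), tgt[:, 1:].reshape(-1))
    loss.backward()
    # Gradient clipping: rescale gradients whose global norm exceeds `clip`.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=clip)
    optimizer.step()
    return loss.item()
```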

RNNs provide a powerful framework for sequential data modeling. By capturing temporal dependencies and long-term patterns, they enable sophisticated time series forecasting, natural language processing, and sequential decision making in both financial services and retail applications.

