Implementation of Multivariate Linear Regression using Gradient Descent (Toy Dataset)
🎯 Aim
To implement multivariate linear regression using gradient descent on a toy dataset and evaluate its performance using MSE and R².
🎯 Objectives
- Understand multivariate regression
- Implement gradient descent manually
- Train model on sample dataset
- Compute MSE and R²
- Visualize cost convergence
📖 Theory
🔹 Multivariate Linear Regression Model
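The model predicts the target as a linear combination of the input features plus an intercept (bias) term:

$$\hat{y} = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_n x_n$$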
Matrix form:
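Collecting the m training examples into a design matrix X (with a leading column of ones for the bias term) and the parameters into a vector θ:

$$\hat{\mathbf{y}} = X\boldsymbol{\theta}$$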
🔹 Cost Function (MSE)
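Training minimizes the mean squared error cost over the m examples (the factor of ½ is a common convention that cancels when the gradient is taken):

$$J(\boldsymbol{\theta}) = \frac{1}{2m}\sum_{i=1}^{m}\left(\hat{y}^{(i)} - y^{(i)}\right)^2 = \frac{1}{2m}\,(X\boldsymbol{\theta} - \mathbf{y})^\top (X\boldsymbol{\theta} - \mathbf{y})$$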
🔹 Gradient Descent Update Rule
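Each iteration moves all parameters simultaneously in the direction of the negative gradient of J, scaled by the learning rate α:

$$\boldsymbol{\theta} := \boldsymbol{\theta} - \frac{\alpha}{m}\,X^\top (X\boldsymbol{\theta} - \mathbf{y})$$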
🔹 Evaluation Metrics
Mean Squared Error (MSE):
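$$\text{MSE} = \frac{1}{m}\sum_{i=1}^{m}\left(\hat{y}^{(i)} - y^{(i)}\right)^2$$

Lower values indicate predictions closer to the true targets.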
R-squared (R²):
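$$R^2 = 1 - \frac{\sum_{i}\left(y^{(i)} - \hat{y}^{(i)}\right)^2}{\sum_{i}\left(y^{(i)} - \bar{y}\right)^2}$$

R² is the fraction of the variance in y explained by the model; values close to 1 indicate a good fit.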
📊 Dataset (Toy Example)
We simulate a housing-like dataset:
| Size (x₁) | Bedrooms (x₂) | Price (y) |
|---|---|---|
| 1000 | 2 | 200 |
| 1500 | 3 | 300 |
| 1800 | 3 | 350 |
| 2400 | 4 | 450 |
| 3000 | 4 | 500 |
📋 Procedure
- Define dataset
- Normalize features (important for gradient descent convergence)
- Add bias term
- Initialize parameters
- Apply gradient descent
- Predict values
- Compute MSE and R²
- Plot cost vs iterations
💻 Program
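Below is a minimal NumPy sketch of the procedure described above. The learning rate (alpha = 0.1), iteration count (n_iters = 1000), and variable names are illustrative choices rather than values taken from the original experiment.

```python
# Multivariate linear regression via batch gradient descent on the toy dataset
import numpy as np
import matplotlib.pyplot as plt

# Toy dataset: size, bedrooms -> price
X = np.array([[1000, 2],
              [1500, 3],
              [1800, 3],
              [2400, 4],
              [3000, 4]], dtype=float)
y = np.array([200, 300, 350, 450, 500], dtype=float)

# Feature scaling (standardization) - important for gradient descent
mu, sigma = X.mean(axis=0), X.std(axis=0)
X_norm = (X - mu) / sigma

# Add bias (intercept) column of ones
m = len(y)
X_b = np.hstack([np.ones((m, 1)), X_norm])

# Gradient descent (assumed hyperparameters)
alpha, n_iters = 0.1, 1000
theta = np.zeros(X_b.shape[1])
cost_history = []
for _ in range(n_iters):
    error = X_b @ theta - y                          # residuals
    theta -= (alpha / m) * (X_b.T @ error)           # parameter update
    cost_history.append((error @ error) / (2 * m))   # cost J(theta)

# Predictions and evaluation metrics
y_pred = X_b @ theta
mse = np.mean((y_pred - y) ** 2)
ss_res = np.sum((y - y_pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print("theta:", theta)
print("MSE:", mse, "R²:", r2)

# Cost vs. iterations plot
plt.plot(cost_history)
plt.xlabel("Iterations")
plt.ylabel("Cost J(θ)")
plt.title("Cost convergence")
plt.show()
```

Standardizing the two features puts them on a comparable scale, so a single learning rate works for all parameters and the cost converges quickly.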
📊 Output
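Running the sketch above prints the learned parameter vector along with the MSE and R² scores, and displays the cost-versus-iteration curve, which should drop steeply in the early iterations and then flatten as the parameters converge.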
📈 Interpretation
- Cost decreases over iterations → model is learning
- Low MSE → good fit
- R² ≈ 1 → strong prediction accuracy
📉 Graph Explanation
- X-axis → iterations
- Y-axis → cost
- Curve decreases → convergence
🎯 Key Observations
| Concept | Insight |
|---|---|
| Feature scaling | Improves convergence |
| Gradient descent | Iteratively minimizes error |
| Multivariate model | Uses multiple inputs |
✅ Result
Multivariate linear regression was successfully implemented using gradient descent and evaluated with MSE and R².
- Gradient descent works effectively for multiple variables
- Feature scaling is essential
- Model converges to optimal parameters
