Experiment: Simple Linear Regression using Gradient Descent
Aim
To implement Simple Linear Regression using Gradient Descent on sample data and evaluate the model using MSE and R².
Objectives
- Understand Gradient Descent optimization
- Iteratively compute the regression coefficients
- Compare with the analytical (closed-form) solution
- Compute MSE and R² manually
- Visualize the regression line
🛠️ Tools Required
- Python
- NumPy
- Matplotlib
📖 Theory
🔹 Linear Regression Model
The model predicts ŷ = θ₀ + θ₁x, where θ₀ is the intercept and θ₁ is the slope.
🔹 Cost Function (MSE)
J(θ₀, θ₁) = (1/n) Σᵢ (ŷᵢ − yᵢ)²
🔹 Gradient Descent
Gradient Descent minimizes the cost function iteratively:

θ₀ := θ₀ − α · (2/n) Σᵢ (ŷᵢ − yᵢ)
θ₁ := θ₁ − α · (2/n) Σᵢ (ŷᵢ − yᵢ) xᵢ

Where:
- α = learning rate
- Updates continue until convergence
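A single update step of the rules above can be sketched in NumPy as follows (the data, α, and starting values here are purely illustrative):

```python
import numpy as np

# Illustrative data and starting point (not from the experiment)
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
theta0, theta1 = 0.0, 0.0
alpha = 0.1  # learning rate (assumed value)

# One gradient-descent update on J = (1/n) * sum((theta0 + theta1*x - y)^2)
y_pred = theta0 + theta1 * x
error = y_pred - y
theta0 -= alpha * (2 / len(x)) * error.sum()        # dJ/d theta0
theta1 -= alpha * (2 / len(x)) * (error * x).sum()  # dJ/d theta1
print(theta0, theta1)
```

Repeating this step drives the parameters toward the minimum of J.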
📋 Procedure
1. Initialize the parameters θ₀ and θ₁ (e.g., to zero)
2. Choose a learning rate and number of iterations
3. Compute predictions ŷ = θ₀ + θ₁x
4. Calculate the gradients of the cost function
5. Update the parameters iteratively
6. Compute MSE and R²
7. Plot the regression line
💻 Program
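The program listing is missing here; the following is a minimal sketch of the procedure above, assuming small synthetic sample data (the data values, `alpha`, and `n_iters` are illustrative choices, not from the original):

```python
import numpy as np

# Sample data (illustrative; replace with your own)
x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2, 4, 5, 4, 5], dtype=float)
n = len(x)

theta0, theta1 = 0.0, 0.0   # initial parameters
alpha = 0.01                # learning rate (assumed value)
n_iters = 10000             # number of iterations (assumed value)

for _ in range(n_iters):
    y_pred = theta0 + theta1 * x
    error = y_pred - y
    grad0 = (2 / n) * error.sum()          # dJ/d theta0
    grad1 = (2 / n) * (error * x).sum()    # dJ/d theta1
    theta0 -= alpha * grad0
    theta1 -= alpha * grad1

# Evaluate the fitted model manually
y_pred = theta0 + theta1 * x
mse = ((y_pred - y) ** 2).mean()
ss_res = ((y - y_pred) ** 2).sum()
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot

print(f"Intercept (theta0): {theta0:.4f}")
print(f"Slope     (theta1): {theta1:.4f}")
print(f"MSE: {mse:.4f}")
print(f"R^2: {r2:.4f}")

# Plot the regression line (skipped if Matplotlib is unavailable)
try:
    import matplotlib
    matplotlib.use("Agg")  # non-interactive backend; works headless
    import matplotlib.pyplot as plt
    plt.scatter(x, y, label="Data")
    plt.plot(x, y_pred, color="red", label="Fitted line")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.legend()
    plt.savefig("regression_line.png")
except ImportError:
    pass
```

For this sample data the closed-form least-squares solution is θ₀ = 2.2, θ₁ = 0.6, which the iterative updates converge to.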
Output
- Learned parameters: θ₀ (intercept) and θ₁ (slope)
- Mean Squared Error (MSE)
- R² score
- Regression line plot
Result
The regression model was successfully implemented using Gradient Descent, and model parameters were learned iteratively.
- Gradient Descent finds the optimal parameters iteratively
- The results are close to those of the analytical method
- The learning rate affects convergence speed and accuracy