
Linear Regression

[Image: scatter plot of training data with a line of best fit drawn through it]

As you can see from the image above, a line of best fit is drawn on a scatter plot. The line of best fit is essentially the linear regression model, and the blue data points are the training data.

To fit the linear regression model to the data points/training data, we need to calculate this line of best fit.

Linear Regression Model:
output = weight * input + bias   (where "weight" and "input" are vectors)

When looking at how the linear regression model works, you need to have an understanding of what the terms "weight" and "bias" mean. You can think of the "weight" vector and the "bias" as the terms that adjust the line to become a line of best fit on the training data points (like the gradient and the y-intercept in a linear function).
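
To make this concrete, here is a minimal sketch of the model in Python (the name "predict" and the sample numbers are just illustrative, not from any particular library):

    import numpy as np

    def predict(inputs, weight, bias):
        # The linear regression model: output = weight * input + bias
        return weight * inputs + bias

    # Example: a line with gradient 2 and y-intercept 1
    inputs = np.array([1.0, 2.0, 3.0])
    print(predict(inputs, weight=2.0, bias=1.0))  # [3. 5. 7.]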

Loss Function: Mean Squared Error 

In order to calculate a line of best fit, we will need to find the weight and the bias of the model. 

Before we get into an optimization algorithm to calculate the weight and the bias of our model, we need to take a look at what a loss function is. A loss function (or cost function) is a function that measures the performance of your model; it calculates how inaccurate your model is.

This is a loss function called "Mean Squared Error" (MSE):

MSE = (1/n) * Σ (target output - model output)^2   (summed over all n data points)

This function squares the difference between the target output and the output obtained from the current model, sums these squared differences over all of the data points, and takes the average to get the average loss.
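
Here is a minimal sketch of this loss function in Python (the function name "mean_squared_error" and the sample numbers are just illustrative):

    import numpy as np

    def mean_squared_error(targets, predictions):
        # Average of the squared differences between the target outputs
        # and the outputs obtained from the current model
        return np.mean((targets - predictions) ** 2)

    targets = np.array([3.0, 5.0, 7.0])
    predictions = np.array([2.5, 5.0, 8.0])
    print(mean_squared_error(targets, predictions))  # (0.25 + 0.0 + 1.0) / 3 ≈ 0.417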

Gradient Descent:

This is the algorithm used to find the optimal weight and bias of the model for the training data points.

[Image: plot of the cost (y-axis) against the weight (x-axis)]

As you can see from the image above, the cost on the y-axis changes with respect to the "weight" on the x-axis.

Gradient descent works by finding the derivative of the cost function with respect to the current weight or bias.


Equation (where theta can be replaced with either the weight or the bias of the model):

theta(new weight or bias) = theta(current weight or bias) - (learning rate) x (derivative of the cost function with respect to the weight or bias)

This is how the gradient descent algorithm is applied to optimize the weight and the bias.
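
Putting the pieces together, here is a minimal sketch of gradient descent for this model in Python (the function name "gradient_descent", the sample data, and the default learning_rate and steps values are all illustrative assumptions):

    import numpy as np

    def gradient_descent(x, y, learning_rate=0.05, steps=1000):
        # Start from an arbitrary weight and bias, then repeatedly step
        # against the derivative of the MSE cost function
        weight, bias = 0.0, 0.0
        n = len(x)
        for _ in range(steps):
            error = (weight * x + bias) - y
            d_weight = (2 / n) * np.sum(error * x)  # derivative of MSE w.r.t. weight
            d_bias = (2 / n) * np.sum(error)        # derivative of MSE w.r.t. bias
            # theta(new) = theta(current) - learning rate x derivative
            weight -= learning_rate * d_weight
            bias -= learning_rate * d_bias
        return weight, bias

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = 2.0 * x + 1.0  # data generated from weight = 2, bias = 1
    print(gradient_descent(x, y))  # approaches (2.0, 1.0)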

Learning Rate: 
The learning rate is a tuning hyperparameter that controls how rapidly your weight or bias gets corrected.
If your learning rate is too big, the algorithm takes large steps each time it updates the weight or bias; as you can see from the image above, that wouldn't be a very effective way to reach the local minimum (the point with the lowest cost), because it can overshoot it. On the other hand, if your learning rate is too small, optimization becomes very time consuming, as it takes tiny incremental steps to reach the local minimum. Therefore, it's important to choose a learning rate that is just right to be efficient.
Also, using a learning rate that is too small can cause the optimization to get stuck in a local minimum, preventing the model from finding the optimal weight and bias that minimize the 'error' as much as possible.
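
As a rough illustration of this trade-off, the gradient_descent sketch above can be run with different learning rates (the specific values here are illustrative, not recommendations):

    # Too small: barely moves even after many steps; too big: the values blow up
    print(gradient_descent(x, y, learning_rate=0.05, steps=1000))    # close to (2.0, 1.0)
    print(gradient_descent(x, y, learning_rate=0.0001, steps=1000))  # still far from (2.0, 1.0)
    print(gradient_descent(x, y, learning_rate=0.5, steps=50))       # overshoots; huge values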

