Ridge and Lasso Regression
Hello! Today we will be exploring two regression algorithms, Ridge Regression and Lasso Regression, both of which work in a very similar way to Linear Regression.
Before we get into how Lasso and Ridge regression models work, you need to understand what is meant by the term 'overfitting'.
Overfitting:
Overfitting refers to a model fitting its training data too closely.
For example, one indicator that your model is overfitting is when the total cost of your model on the training data is zero while the cost on your test data is huge. As a result, the model is unable to predict accurately on your testing data.
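Here is a minimal sketch of what this looks like in practice, using scikit-learn (the dataset and the degree-15 polynomial below are just illustrative choices): a very flexible model is fit to a small training set, and the training cost ends up near zero while the test cost is much larger.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

# Small noisy dataset (illustrative)
rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=30)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A very flexible model: high-degree polynomial features + plain Linear Regression
model = make_pipeline(PolynomialFeatures(degree=15), LinearRegression())
model.fit(X_train, y_train)

print("Train MSE:", mean_squared_error(y_train, model.predict(X_train)))  # near zero
print("Test MSE: ", mean_squared_error(y_test, model.predict(X_test)))    # much larger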
I will be exploring the concepts of overfitting and underfitting in more detail in future posts.
Ridge Regression:
Like I said at the beginning of this post, the Ridge regression model is very similar to how the Linear regression model works. As some of you may already know, the cost function used in Linear regression is Mean Squared Error (MSE). However, the cost function for Ridge regression is a little different.
RSS (Residual Sum of Squares):
RSS = Σᵢ (yᵢ − (w·xᵢ + b))²
Ridge Regression Cost:
Cost = RSS + λ · Σⱼ wⱼ²
In Ridge Regression, we add lambda times (sum of the weights squared) to the RSS cost of the model. This is what prevents the model from overfitting. An RSS cost of zero would mean the model has fit the training data perfectly; by adding lambda times (sum of the weights squared) to the RSS, the total cost stays above zero whenever the weights are non-zero, so the model never treats itself as perfectly fit. Minimizing this combined cost forces the model to keep shifting the weights toward smaller values, which prevents it from fitting the training data too closely.
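To make the formula concrete, here is a small sketch of the Ridge cost written out with NumPy; the names (w, b, lam) are just illustrative, not taken from any particular library.

import numpy as np

def ridge_cost(X, y, w, b, lam):
    residuals = y - (X @ w + b)        # prediction errors on the training data
    rss = np.sum(residuals ** 2)       # RSS: Residual Sum of Squares
    penalty = lam * np.sum(w ** 2)     # lambda * (sum of the weights squared)
    return rss + penalty

# Even when the weights fit the training data exactly (RSS = 0),
# the penalty keeps the total cost above zero whenever the weights are non-zero.
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
print(ridge_cost(X, y, w=np.array([2.0]), b=0.0, lam=0.1))  # 0 + 0.1 * 2^2 = 0.4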
Lambda: a value greater than 0; the optimal value can be found using cross-validation.
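As a quick sketch of finding lambda by cross-validation, scikit-learn's RidgeCV can try several candidate values and keep the one with the best cross-validated score (note that scikit-learn calls this parameter alpha; the candidate values and the synthetic data below are illustrative).

import numpy as np
from sklearn.linear_model import RidgeCV

# Synthetic data just for demonstration
rng = np.random.RandomState(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.5, 0.0, -2.0, 3.0]) + rng.normal(scale=0.5, size=100)

# Try several candidate lambdas (alphas) with 5-fold cross-validation
ridge = RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0, 100.0], cv=5)
ridge.fit(X, y)
print("Best lambda (alpha):", ridge.alpha_)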