The previous two posts can be found here: DL01: Neural Networks Theory DL02: Writing a Neural Network from Scratch (Code)

To understand gradient descent, let's consider linear regression. Linear regression is a technique where, given some data points, we try to fit a line through those points and then make predictions by extrapolating that line. The challenge is to find the best fit for the line. For the sake of simplicity, we'll assume that the output (y) depends on just one input variable (x), i.e. the data points given to us are of the form (x, y). Now we want to fit a line through these data points. The line will obviously be of the form y = mx + b, and we want to find the best values for m and b. To get started with, m and b are randomly initialized. Hence, we get a random line in the beginning.
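As a minimal sketch of the idea, here is linear regression fitted by gradient descent on some synthetic data. The data, the learning rate, and the iteration count are all illustrative choices, not from the post; the key steps are the random initialization of m and b and the repeated updates against the gradient of the mean squared error.

```python
import numpy as np

# Hypothetical synthetic data: y = 3x + 2 plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 2.0 + rng.normal(0, 0.1, 100)

# Randomly initialize m and b -- this gives us a random line to start with.
m, b = rng.normal(), rng.normal()
lr = 0.1  # learning rate (illustrative choice)

for _ in range(1000):
    y_pred = m * x + b          # predictions of the current line
    error = y_pred - y
    # Gradients of the mean squared error with respect to m and b
    grad_m = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    # Step against the gradient to reduce the error
    m -= lr * grad_m
    b -= lr * grad_b
```

After enough iterations, m and b settle close to the slope and intercept used to generate the data.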