This is a brief summary of the Machine Learning course taught by Andrew Ng and offered by Stanford on Coursera.
You can find the lecture videos and additional materials at
https://www.coursera.org/learn/machine-learning/home/welcome
Recap
We want to fit a straight line to our data.
Hypothesis: $h_{\theta} (x) = \theta_0 + \theta_1 x$
Parameters: $\theta_0, \theta_1 $
With different choices of the parameters, we end up with different straight lines.
Cost Function: $J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^m (h_{\theta}(x^{(i)}) - y^{(i)})^2$
Goal: $\min_{\theta_0, \theta_1} J(\theta_0, \theta_1)$
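To make these definitions concrete, here is a minimal NumPy sketch of the hypothesis and the squared-error cost. The three-point training set is an illustrative assumption, not data taken from the slides.

```python
import numpy as np

def hypothesis(theta0, theta1, x):
    """Straight-line hypothesis h_theta(x) = theta0 + theta1 * x."""
    return theta0 + theta1 * x

def cost(theta0, theta1, x, y):
    """Squared-error cost J(theta0, theta1) = 1/(2m) * sum((h(x) - y)^2)."""
    m = len(y)
    errors = hypothesis(theta0, theta1, x) - y
    return np.sum(errors ** 2) / (2 * m)

# Assumed toy training set where y = x exactly (illustrative only).
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
print(cost(0.0, 1.0, x, y))  # 0.0   -> the line passes through every point
print(cost(0.0, 0.5, x, y))  # ~0.583 -> a worse fit gives a larger cost
```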
Each value of $\theta_1$ corresponds to a different hypothesis, i.e. to a different straight-line fit.
The objective $\min_{\theta_1} J(\theta_1)$ is achieved at $\theta_1 = 1$.
By setting $\theta_1 = 1$, we end up fitting this particular training set perfectly.
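One way to see this is to fix $\theta_0 = 0$, evaluate $J(\theta_1)$ over a grid of $\theta_1$ values, and pick the smallest. The sketch below does that on an assumed toy data set where $y = x$; the grid and the data are illustrative choices, not the lecture's own code.

```python
import numpy as np

# Assumed toy training set where y = x exactly (illustrative only).
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])

def J(theta1):
    """Cost with theta0 fixed at 0, so h(x) = theta1 * x."""
    return np.sum((theta1 * x - y) ** 2) / (2 * len(y))

# Evaluate J over a grid of theta1 values and report the minimizer.
theta1_grid = np.linspace(-0.5, 2.5, 301)
best = theta1_grid[np.argmin([J(t) for t in theta1_grid])]
print(f"theta1 minimizing J: {best:.2f}")    # 1.00
print(f"J at the minimum:    {J(best):.4f}") # 0.0000
```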
Lecturer's Note
If we try to think of it in visual terms, our training data set is scattered on the x-y plane. We are trying to make a straight line (defined by $h_{\theta}(x)$) that passes through these scattered data points.
Our objective is to get the best possible line. The best possible line is the one for which the average squared vertical distance of the scattered points from the line is the least. Ideally, the line should pass through all the points of our training data set. In such a case, the value of $J(\theta_0, \theta_1)$ will be 0. The following example shows the ideal situation where we have a cost function of 0.
When $\theta_1 = 1$, we get a slope of 1 which goes through every single data point in our model. Conversely, when $\theta_1 = 0.5$, we see the vertical distance from our fit to the data points increase. This increases our cost function to 0.58. Thus as a goal, we should try to minimize the cost function. In this case, $\theta_1 = 1 $ is our global minimum.
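As a check on the 0.58 figure, assume the three training points $(1,1)$, $(2,2)$, $(3,3)$ (an assumption consistent with the numbers above). With $\theta_0 = 0$ and $\theta_1 = 0.5$, the cost works out as

$$J(0.5) = \frac{1}{2 \cdot 3}\left[(0.5 - 1)^2 + (1 - 2)^2 + (1.5 - 3)^2\right] = \frac{0.25 + 1 + 2.25}{6} = \frac{3.5}{6} \approx 0.58$$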