This is a brief summary of the Machine Learning course taught by Andrew Ng (Stanford) on Coursera.
You can find the lecture videos and additional materials at
https://www.coursera.org/learn/machine-learning/home/welcome
Gradient Descent Algorithm:
repeat until convergence:
$\theta_j := \theta_j - \alpha \frac{\partial}{\partial\theta_j} J(\theta_0, \theta_1)$
where j=0,1 represents the feature index number.
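As a quick sketch in Python, the key point of this rule is that every $\theta_j$ is updated from the same old parameter vector. Here `grad_J` is a hypothetical function (not from the course) that returns the gradient of $J$ at the current $\theta$:

```python
import numpy as np

def gradient_descent_step(theta, grad_J, alpha):
    """One gradient descent step: theta_j := theta_j - alpha * dJ/dtheta_j."""
    # grad_J(theta) returns the whole gradient vector at once, so all
    # theta_j are updated from the same old theta -- the simultaneous update.
    return theta - alpha * grad_J(theta)
```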
Linear Regression Model:
$h_{\theta} (x) = \theta_0 + \theta_1 x$
$J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^m (h_{\theta}(x^{(i)}) - y^{(i)})^2$
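These two formulas translate almost directly into NumPy (a minimal sketch; the function and variable names here are my own, not from the course):

```python
import numpy as np

def hypothesis(theta, x):
    """h_theta(x) = theta_0 + theta_1 * x for univariate linear regression."""
    return theta[0] + theta[1] * x

def cost(theta, x, y):
    """Squared-error cost J(theta) = (1/2m) * sum_i (h_theta(x^(i)) - y^(i))^2."""
    m = len(y)
    errors = hypothesis(theta, x) - y
    return np.sum(errors ** 2) / (2 * m)
```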
Apply gradient descent to minimize the squared-error cost function. In order to apply gradient descent, we need to find the key term: the derivative.
$\frac{\partial}{\partial\theta_j} J(\theta_0, \theta_1) = \frac{\partial}{\partial\theta_j} \frac{1}{2m} \sum_{i=1}^{m} (h_{\theta}(x^{(i)}) - y^{(i)})^2$
= $\frac{\partial}{\partial\theta_j} \frac{1}{2m} \sum_{i=1}^{m} (\theta_0 + \theta_1 x^{(i)} - y^{(i)})^2$
For $j = 0$ ($\theta_0$): $\frac{\partial}{\partial\theta_0} J(\theta_0, \theta_1) = \frac{1}{m} \sum_{i=1}^{m} (h_{\theta}(x^{(i)}) - y^{(i)})$
For $j = 1$ ($\theta_1$): $\frac{\partial}{\partial\theta_1} J(\theta_0, \theta_1) = \frac{1}{m} \sum_{i=1}^{m} (h_{\theta}(x^{(i)}) - y^{(i)}) x^{(i)}$
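In code, the two partial derivatives look like this (again a sketch with illustrative names):

```python
import numpy as np

def gradients(theta, x, y):
    """Returns [dJ/dtheta_0, dJ/dtheta_1] for the squared-error cost."""
    m = len(y)
    errors = (theta[0] + theta[1] * x) - y   # h_theta(x^(i)) - y^(i)
    d_theta0 = np.sum(errors) / m            # j = 0 case
    d_theta1 = np.sum(errors * x) / m        # j = 1: extra x^(i) from the chain rule
    return np.array([d_theta0, d_theta1])
```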
Gradient Descent Algorithm
repeat until convergence {
$\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} (h_{\theta}(x^{(i)}) - y^{(i)})$
$\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} (h_{\theta}(x^{(i)}) - y^{(i)}) x^{(i)}$
}, updating $\theta_0$ and $\theta_1$ simultaneously
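Putting the update rules together gives a minimal batch implementation; the defaults for `alpha` and `n_iters` are illustrative, not values from the course:

```python
import numpy as np

def fit_linear_regression(x, y, alpha=0.01, n_iters=1000):
    """Batch gradient descent for h_theta(x) = theta_0 + theta_1 * x."""
    m = len(y)
    theta0 = theta1 = 0.0
    for _ in range(n_iters):
        errors = (theta0 + theta1 * x) - y
        # Compute both gradients before touching either parameter,
        # so theta_0 and theta_1 change simultaneously.
        grad0 = np.sum(errors) / m
        grad1 = np.sum(errors * x) / m
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1
```

Both gradients are computed from the same `errors` vector before either parameter is changed, which is exactly the simultaneous update required above. For example, `fit_linear_regression(np.array([1., 2., 3.]), np.array([2., 4., 6.]))` drives the parameters toward the minimum at $(\theta_0, \theta_1) = (0, 2)$, and more iterations get closer.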
Unlike the complicated 3D surfaces in the previous lecture, it turns out that the cost function for linear regression is always a bow-shaped, convex function. It has no local optima other than the global one.
"Batch" Gradient Descent
"Batch": Each step of gradient descent uses all the training examples.
Quiz: Which of the following are true statements? Select all that apply.
1. To make gradient descent converge, we must slowly decrease $\alpha$ over time.
2. Gradient descent is guaranteed to find the global min for any function $J(\theta_0, \theta_1)$.
3. Gradient descent can converge even if $\alpha$ is kept fixed. (But $\alpha$ cannot be too large, or else it may fail to converge.)
4. For the specific choice of cost function $J(\theta_0, \theta_1)$ used in linear regression, there are no local optima (other than the global optimum).
Answer: 3, 4
Gradient descent scales better to large data sets than the normal equation method from linear algebra.
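For comparison, the normal equation (which the course covers later) solves $\theta = (X^T X)^{-1} X^T y$ in closed form; it is exact but its cost grows roughly cubically with the number of features, which is why gradient descent wins on large problems. A quick sketch:

```python
import numpy as np

def normal_equation(x, y):
    """Closed-form least squares: theta = (X^T X)^{-1} X^T y."""
    X = np.column_stack([np.ones_like(x), x])  # prepend the intercept column
    # np.linalg.solve is preferred over forming the inverse explicitly.
    return np.linalg.solve(X.T @ X, X.T @ y)
```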
Lecturer's Note
When specifically applied to the case of linear regression, a new form of the gradient descent equation can be derived. We can substitute our actual cost function and our actual hypothesis function and modify the equation to :
repeat until convergence {
$\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} (h_{\theta}(x^{(i)}) - y^{(i)})$
$\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} (h_{\theta}(x^{(i)}) - y^{(i)}) x^{(i)}$
}, updating $\theta_0$ and $\theta_1$ simultaneously
where $m$ is the size of the training set, $\theta_0$ is a constant that changes simultaneously with $\theta_1$, and $x^{(i)}, y^{(i)}$ are values of the given training set (data).
Note that we have separated out the two cases for $\theta_j$ into separate equations for $\theta_0$ and $\theta_1$, and that for $\theta_1$ we multiply $x^{(i)}$ at the end due to the derivative. The following is a derivation of $\frac{\partial}{\partial\theta_j} J(\theta)$ for a single example:
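(Reconstructing that derivation for a single training example $(x, y)$, where the $\frac{1}{2}$ cancels the exponent brought down by the chain rule:)
$\frac{\partial}{\partial\theta_j} \frac{1}{2}(h_{\theta}(x) - y)^2 = 2 \cdot \frac{1}{2}(h_{\theta}(x) - y) \cdot \frac{\partial}{\partial\theta_j}(h_{\theta}(x) - y)$
$= (h_{\theta}(x) - y) \cdot \frac{\partial}{\partial\theta_j}(\theta_0 + \theta_1 x - y)$
For $j = 0$ the inner derivative is $1$, and for $j = 1$ it is $x$, which is exactly why the $x^{(i)}$ factor appears only in the $\theta_1$ update.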
The point of all this is that if we start with a guess for our hypothesis and then repeatedly apply these gradient descent equations, our hypothesis will become more and more accurate.
So, this is simply gradient descent on the original cost function $J$. This method looks at every example in the entire training set on every step, and is called batch gradient descent. Note that, while gradient descent can be susceptible to local minima in general, the optimization problem we have posed here for linear regression has only one global optimum and no other local optima; thus gradient descent always converges (assuming the learning rate $\alpha$ is not too large) to the global minimum. Indeed, $J$ is a convex quadratic function. Here is an example of gradient descent as it is run to minimize a quadratic function.
The ellipses shown above are the contours of a quadratic function. Also shown is the trajectory taken by gradient descent, which was initialized at (48,30). The x’s in the figure (joined by straight lines) mark the successive values of θ that gradient descent went through as it converged to its minimum.