This is a brief summary of the Machine Learning course taught by Andrew Ng (Stanford) on Coursera.
You can find the lecture videos and additional materials at
https://www.coursera.org/learn/machine-learning/home/welcome
There is no single fixed definition of machine learning. Some examples are given below:
Arthur Samuel (1959) - Field of study that gives computers the ability to learn without being explicitly programmed.
Tom Mitchell (1998) - Well-posed Learning Problem: A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.
Quiz: Suppose your email program watches which emails you do or do not mark as spam, and based on that learns how to better filter spam. What is the task T in this setting?
1. Classifying emails as spam or not spam.
2. Watching you label emails as spam or not spam.
3. The number of emails correctly classified as spam/ not spam.
4. None of the above.
Answer: 1.
Short explanation: option 2 is E (experience) and option 3 is P (performance).
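To make the E/T/P split concrete, here is a minimal sketch (not from the course) of the spam example: a hypothetical keyword-based filter where the labeled emails are E, the classify function is T, and accuracy is P. All the emails and words below are made up for illustration.

```python
# E: experience -- emails the user has already labeled as spam / not spam.
labeled_emails = [
    ("win a free prize now", True),
    ("meeting moved to 3pm", False),
    ("free money click here", True),
    ("lunch tomorrow?", False),
]

def learn_spam_words(examples):
    """Learn from E: collect words that appear only in spam messages."""
    spam_words, ham_words = set(), set()
    for text, is_spam in examples:
        (spam_words if is_spam else ham_words).update(text.split())
    return spam_words - ham_words

def classify(text, spam_words):
    """T: the task -- classify an email as spam (True) or not spam (False)."""
    return any(word in spam_words for word in text.split())

spam_words = learn_spam_words(labeled_emails)

# P: performance -- the fraction of emails classified correctly.
correct = sum(classify(text, spam_words) == label
              for text, label in labeled_emails)
accuracy = correct / len(labeled_emails)
```

As the program watches more labeled emails (more E), the learned word sets change, and ideally the accuracy (P) on the classification task (T) improves.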
Another example for Mitchell's definition,
Example: playing checkers.
E = the experience of playing many games of checkers
T = the task of playing checkers.
P = the probability that the program will win the next game.
Machine Learning Algorithms:
- Supervised Learning
- Unsupervised Learning
Others: Reinforcement Learning, Recommender systems
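The key difference between the two main categories is whether the training data comes with labels. A minimal sketch (not from the course), contrasting them on made-up 1-D data: supervised learning sees (input, label) pairs, while unsupervised learning sees only inputs and must find structure on its own.

```python
# Supervised learning: each example comes with a label y.
labeled = [(1.0, "small"), (1.2, "small"), (9.8, "large"), (10.1, "large")]

def predict(x, data):
    """Predict the label of the nearest labeled example (1-nearest-neighbor)."""
    return min(data, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised learning: only inputs, no labels; we look for structure.
unlabeled = [1.0, 1.2, 9.8, 10.1]

def two_clusters(points, iters=10):
    """Toy 2-means clustering: split points around two moving centers.
    (Assumes both clusters stay non-empty, which holds for this data.)"""
    c1, c2 = min(points), max(points)
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)
```

The supervised predictor can be right or wrong against known labels; the unsupervised routine has no labels to check against, only the grouping it discovers.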
The course also covers: practical advice for applying learning algorithms.
Teaching learning algorithms is like giving you a set of tools.
Equally or more important than giving you the tools is teaching you how to apply them.
Imagine that someone is teaching you how to be a carpenter:
knowing what a hammer, a screwdriver, and a saw are is not sufficient
--> learning how to use these tools properly is more important.