Logistic regression is used to classify things into a positive case or a negative case. It is a variant of linear regression whose output is squashed to a value between 0 and 1, which denotes the probability (likelihood) of the given sample being positive. For example: what is the probability of a given email being spam? We then predict a positive case if the hypothesis outputs a value above a certain threshold, which is normally 0.5.
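A minimal sketch of this idea in Python, assuming a hypothesis of the form h(x) = sigmoid(θ·x) and the usual 0.5 threshold (the weights and features below are made-up values for illustration):

```python
import math

def sigmoid(z):
    # Logistic function: maps any real number into the interval (0, 1),
    # interpreted as the probability of the positive class.
    return 1.0 / (1.0 + math.exp(-z))

def predict(theta, x, threshold=0.5):
    # Hypothesis h(x) = sigmoid(theta . x); classify as positive (1)
    # when the estimated probability clears the threshold.
    z = sum(t * xi for t, xi in zip(theta, x))
    return 1 if sigmoid(z) >= threshold else 0

# Hypothetical weights and features (x[0] = 1 is the bias term).
theta = [0.5, -1.2, 2.0]
x = [1.0, 0.3, 0.8]
print(predict(theta, x))  # prints 1: sigmoid(1.74) > 0.5
```

Note that sigmoid(0) = 0.5, so the 0.5 threshold corresponds to predicting positive exactly when θ·x > 0.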
Linear regression is a fancy name for a simple technique. This algorithm (model) predicts the most likely result (y) given the input features (x). To be able to predict, the model needs (lots of) historical data with the correct outputs (thus it is supervised learning). The model finds weights (θ) to assign to each feature, such that the sum of the weighted features is closest to the given answer y.
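For the one-feature case this can be sketched in a few lines of Python, using the closed-form least-squares solution for a line y = θ0 + θ1·x (the training data below is a made-up example where y = 2x + 1):

```python
def fit_line(xs, ys):
    # Closed-form least squares for y = theta0 + theta1 * x:
    # theta1 = covariance(x, y) / variance(x), theta0 from the means.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    theta1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
              / sum((x - mean_x) ** 2 for x in xs))
    theta0 = mean_y - theta1 * mean_x
    return theta0, theta1

def predict(theta0, theta1, x):
    # The fitted weights give a prediction for an unseen x.
    return theta0 + theta1 * x

# Hypothetical historical data lying on the line y = 2x + 1.
xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]
t0, t1 = fit_line(xs, ys)
print(predict(t0, t1, 5))  # prints 11.0
```

With many features, the same "weighted sum of features" idea is usually fit with gradient descent or the normal equation instead of this two-parameter formula.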
I recently finished the machine learning class offered online by Stanford. It was a great experience. Since I will not be using ML any time soon, I plan to write a few blog posts to capture my learning while it is still fresh. This should help me recollect the concepts at a later date. If someone else finds these notes useful, that is an added benefit.

Machine learning, as defined by Arthur Samuel (1959): the field of study that gives computers the ability to learn without being explicitly programmed.