What is Feature Scaling?
Feature Scaling is an important pre-processing step for some machine learning algorithms.
Imagine you have three friends whose individual heights and weights you know.
You would like to deduce Chris's T-shirt size from Cameron's and Sarah's by looking at their heights and weights.
(Table: each friend's height in m and weight in kg)
One way you could determine the shirt size is simply to add up each friend's height and weight. Because weight in kg is numerically much larger than height in m, the sum would be dominated by weight, which is exactly the problem feature scaling solves.
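A minimal sketch of min-max feature scaling with scikit-learn, using made-up heights and weights for the three friends (the numbers are illustrative only, not from the course):

```python
from sklearn.preprocessing import MinMaxScaler
import numpy as np

# Made-up heights (m) and weights (kg) for three friends.
X = np.array([[1.75, 60.0],
              [1.60, 52.0],
              [1.90, 83.0]])

# Min-max scaling maps each feature to [0, 1]: x' = (x - min) / (max - min),
# so weight no longer dominates just because its raw numbers are bigger.
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)
print(X_scaled)
```

After scaling, both features live on the same [0, 1] range, so summing (or otherwise combining) them treats height and weight comparably.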
from sklearn.tree import DecisionTreeClassifier

# min_samples_split=40: an internal node needs at least 40 samples before it
# may split, which limits tree depth and helps against overfitting.
clf = DecisionTreeClassifier(min_samples_split=40)
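As a quick illustration of how such a classifier is used in practice, here is a sketch on made-up toy data (the dataset and the labeling rule are assumptions for the example, not from the course):

```python
from sklearn.tree import DecisionTreeClassifier
import numpy as np

# Made-up toy data: 100 samples with 2 features, labeled by a simple rule.
rng = np.random.default_rng(0)
X = rng.random((100, 2))
y = (X[:, 0] > 0.5).astype(int)

# With min_samples_split=40, nodes holding fewer than 40 samples become leaves.
clf = DecisionTreeClassifier(min_samples_split=40)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy
```

Raising min_samples_split trades a little training accuracy for a simpler tree that generalizes better.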
One part of my bucket list for 2018 is finishing the Udacity Course UD120: Intro to Machine Learning.
The hosts of this course are Sebastian Thrun, formerly of Google X and founder of Udacity, and Katie Malone, creator of the Linear Digressions podcast.
The course consists of 17 lessons. Every lesson has a couple of hours of video and lots and lots of quizzes in it.
Lesson 2 of the Udacity Course UD120 – Intro to Machine Learning deals with Naive Bayes classification.
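A minimal sketch of Naive Bayes classification with scikit-learn's GaussianNB, on a tiny made-up dataset (the data points are assumptions for illustration):

```python
from sklearn.naive_bayes import GaussianNB
import numpy as np

# Two well-separated, made-up clusters of points with two features each.
X = np.array([[1.0, 2.0], [1.2, 1.8], [8.0, 9.0], [8.5, 8.7]])
y = np.array([0, 0, 1, 1])

# GaussianNB assumes the features are conditionally independent and
# normally distributed within each class -- the "naive" assumption.
clf = GaussianNB()
clf.fit(X, y)
print(clf.predict([[1.1, 2.1], [8.2, 9.1]]))
```

Despite the naive independence assumption, this classifier is fast to train and works surprisingly well on many real problems, such as text classification.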
The term Support Vector Machine (SVM) is a bit misleading: it is just the name of a very clever algorithm, invented by the Russian mathematicians Vladimir Vapnik and Alexey Chervonenkis in the 1960s. SVMs are used for classification and regression.
An SVM does this by finding the hyperplane that best separates the two classes of data.
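A minimal sketch of a linear SVM with scikit-learn, on two made-up, linearly separable blobs (the data points are assumptions for illustration):

```python
from sklearn.svm import SVC
import numpy as np

# Two linearly separable, made-up clusters.
X = np.array([[0.0, 0.0], [0.5, 0.5], [3.0, 3.0], [3.5, 3.0]])
y = np.array([0, 0, 1, 1])

# A linear-kernel SVM finds the separating hyperplane with the
# largest margin between the two classes.
clf = SVC(kernel="linear")
clf.fit(X, y)
print(clf.predict([[0.2, 0.3], [3.2, 3.1]]))
```

Swapping the kernel (e.g. to "rbf") lets the same machinery find non-linear decision boundaries.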