Now that we have already discussed linear regression and logistic regression in detail, it’s time to move on to the Support Vector Machine (SVM).
SVM is another simple yet powerful algorithm that every machine learning practitioner should have in their arsenal.
SVM is highly preferred by many because it can produce significant accuracy with less computation power.
SVM can be used for both regression and classification tasks, but it is most widely used for classification.
How does SVM work?
Let’s understand and visualize the basics of SVM using a simple example.
Imagine we have two tags, red and blue, and our data has two features, x and y. We want a classifier that, given a pair of (x, y) coordinates, outputs whether the point is red or blue. We plot our already-labeled training data on a plane.
SVM takes these data points and outputs the hyperplane that best separates the tags; note that in two dimensions this hyperplane is just a line. This line is the decision boundary: anything that falls on one side of it we classify as blue, and anything that falls on the other side as red.
But what exactly is the best hyperplane? For SVM, it’s the one that maximizes the margin to both tags. In other words, it is the hyperplane (remember, a line in this case) whose distance to the nearest element of each tag is largest.
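The red/blue example above can be sketched in a few lines with scikit-learn. This is a minimal illustration, not the article’s own code: the point coordinates are made up, and `SVC` with a linear kernel stands in for the maximal-margin hyperplane described above.

```python
import numpy as np
from sklearn.svm import SVC

# Toy training data: (x, y) coordinates labeled "red" or "blue"
X = np.array([[1.0, 2.0], [2.0, 1.5], [1.5, 1.0],   # red cluster
              [5.0, 6.0], [6.0, 5.5], [5.5, 7.0]])  # blue cluster
y = np.array(["red", "red", "red", "blue", "blue", "blue"])

# A linear kernel gives the straight-line decision boundary described above;
# C controls the trade-off between a wide margin and misclassifications.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# The support vectors are the nearest points of each tag -- the ones
# that define the maximal margin.
print(clf.support_vectors_)

# A new point is classified by which side of the hyperplane it falls on.
print(clf.predict([[2.0, 2.0]]))   # lands on the red side
print(clf.predict([[5.5, 6.0]]))   # lands on the blue side
```

Plotting `clf.decision_function` over a grid of (x, y) points would reproduce the decision boundary and margins shown in the figure above.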
Suggested video on exactly how this optimal hyperplane is found: https://www.youtube.com/watch?v=1NxnPkZM9bc
A simple real-life example of SVM: face detection
SVM can be used to classify parts of an image as face or non-face. We use training data of n × n pixel patches labeled with one of two classes: face (+1) and non-face (−1).
Once the classes are defined, the algorithm extracts pixel-based features (such as brightness values) from each patch and learns a boundary between the two classes. It then draws a square box around the regions classified as faces. SVM classifies every image window using the same process.
I hope you liked it!