Support Vector Machines
Margins
Kernel Trick
SMO Algorithm
Notation

We use labels y ∈ {-1, 1} and parameters w, b, so the classifier is h_{w,b}(x) = g(w^T x + b), where

g(z) = 1 when z >= 0

g(z) = -1 otherwise

So the classifier has the same form as the perceptron: it outputs a label in {-1, 1} directly, without first estimating a probability.
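A minimal sketch of this classifier in NumPy (w and b are assumed to be already-learned parameters; the names g and h just mirror the notation above):

import numpy as np

def g(z):
    # Threshold function: +1 for z >= 0, -1 otherwise.
    return np.where(z >= 0, 1, -1)

def h(x, w, b):
    # h_{w,b}(x) = g(w^T x + b)
    return g(np.dot(w, x) + b)

# Toy separating hyperplane in 2D.
w = np.array([1.0, -1.0])
b = 0.5
print(h(np.array([2.0, 0.0]), w, b))  # 1
print(h(np.array([0.0, 2.0]), w, b))  # -1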

Functional Margin

A large positive functional margin means a confident and correct prediction.
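For a training example (x^{(i)}, y^{(i)}), the functional margin of (w, b) is defined as

\hat{\gamma}^{(i)} = y^{(i)} \left( w^T x^{(i)} + b \right)

Note that scaling (w, b) to (2w, 2b) doubles the functional margin without changing the classifier at all, which is what motivates the geometric margin next.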

Geometric Margin

If ||w|| = 1, the functional margin equals the geometric margin; in general the geometric margin is the functional margin divided by ||w||, which makes it invariant to rescaling (w, b).
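Concretely, the geometric margin is the signed Euclidean distance from x^{(i)} to the decision boundary:

\gamma^{(i)} = y^{(i)} \left( \left( \frac{w}{\|w\|} \right)^T x^{(i)} + \frac{b}{\|w\|} \right) = \frac{\hat{\gamma}^{(i)}}{\|w\|}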

Margin of (w,b) wrt S

Given a training set S = {(x^{(i)}, y^{(i)}); i = 1, ..., m}, the geometric margin of (w, b) with respect to S is the smallest of the geometric margins on the individual training examples.
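In symbols:

\gamma = \min_{i = 1, \dots, m} \gamma^{(i)}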


Calculating the optimal margin classifier
We want the decision boundary that maximizes the geometric margin.
- Assume the training set is linearly separable (the resulting optimization problem is sketched below).
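Because the geometric margin is invariant to rescaling (w, b), we can fix the functional margin to 1; maximizing \gamma = \hat{\gamma} / \|w\| then becomes the standard convex quadratic program:

\begin{aligned}
\min_{w,b} \quad & \frac{1}{2} \|w\|^2 \\
\text{s.t.} \quad & y^{(i)} \left( w^T x^{(i)} + b \right) \ge 1, \quad i = 1, \dots, m
\end{aligned}

Its solution (w*, b*) is the optimal margin classifier.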
NEED TO GO OVER
- SMO Algorithm
- Lagrange
Apply to SVMs
Simply replace inner products ⟨x, z⟩ with a kernel K(x, z). This lets the algorithm work efficiently in the high-dimensional feature space corresponding to K, without ever computing the feature mapping explicitly.
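A quick numerical check of the trick, using the degree-2 polynomial kernel K(x, z) = (x^T z)^2, whose feature map contains all n^2 degree-2 monomials (phi and K here are just illustrative names):

import numpy as np
from itertools import product

def phi(x):
    # Explicit feature map for K(x, z) = (x^T z)^2:
    # all degree-2 monomials x_i * x_j (n^2 features).
    return np.array([xi * xj for xi, xj in product(x, x)])

def K(x, z):
    # Kernel trick: the same value in O(n) time,
    # without ever forming the n^2-dimensional vectors.
    return np.dot(x, z) ** 2

x = np.array([1.0, 2.0, 3.0])
z = np.array([4.0, 5.0, 6.0])
print(np.dot(phi(x), phi(z)))  # 1024.0
print(K(x, z))                 # 1024.0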
Lagrange
Using Lagrange duality, we transform the primal problem above into its dual. The dual is the form SMO actually optimizes, and it exposes the training inputs only through inner products, which is exactly what the kernel trick needs.
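The resulting dual problem (the standard form, with one multiplier \alpha_i per training example):

\begin{aligned}
\max_{\alpha} \quad & W(\alpha) = \sum_{i=1}^{m} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{m} y^{(i)} y^{(j)} \alpha_i \alpha_j \langle x^{(i)}, x^{(j)} \rangle \\
\text{s.t.} \quad & \alpha_i \ge 0, \quad i = 1, \dots, m \\
 & \sum_{i=1}^{m} \alpha_i y^{(i)} = 0
\end{aligned}

Given the optimal \alpha, the weights are recovered as w = \sum_i \alpha_i y^{(i)} x^{(i)}, and replacing \langle x^{(i)}, x^{(j)} \rangle with K(x^{(i)}, x^{(j)}) kernelizes the algorithm.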