Supervised Learning with Support Vectors

# Introduction

In this tutorial, we will learn about Support Vector Machines (SVM), a set of supervised learning methods used for classification, regression, and outlier detection. SVMs are effective in high-dimensional spaces and can still perform well when the number of dimensions exceeds the number of samples. Their advantages include memory efficiency and versatility through the choice of different kernel functions. However, it is important to avoid overfitting and to choose the right kernel and regularization term for the given problem.

In this tutorial, we will cover the following topics:

1. Classification with SVM
2. Multi-class classification
3. Scores and probabilities
4. Unbalanced problems
5. Regression with SVM
6. Density estimation and novelty detection

## VM Tips

After the VM startup is done, click the top left corner to switch to the **Notebook** tab to access Jupyter Notebook for practice. Sometimes you may need to wait a few seconds for Jupyter Notebook to finish loading. Validation of operations cannot be automated because of limitations in Jupyter Notebook. If you face issues during learning, feel free to ask Labby. Provide feedback after the session, and we will promptly resolve the problem for you.
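As a preview of the classification workflow covered in the first topic, here is a minimal sketch using scikit-learn's `SVC`. The dataset and parameter values chosen here are illustrative assumptions, not part of the lab steps:

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Load a small toy dataset (illustrative choice; the lab may use different data)
X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a support vector classifier with an RBF kernel and default regularization
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)

# Evaluate on held-out data
print("Test accuracy:", clf.score(X_test, y_test))
```

The kernel and the regularization parameter `C` are the main knobs mentioned above; swapping `kernel="rbf"` for `"linear"` or adjusting `C` changes how strongly the model trades margin width against training errors.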
