Supervised Learning: Classification

Intermediate

In this course, we continue our study of supervised learning by turning to classification problems. The lessons cover logistic regression, the K-nearest neighbors algorithm, naive Bayes, support vector machines, the perceptron and artificial neural networks, decision trees and random forests, and bagging and boosting methods. Each topic begins with the principle behind the method, and you are expected to fully understand its implementation before moving on.

scikit-learn · Machine Learning

Introduction

In this course, you will learn how to solve classification problems using various supervised learning algorithms.

🎯 Tasks

In this course, you will learn:

  • The principles behind logistic regression, the K-nearest neighbors algorithm, naive Bayes, support vector machines, the perceptron and artificial neural networks, decision trees and random forests, and bagging and boosting methods.
  • How to implement each of these classification algorithms.
  • How to apply them to real-world classification problems, such as handwritten digit recognition.
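To give a flavor of the workflow these lessons build toward, here is a minimal sketch using standard scikit-learn APIs: training a logistic regression classifier on the bundled handwritten digits dataset. The split ratio and random seed are arbitrary choices for illustration.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load the 8x8 handwritten digit images as flattened feature vectors
X, y = load_digits(return_X_y=True)

# Hold out a quarter of the data for evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Raise max_iter so the default lbfgs solver converges on this data
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```

Every algorithm in the list above follows this same fit/predict pattern in scikit-learn, so swapping in, say, `KNeighborsClassifier` or `RandomForestClassifier` changes only the estimator line.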

🏆 Achievements

After completing this course, you will be able to:

  • Understand the strengths and weaknesses of different classification algorithms and choose the appropriate one for your problem.
  • Implement and apply these algorithms to solve classification problems in various domains.
  • Evaluate the performance of these algorithms using cross-validation techniques.
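As a sketch of the last point, scikit-learn's `cross_val_score` evaluates an estimator with k-fold cross-validation in one call. The choice of a K-nearest neighbors model and 5 folds here is illustrative, not prescribed by the course.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)

# 5-fold cross-validation: returns one accuracy score per fold
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5)

print(f"mean accuracy: {scores.mean():.3f} (+/- {scores.std():.3f})")
```

Comparing these per-fold scores across algorithms is a standard way to choose between classifiers before committing to one.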

Teacher

Labby

Labby is the LabEx teacher.