Advanced Course in Machine Learning

Algorithms and machine learning
Advanced studies
More detailed coverage of machine learning methods and algorithms, presented from a statistical and optimization perspective. The course goes beyond Introduction to Machine Learning in both scope and level of detail, introducing techniques such as latent variable models, nonlinear dimensionality reduction, approximate Bayesian inference, and deep learning. Prerequisites: Introduction to Machine Learning or similar knowledge. The course is also suitable for mathematics and statistics students interested in machine learning.


11.05.2017 16.00 B123
Year Semester Date Period Language In charge
2017 spring 14.03-05.05. 4-4 English Arto Klami


Time Room Lecturer Date
Tue 10-12 D122 Arto Klami 14.03.2017-11.04.2017
Thu 12-14 D122 Arto Klami 16.03.2017-06.04.2017
Thu 12-14 D122 Arto Klami 20.04.2017-04.05.2017
Tue 10-12 D122 Arto Klami 25.04.2017-02.05.2017

Exercise groups

Group: 1
Time Room Instructor Date Observe
Fri 12-14 B221 Aditya Jitta 17.03.2017-07.04.2017
Fri 12-14 B221 Aditya Jitta 21.04.2017-05.05.2017

Registration for this course starts on Tuesday, 16 February at 9.00.


The course has a Moodle page that contains all the material (lecture slides and exercise problems). It is also used for turning in the exercise problem solutions.

The course is a natural continuation of the Introduction to Machine Learning course. It covers more topics and also goes deeper, discussing both the theory and practice of machine learning.

The topics include (slight changes are to be expected):

  1. Machine learning in general: what it is about, what it can achieve, and what its underlying fundamentals are
  2. The probabilistic perspective on machine learning, some Bayesian inference
  3. Optimization and regularization
  4. Unsupervised learning
    1. Clustering, mixture models
    2. Linear latent variable models (PCA, ICA, etc)
    3. Non-linear dimensionality reduction
    4. Matrix factorization, recommender engines
  5. Supervised learning
    1. Regression and classification; basic principles
    2. Kernel methods and support vector machines
    3. Decision trees, ensembles and boosting
  6. Neural networks and deep learning


Completing the course

There are two alternative ways to complete the course:

  1. Solve a sufficient proportion of the weekly exercises and attend the course exam (or a later separate exam)
  2. Complete a small research project and attend a separate exam. The project details are available in Moodle, but can also be downloaded directly from here

The exercise session is not obligatory. It is simply a session during which the course organizers are available to help with the exercises.

Literature and material

The course lectures are primarily based on the book "Machine Learning: A Probabilistic Perspective" by Kevin P. Murphy, but most of the material can also be found in freely available sources. Links to alternative reading sources are provided in the Lectures tab (and sometimes in the exercise problems).