Better learning algorithms for neural networks

Event type: Guest lecture
Event time: 13.06.2011 - 14:15 - 15:00
Lecturer: Geoffrey Hinton
Place: D123
Description: 

Geoffrey Hinton, one of the most influential figures in machine learning research, will give a talk on Monday 13th June at 14:15, in lecture hall D123 of Exactum, Kumpula.

Title: Better learning algorithms for neural networks

Abstract: Neural networks that contain many layers of non-linear processing units are extremely powerful computational devices, but they are also very difficult to train. In the 1980s there was a lot of excitement about a new way of training them that involved back-propagating error derivatives through the layers, but this learning algorithm never worked very well for deep networks that have many layers between the input and the output. I will describe two major theoretical developments that make back-propagation work much better. One of these developments allows back-propagation to beat the current state of the art for recognizing objects and phonemes. The other development allows a recurrent neural network to learn a lot about the syntax and semantics of English just by trying to predict the next character in Wikipedia. After learning, the recurrent neural network can generate novel and plausible pieces of text.
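For readers unfamiliar with the back-propagation algorithm mentioned in the abstract, the following is a minimal toy sketch of the idea: a small feed-forward network trained on the XOR problem by propagating error derivatives backwards through its layers. It is an illustration of the basic 1980s algorithm only, not of the developments to be presented in the talk; the network size, learning rate, and XOR task are all arbitrary choices for the example.

```python
# Toy sketch of back-propagation in a two-layer network (XOR task).
# All hyperparameters here are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one hidden layer of 4 units
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

lr = 1.0
losses = []
for step in range(2000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))

    # backward pass: propagate error derivatives through the layers
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)      # chain rule into the hidden layer
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0)

    # gradient-descent update
    W2 -= lr * d_W2; b2 -= lr * d_b2
    W1 -= lr * d_W1; b1 -= lr * d_b1

print(losses[0], losses[-1])
```

The difficulty the abstract alludes to is that with many more layers than this, the repeated chain-rule products make the error derivatives shrink or grow rapidly, so plain back-propagation trains deep networks poorly.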

Home page: http://www.cs.toronto.edu/~hinton/

Welcome!

25.05.2011 - 15:28 Aapo Hyvärinen