Seminar: Neural Networks for Language Applications

58316307
3-0
Algorithms and Machine Learning
Advanced studies
Year: 2016
Semester: autumn
Dates: 07.11-12.12.
Period: 2-2
Language: English
Person in charge: Roman Yangarber

Lectures

Time: Mon 12-14
Room: C220
Lecturer: Roman Yangarber
Dates: 07.11.2016-12.12.2016

General

In this seminar we investigate recent literature on neural networks and deep learning applied to the analysis of language. Interest in these applications has exploded, particularly over the last five years, since these techniques have demonstrated results that surpass many previously used approaches on important problems in language analysis. This followed earlier successes in other areas of application, such as vision and image processing.

In semantics, for example, researchers have tried to model the "meaning" of linguistic objects: the meaning of words, short phrases, sentences, or entire documents. Semantics is important in many tasks in Natural Language Processing (NLP), since it allows the computer to model understanding of the content of a piece of text. Modeling meaning comes down to finding effective representations for these objects. For example, when considering the meaning of words, we may wish to find words that have similar meaning, related meaning, "opposite" meaning, etc. The same question can be asked about higher-order objects, e.g., whether two given sentences have "the same" or similar meaning. For the meaning of a document, we can ask whether the document describes some particular kind of event (e.g., a bankruptcy or a job announcement), or whether a document about a particular company describes the company in positive or negative terms.

Many problems in NLP can be viewed in terms of semantic representation, and whether we can find a representation appropriate for a given task determines, in large part, the level of success we can achieve on that task. Neural networks provide representations that are flexible: they are useful for a variety of tasks, sometimes for a surprisingly broad variety of tasks. A neural network trained for one task is often found to be useful for totally different, seemingly unrelated tasks. Intuitively, this suggests that we may be getting close to a representation of language that is in some sense universal, or "true". On top of these networks we can build many interesting applications, such as document classification, sentiment analysis, and many others.

In the seminar we will study research papers on recent applications of neural networks and deep learning to a range of problems in language. We will have several invited guest speakers from outside the class, presenting their own research.
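As a concrete illustration of the idea of representing word meaning, the minimal sketch below compares word vectors with cosine similarity. It is a hypothetical toy example: the three-dimensional vectors are invented for illustration only, whereas in practice the vectors would come from a model trained on large text corpora.

    import numpy as np

    # Toy word vectors, invented purely for illustration; a real system would
    # use embeddings learned by a neural network from large amounts of text.
    vectors = {
        "king":  np.array([0.8, 0.3, 0.1]),
        "queen": np.array([0.7, 0.4, 0.2]),
        "apple": np.array([0.1, 0.9, 0.7]),
    }

    def cosine_similarity(u, v):
        """Cosine of the angle between two vectors; 1.0 means identical direction."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Words with similar meaning should receive similar vectors.
    print(cosine_similarity(vectors["king"], vectors["queen"]))  # high
    print(cosine_similarity(vectors["king"], vectors["apple"]))  # lower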

Prerequisites

- Fundamental understanding of machine learning;
- Fundamentals of NLP, or agreement with the instructor, in case the chosen topic can be approached without in-depth knowledge of NLP.
 

Completing the course


Each participant should prepare to do the following:
- present two papers on a topic of his/her choice to the audience; the two papers may cover the same topic or two separate topics,
- answer questions from the audience,
- attend the presentations of other members,
- participate in the presentations of other members by reading their papers and asking questions of the presenter.

The grade is based on the presentations (60%), active participation in the presentations of others (30%), and attendance (10%).

Literature and material


Suggested readings and the paper selection will be posted on the Course Wiki.