HUT - CIS /Opinnot/T-122.101/s2002/index.shtml



Courses in previous years: [ 2000 | 2001 ]

Tik-122.101 Special Course in Information Technology

Graphical Models

Lecturer: Prof. (pro tem) Jaakko Hollmén, Prof. Heikki Mannila
Course Assistant: M.Sc. (Tech.) Salla Ruosaari
Semester: autumn 2002
Credit points: 4 cr
Place: lecture hall T4 in the computer science building
Time: Wednesday, 14:15 - 16:00, beginning September 18th
Language: English (or Finnish)
Course book: Michael I. Jordan, Terrence J. Sejnowski (eds.): Graphical Models - Foundations of Neural Computation, MIT Press 2001, ISBN 0-262-60042-0. The book contains collected articles from the journal Neural Computation.
Homepage: http://www.cis.hut.fi/Opinnot/T-122.101/

Graphical Models - course description

Graphical models are probabilistic models with origins in many different research communities, such as artificial intelligence, statistics, and neural networks. The general framework of graphical models provides a mathematical formalism that helps in understanding the similarities and differences between various learning architectures and algorithms. Typically, the algorithms for a particular network architecture can be derived from the inference and learning machinery of general graphical models. One such example is the junction tree algorithm, which can be used to derive inference rules for Gaussian mixture models and hidden Markov models, for instance. The course will cover topics around Bayesian networks, Boltzmann machines, mean-field and variational approximations to learning, latent variable models, and the propagation of probabilities in networks with loops. The material is based on a recent textbook (see above), which is a collection of original contributions by leading researchers in the field.
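As a concrete illustration of the inference machinery described above: for a chain-structured graphical model, exact inference reduces to the well-known forward algorithm for hidden Markov models. The sketch below shows this in plain Python; the model parameters and observation sequence are invented toy numbers, not material from the course.

```python
def forward(pi, A, B, obs):
    """Return P(observations) for an HMM by summing over hidden state paths.

    pi  -- initial state distribution, length N
    A   -- transition probabilities, A[i][j] = P(next state j | state i)
    B   -- emission probabilities,  B[i][k] = P(symbol k | state i)
    obs -- sequence of observed symbol indices
    """
    n = len(pi)
    # Initialise with the first observation.
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    # Propagate the joint probability forward one observation at a time.
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
            for j in range(n)
        ]
    # Marginalise over the final hidden state.
    return sum(alpha)

# Toy model: two hidden states, two output symbols.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
print(forward(pi, A, B, [0, 1, 0]))  # likelihood of the sequence 0, 1, 0
```

The same dynamic-programming structure is what the junction tree algorithm yields when specialised to a chain, which is why HMM inference is a standard first example in the graphical-models literature.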

The course is suitable for advanced graduate students and students with a specific research focus on graphical models. The course has considerable thematic overlap with the course Inference and Learning in Bayesian Networks organized last year (in Finnish), so think carefully before enrolling. Attendance will be limited to 15-20 students.

To pass the course, students are expected to give an oral presentation on one of the given topics (chapters) and hand out a summary to the others. In addition to active seminar participation, there will be exercises and a programming exercise. Exercises must be returned by January 31st, 2003.

The course information is presently missing from Topi. You can enroll by sending e-mail to the course assistant and by attending the first seminar on September 18th.

Timetable

Date   | Subject                                                                  | Speaker
18.9.  | Introduction                                                             | Jaakko Hollmén
25.9.  | 1 Probabilistic Independence Networks for Hidden Markov Probability Models | Salla Ruosaari
2.10.  | NO SEMINAR                                                               |
9.10.  | 2 Learning and Relearning in Boltzmann Machines (+ 3 Learning in Boltzmann Trees) | Jussi Pakkanen
       | 4 Deterministic Boltzmann Learning Performs Steepest Descent in Weight-Space | Jussi Ahola
16.10. | 8 Variational Learning in Nonlinear Gaussian Belief Networks             | Merja Oja
23.10. | 11 Hierarchical Mixtures of Experts and the EM Algorithm                 | Jaakko Hollmén
       | 5 Attractor Dynamics in Feedforward Neural Networks (+ 6 Efficient Learning in Boltzmann Machines Using Linear Response Theory) | Eerika Savia
30.10. | 12 Hidden Neural Networks                                                | Olli-Pekka Rinta-Koski
       | 13 Variational Learning for Switching State-Space Models                 | Alexander Ilin
6.11.  | 14 Nonlinear Time-Series Prediction with Missing and Noisy Data          | Karthikesh Raju
       | 10 Independent Factor Analysis                                           | Ella Bingham

Additional material

Zoubin Ghahramani and Sam Roweis: Probabilistic Models for Unsupervised Learning, tutorial at the 1999 Neural Information Processing Systems conference (NIPS'99).

Max Welling and Geoffrey E. Hinton. A New Learning Algorithm for Mean Field Boltzmann Machines. In Proceedings of the 12th International Conference on Artificial Neural Networks, pp. 351-357, 2002.

Cecil Huang and Adnan Darwiche. Inference in Belief Networks: A Procedural Guide. International Journal of Approximate Reasoning, 15(3), pp. 225-263, 1996.

Practical assignment

The practical assignment consists of two parts. In the first part, there are (simple) questions related to the topics covered during the course (ps, pdf). In the second part, you are asked to construct a program of your own (ps, pdf). The exercises are to be completed before the 31st of January 2003. They should be handed in to the course assistant (room C311 in the computer science building).

More information

Additional information about the course is available from Jaakko Hollmén and Salla Ruosaari.



http://www.cis.hut.fi/Opinnot/T-122.101/s2002/index.shtml
webmaster@www.cis.hut.fi
Wednesday, 06-Nov-2002 13:21:44 EET