Instructor: Liming Cai
Office: 544 Boyd
Phone: 2-6081
Email: cai@cs.uga.edu
Lecture hours: 12:20-1:10 (M) and 12:30-1:45 (TR)
Classrooms: 306 Boyd (M) and Forest Resources 1, Room 0303 (TR)
Office Hours: 1:15-2:15 (Mondays), 9:45-10:45 (Tuesdays), or by appointment
Objectives
The rising tide of artificial intelligence, and especially the rapid development of machine
learning, underlines the great potential of such computational methods. It also highlights the
importance of statistical learning from available data for effectively modeling real-world
problems, where traditional solutions have been predominantly deterministic and often ineffective.
To prepare students for challenging problems whose effective solutions may require the development
of probabilistic models and methods, this course offers an opportunity to study fundamental
theories and algorithms for probabilistic networks, including model construction from structural
and non-structural data, and statistical inference and prediction with such models. It is designed
to complement a machine learning course focused on techniques and software, or an algorithms
course whose targets are deterministic models.
Prerequisites
Completion of CSCI 4470/6470 (Algorithms) with a decent grade, or approval of the instructor,
is required.
Contents
This course will expose students to a number of basic topics in probabilistic networks, learning,
and inference, spanning the following areas:
- Fundamentals in probability theory, statistics, information theory, and optimization algorithms;
- Random walk, Markov chain, and randomized algorithms;
- Markov trees, Markov networks, Bayesian networks, and causality networks;
- Model topology learning, parameter estimation, maximum likelihood, posterior probability, and the EM algorithm;
- Inference with probabilistic models and the Markov chain Monte Carlo (MCMC) algorithm (a small illustration follows this list).
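
To give a flavor of the last topic, the short Python sketch below (illustrative only, not course material; the toy target distribution and all names are invented for this example) implements a minimal Metropolis-Hastings sampler, the basic MCMC algorithm: it proposes a neighboring state and accepts the move with probability min(1, p(candidate)/p(current)), which leaves the target distribution stationary.

    import random

    # Toy unnormalized target distribution over four states
    # (assumed for illustration; any positive weights would do).
    weights = [1.0, 2.0, 4.0, 1.0]

    def propose(state):
        # Symmetric random-walk proposal on a ring of four states.
        return (state + random.choice([-1, 1])) % len(weights)

    def metropolis_hastings(steps, state=0):
        counts = [0] * len(weights)
        for _ in range(steps):
            candidate = propose(state)
            # Metropolis acceptance rule for a symmetric proposal.
            if random.random() < min(1.0, weights[candidate] / weights[state]):
                state = candidate
            counts[state] += 1
        return [c / steps for c in counts]

    estimate = metropolis_hastings(100_000)
    target = [w / sum(weights) for w in weights]
    print("MCMC estimate:", [round(p, 3) for p in estimate])
    print("true target:  ", [round(p, 3) for p in target])

Run long enough, the empirical state frequencies converge to the normalized target weights; this stationarity property is among the presentation topics listed later in the syllabus.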
Format and requirements
The teaching will be a mix of roughly two-thirds lectures by the instructor and one-third
presentations by students on their literature readings or research projects. No textbook will be
used. Grading will be based on project reports, presentations, and participation in classroom
discussions.
Reference books
- D. Koller and N. Friedman, Probabilistic Graphical Models: Principles and Techniques, MIT Press, 2009.
- J. Pearl, Causality: Models, Reasoning, and Inference, 2nd edition, Cambridge University Press, 2009.
- R. Durbin, S. Eddy, A. Krogh, and G. Mitchison, Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids, Cambridge University Press, 1998.
Some papers of interest
- Minsky, Music, Mind, and Meaning (1982)
- Balduzzi, Semantics, Representations and Grammars for Deep Learning (2015)
- Choi et al., Learning Latent Tree Graphical Models (2011)
- Mohammadi et al., Generalized Permutohedra from Probabilistic Graphical Models (2016)
- Mukherjee and Basu, Lower bounds over Boolean inputs for deep neural networks with ReLU gates (2017)
- Hamanaka et al., DeepGTTM-II: Automatic Generation of Metrical Structure based on Deep Learning Technique (2016)
- Wainwright and Jordan, Graphical Models, Exponential Families, and Variational Inference (2008)
- Arora et al., A Practical Algorithm for Topic Modeling with Provable Guarantees (2012)
- Galas et al., Expansion of the Kullback-Leibler Divergence, and a new class of information metrics (2017)
- Liang et al., Efficient Learning of Optimal Markov Network Topology with k-Tree Modeling (2018)
- Liang et al., Stochastic k-Tree Grammar and Its Application in Biomolecular Structure Modeling (2014)
- Collins, Probabilistic Context-Free Grammars (PCFGs) (2011)
Music as languages
Research Presentations and Names
- Markov chain stationary properties [Saar]
- probabilistic context-free grammar [Hale]
- sampling algorithms in general [Yifan] [Weifeng]
- Markov chain Monte Carlo algorithm [Xinchen] [Yifan]
- prior and Dirichlet distribution [Xiaodong]
- Bayesian network and properties [Logan] [Jonathan]
- Markov networks (random fields) and properties [Vinay]
- Bayesian learning methods in general [Jonathan]
- neural networks and advancements [An] [Zhiwei]
- representation learning [Bahaa]
- neural networks vs probabilistic networks [Nihal]
- variational inference
- Bayesian networks and causal networks [Ailing] [Yang]
- extension to the Kullback-Leibler divergence
Academic Dishonesty
It is expected that the work you submit is your own. Plagiarism and other
forms of academic dishonesty will be handled within the guidelines of the
Student Handbook. The usual penalty for academic dishonesty is loss of credit
for the assignment in question; however, stronger measures may be taken when
conditions warrant.