Over the last decade, probabilistic topic models have emerged as a powerful and popular tool for analyzing large collections of unstructured data. Although originally proposed for textual data, topic models have since been applied to many other data types, such as images, videos, music, social networks, and biological data. In this tutorial, I will discuss both the modeling and algorithmic aspects of topic models. I will review the fundamentals of probabilistic generative models and explain how they can be applied to textual data, starting from simple unigram models and building up to the Latent Dirichlet Allocation (LDA) model. I will then turn to the problem of learning and inference in topic models, explain why exact inference is intractable for such models, review the principle of inference by sampling, and discuss sampling-based strategies for inference in topic models. Finally, I will discuss some shortcomings of LDA and briefly touch upon more advanced topic models, such as syntactic, correlated, dynamic, interpretable, and supervised topic models.
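To give a flavor of the generative view of documents that the lecture builds up to, the following is a minimal sketch of LDA's generative process in Python (using NumPy). The function name, parameter names, and the specific hyperparameter values are illustrative assumptions, not part of the lecture material; LDA itself only specifies that topics and per-document topic proportions are drawn from Dirichlet priors, and each word is drawn by first sampling a topic.

```python
import numpy as np

def generate_corpus(n_docs, n_words_per_doc, n_topics, vocab_size,
                    alpha=0.1, beta=0.01, seed=0):
    """Toy sketch of LDA's generative process (hypothetical helper).

    Each topic is a distribution over the vocabulary; each document is
    generated from its own mixture over topics.
    """
    rng = np.random.default_rng(seed)
    # Draw each topic's word distribution from a symmetric Dirichlet prior.
    topics = rng.dirichlet(np.full(vocab_size, beta), size=n_topics)
    corpus = []
    for _ in range(n_docs):
        # Draw this document's topic proportions from a Dirichlet prior.
        theta = rng.dirichlet(np.full(n_topics, alpha))
        doc = []
        for _ in range(n_words_per_doc):
            z = rng.choice(n_topics, p=theta)        # sample a topic for this word
            w = rng.choice(vocab_size, p=topics[z])  # sample a word from that topic
            doc.append(w)
        corpus.append(doc)
    return corpus

# Generate a small synthetic corpus of word indices.
corpus = generate_corpus(n_docs=5, n_words_per_doc=20, n_topics=3, vocab_size=50)
```

Inference reverses this process: given only the observed words, it recovers the latent topic assignments and distributions, which is exactly where the sampling-based strategies discussed in the lecture come in.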

About this Lecture

Number of Slides:  60
Duration:  60 - 90 minutes
Languages Available:  English
Last Updated: 

Request this Lecture

To request this particular lecture, please complete this online form.

Request a Tour

To request a tour with this speaker, please complete this online form.

All requests will be sent to ACM headquarters for review.