Multi-armed bandits in recommender systems

Speaker:  Georgia Koutrika – Athens, Greece
Topic(s):  Information Systems, Search, Information Retrieval, Database Systems, Data Mining, Data Science

Abstract

Traditional recommender systems can provide meaningful recommendations at an individual level by leveraging users' interests as demonstrated by their past activity. However, in many web-based scenarios (e.g., filtering news articles or displaying advertisements), the content universe undergoes frequent changes, and a significant share of visitors are likely to be entirely new, with no historical consumption record. In such highly dynamic recommendation domains, it is essential for the recommendation method to adapt to the shifting preference patterns of users and the evolving space of items. This is a challenge, since most recommendation models are designed to be updated slowly or offline. Exploration-exploitation methods, also known as multi-armed bandits, have been shown to be an excellent solution. In this talk, I will present several representative algorithms (context-free and contextual bandits) and describe how they are applied in recommender systems: for popularity ranking (to balance exposure of new items against established winners), for model-based collaborative filtering, and as dueling bandits (to efficiently compare multiple recommendation methods).
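
To illustrate the exploration-exploitation idea the abstract refers to, the short Python sketch below shows a context-free epsilon-greedy bandit applied to popularity ranking, where each arm is an item and the reward is a click. This is a minimal illustrative sketch, not material from the lecture itself; the item names, the epsilon value, and the simulated click probabilities are hypothetical and chosen only for demonstration.

    # Minimal, illustrative epsilon-greedy bandit for popularity ranking
    # (hypothetical example, not from the lecture). Each arm is an item,
    # rewards are clicks, and a small exploration rate keeps newly added
    # items from being starved of exposure by established winners.
    import random

    class EpsilonGreedyRanker:
        def __init__(self, epsilon=0.1):
            self.epsilon = epsilon   # probability of showing a random item (exploration)
            self.counts = {}         # item -> number of times shown
            self.rewards = {}        # item -> cumulative reward (e.g., clicks)

        def add_item(self, item):
            # New items start with no history; exploration gives them exposure.
            self.counts.setdefault(item, 0)
            self.rewards.setdefault(item, 0.0)

        def select(self):
            # Explore with probability epsilon, otherwise exploit the item
            # with the best empirical click-through rate so far.
            if random.random() < self.epsilon:
                return random.choice(list(self.counts))
            return max(self.counts,
                       key=lambda i: self.rewards[i] / max(self.counts[i], 1))

        def update(self, item, reward):
            # Record the outcome (1.0 for a click, 0.0 otherwise).
            self.counts[item] += 1
            self.rewards[item] += reward

    if __name__ == "__main__":
        # Hypothetical items with unknown true click probabilities.
        true_ctr = {"old_winner": 0.05, "new_item_a": 0.08, "new_item_b": 0.02}
        ranker = EpsilonGreedyRanker(epsilon=0.1)
        for item in true_ctr:
            ranker.add_item(item)

        for _ in range(10_000):
            item = ranker.select()
            clicked = 1.0 if random.random() < true_ctr[item] else 0.0
            ranker.update(item, clicked)

        for item in true_ctr:
            shown = ranker.counts[item]
            ctr = ranker.rewards[item] / max(shown, 1)
            print(f"{item}: shown {shown} times, empirical CTR {ctr:.3f}")

With epsilon set to 0.1, roughly ten percent of impressions are spent exploring, which is what lets a newly added item accumulate enough feedback to overtake an old winner while the empirically best item still receives most of the traffic.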

About this Lecture

Number of Slides:  60 - 100
Duration:  60 - 90 minutes
Languages Available:  English, Greek
Last Updated: 

Request this Lecture

To request this particular lecture, please complete this online form.

Request a Tour

To request a tour with this speaker, please complete this online form.

All requests will be sent to ACM headquarters for review.