Explainability and Debiasing by Design in Recommender Systems
Speaker: Olfa Nasraoui – Louisville, KY, United States
Topic(s): Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing
Abstract
At its core, AI is enabled by advanced Machine Learning (ML) models that are increasingly used to support decision making in many sectors, ranging from e-commerce to health, education, justice, and criminal investigation. These algorithmic models are therefore starting to interact directly with, and affect, the daily decisions of more and more human beings. In particular, many are black-box models that make predictions without offering the user any justification. Without a mechanism that allows humans to understand and question the reasons behind them, black-box predictions lack justifiability and transparency; moreover, they cannot be scrutinized for possible mistakes and biases. Designing explainable machine learning models that convey the reasoning behind their predictions is therefore of great importance. Yet one main challenge in designing ML models is mitigating the trade-off between an explainable technique with moderate prediction accuracy and a more accurate technique whose predictions are not explainable. This talk will focus on recommender systems, a family of Machine Learning models that interacts closely with humans, and will present recent research at the Knowledge Discovery & Web Mining Lab on building explainability, and then debiasing, into a selection of state-of-the-art black-box recommender systems.
First, we make the case for explainability by design as an attempt to fulfill the user's need for, and definition of, an explanation in a particular domain and context. While designing explanations and explainability, we also raise the need for fair explanations, since an explanation can itself be prone to bias and unfairness. Motivations for explainability by design include:
(1) the convenience of letting humans design the explainability and fairness criteria to match their own notion of what is explainable, even when that notion is subjective and complex to formulate;
(2) the ability to formulate the machine learning problem so that it satisfies diverse desiderata, including the traditional learning task and the newly designed explainability and fairness criteria;
(3) the ability to verify that a learned model satisfies the criteria that have been designed; and
(4) the flexibility to embed explainability by design into various machine learning mechanisms, for instance by modifying or regularizing the loss function.
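To make the loss-regularization idea concrete, the following is a minimal illustrative sketch (not the speaker's actual implementation) of a matrix factorization recommender whose loss is augmented with an explainability term: for each observed rating, a penalty lam/2 * W[u, i] * ||P[u] - Q[i]||^2 pulls the user and item factors together when the pair is deemed explainable. The explainability weights W, the toy ratings, and all hyperparameters here are assumptions chosen only for illustration.

```python
import numpy as np

def emf_train(R, W, k=2, lr=0.01, beta=0.02, lam=0.1, epochs=200, seed=0):
    """Matrix factorization with an explainability regularizer (sketch).

    R : (n_users, n_items) rating matrix, 0 marks an unobserved entry.
    W : (n_users, n_items) explainability scores in [0, 1]; W[u, i] is high
        when item i is considered explainable to user u.
    Per observed rating, the loss is
        0.5 * err^2                       (accuracy term)
      + beta/2 * (||P[u]||^2 + ||Q[i]||^2)  (usual L2 regularization)
      + lam/2 * W[u, i] * ||P[u] - Q[i]||^2 (explainability term)
    and we take plain SGD steps on its gradient.
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(n_items, k))
    for _ in range(epochs):
        for u, i in zip(*np.nonzero(R)):
            err = R[u, i] - P[u] @ Q[i]
            grad_p = -err * Q[i] + beta * P[u] + lam * W[u, i] * (P[u] - Q[i])
            grad_q = -err * P[u] + beta * Q[i] - lam * W[u, i] * (P[u] - Q[i])
            P[u] -= lr * grad_p
            Q[i] -= lr * grad_q
    return P, Q

# Toy data: 3 users x 3 items, with hypothetical explainability weights.
R = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 1.0],
              [0.0, 2.0, 5.0]])
W = np.array([[1.0, 0.5, 0.0],
              [0.8, 0.0, 0.2],
              [0.0, 0.4, 1.0]])
P, Q = emf_train(R, W)
R_hat = P @ Q.T  # predicted ratings for all user-item pairs
```

Because the explainability criterion enters only through the loss, the same training loop, verification step, and gradient machinery apply unchanged; swapping in a different designed criterion only changes the penalty term.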
About this Lecture
Number of Slides: 50
Duration: 50 minutes
Languages Available: English
Last Updated:
Request this Lecture
To request this particular lecture, please complete this online form.
Request a Tour
To request a tour with this speaker, please complete this online form.
All requests will be sent to ACM headquarters for review.