How People Perceive AI - Trust and Explanation

Speaker:  Pearl Pu – Preverenges, Switzerland
Topic(s):  Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing

Abstract

Trust is believed to be a central antecedent of any relationship: personal, familial, business, and organizational. Trust is difficult to build but easy to break. As AI becomes increasingly important in our daily lives, we need to address the question of how AI is perceived and whether it can be trusted. In this talk, I will show some of the work my students and I have done towards explainable AI, or XAI. Using information visualization techniques, we aimed to make the inner logic of AI more accessible and scrutable. The idea is that if something is scrutable, then it is more accountable, more transparent, and thus more trustworthy. In developing these visualization systems, we have identified a set of general design guidelines that can be applied to other systems. We will show and discuss what makes a good explanation.

About this Lecture

Number of Slides:  45
Duration:  60 minutes
Languages Available:  English
Last Updated: 

Request this Lecture

To request this particular lecture, please complete this online form.

Request a Tour

To request a tour with this speaker, please complete this online form.

All requests will be sent to ACM headquarters for review.