Abstract

Aimed at non-experts, this talk introduces how state-of-the-art AI systems can be deceived. In particular, it focuses on visual models that recognize images with accuracies comparable to humans. The talk is intended to promote awareness of the risks of relying blindly on AI systems: it shows how apparently clean inputs can be manipulated by an attacker to achieve a desired outcome. The discussed results draw on findings from researchers around the globe, including techniques developed by the speaker at the University of Western Australia. Throughout, particular attention is given to the fact that the audience are non-experts in AI.
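As a minimal sketch of the idea the abstract describes (not taken from the talk itself), the toy example below shows how an imperceptibly small, carefully directed change to an input can flip a classifier's decision. The "model" here is a hypothetical linear classifier; real attacks such as the fast gradient sign method apply the same gradient-sign principle to deep networks.

```python
# Toy illustration of an adversarial perturbation (assumed example, not the
# speaker's method): a tiny per-pixel change flips the model's prediction.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)      # weights of a toy linear classifier
x = -0.1 * np.sign(w)         # a "clean" input the model labels "dog"

def predict(v):
    # Sign of the linear score decides the class.
    return "cat" if w @ v > 0 else "dog"

# Gradient-sign step: nudge every pixel slightly in the direction that
# increases the score, bounded by a small budget eps.
eps = 0.2
x_adv = x + eps * np.sign(w)  # each pixel changes by at most eps

print(predict(x))             # dog
print(predict(x_adv))         # cat  -- same image to a human, new label
```

The per-pixel change is bounded by `eps`, so the adversarial input looks essentially identical to the original, yet the classification flips.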

About this Lecture

Number of Slides:  23
Duration:  20 minutes
Languages Available:  English
Last Updated: 

Request this Lecture

To request this particular lecture, please complete this online form.

Request a Tour

To request a tour with this speaker, please complete this online form.

All requests will be sent to ACM headquarters for review.