Adversarial attacks on deep learning: fooling and beyond
Speaker: Naveed Akhtar – Perth, WA, Australia
Topic(s): Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing
Abstract
Deep learning is a key technology in Artificial Intelligence. It allows us to learn complex mathematical functions directly from data. Given an input, these functions can predict outputs, often with accuracy beyond human abilities. The capacity of deep learning to solve complex predictive tasks has made it the top choice in many security-critical applications, e.g., face recognition, automated surveillance, smart check-ins, and Face ID. However, it has recently been discovered that this technology is susceptible to adversarial manipulation. It is possible to embed human-imperceptible signals in inputs that can completely alter model outputs. This talk focuses on adversarial attacks on deep learning and their defenses. In particular, it introduces the popular first-generation attacks in the domain of computer vision and also discusses an example defense technique. Finally, it gives a brief overview of potential applications of adversarial attacks in terms of explaining black-box deep learning models.
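The attack mechanism the abstract alludes to can be illustrated with the fast gradient sign method (FGSM), one of the first-generation attacks the talk covers: the input is nudged in the direction of the sign of the loss gradient. The sketch below is not from the talk; it uses a toy NumPy logistic-regression "model" with illustrative values, and a large perturbation budget so the flip is visible (real attacks use far smaller, imperceptible perturbations on deep networks):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM on a logistic-regression model:
    move x by eps in the sign of the loss gradient w.r.t. the input."""
    p = sigmoid(w @ x + b)   # model's predicted probability of class 1
    grad_x = (p - y) * w     # d(cross-entropy loss)/dx for this model
    return x + eps * np.sign(grad_x)

# Toy weights and an input the model correctly assigns to class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.9)
print(sigmoid(w @ x + b) > 0.5)      # original prediction: class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # prediction on perturbed input: flipped
```

The same one-line update, applied with the gradient of a deep network's loss with respect to its input pixels, is what makes visually unchanged images misclassified.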
About this Lecture
Number of Slides: 46
Duration: 35 minutes
Languages Available: English
Request this Lecture
To request this particular lecture, please complete this online form.
Request a Tour
To request a tour with this speaker, please complete this online form.
All requests will be sent to ACM headquarters for review.