Threat of Adversarial Attacks on Deep Learning in Computer Vision
Speaker: Ajmal Saeed Mian – Crawley, WA, Australia
Topic(s): Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing
Abstract
Deep learning is at the heart of the current rise of artificial intelligence. However, deep models are vulnerable to adversarial attacks: subtle perturbations to inputs that cause the model to make incorrect decisions, often with high confidence. In this talk, I will give a brief introduction to popular methods for generating adversarial perturbations, and I will discuss early defence mechanisms against such attacks, including our own work. Following this, I will explain our Label Universal Targeted Attack (LUTA), which makes a deep model predict a specific target label, with high probability, for any sample of a given source class. This is achieved by stochastically maximizing the log-probability of the target label for the source class alone while suppressing leakage to the non-source classes. I will demonstrate the use of LUTA as a tool for deep model autopsy: LUTA produces interesting perturbation patterns that reveal the inner workings of deep models, and the training process itself exposes the feature embedding space. Finally, I will present our method for fooling action recognition models based on human skeleton joints. Besides being the first attack of its kind, an interesting aspect of our method is that it transfers well to the physical world. The above methods are published in CVPR 2018, CVPR 2020, PAMI 2021, and TNNLS 2020.
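To give a flavour of the targeted-attack idea described above, the following is a minimal, hypothetical sketch (not the LUTA algorithm itself) of maximizing the log-probability of a target label by gradient ascent on an input perturbation, here against a toy linear softmax classifier in NumPy; the function names and the epsilon clipping scheme are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def targeted_perturbation(x, W, target, steps=200, lr=0.5, eps=0.5):
    """Illustrative sketch: find a small perturbation `delta` that raises
    the log-probability of class `target` for input `x` under a linear
    softmax model with weight matrix W (classes x features).
    Not the LUTA method; a generic gradient-ascent toy example."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        p = softmax(W @ (x + delta))
        # gradient of log p[target] w.r.t. the input:
        # W[target] - sum_k p_k * W[k]
        grad = W[target] - p @ W
        delta += lr * grad
        # keep the perturbation subtle (bounded per coordinate)
        delta = np.clip(delta, -eps, eps)
    return delta
```

In this toy setting the ascent direction follows the exact gradient of the target log-probability; the talk's method instead operates stochastically over all samples of a source class while suppressing leakage to non-source classes.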
About this Lecture
Number of Slides: 58
Duration: 60 minutes
Languages Available: English
Request this Lecture
To request this particular lecture, please complete this online form.
Request a Tour
To request a tour with this speaker, please complete this online form.
All requests will be sent to ACM headquarters for review.