3D Deceptive Textures for Physical World Adversarial Attacks
Speaker: Ajmal Saeed Mian – Crawley, WA, Australia
Topic(s): Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing
Abstract
Deep learning offers state-of-the-art solutions for multiple computer vision tasks. However, deep neural models are vulnerable to slight input perturbations that can significantly change model predictions. The last few years have seen a plethora of contributions devising attack methods that compute such perturbations, or defense methods that remove those perturbations from inputs. This has created an arms race between attacks and defenses. Nevertheless, there are still under-explored avenues in this research direction. Among them is the estimation of adversarial textures for 3D models in an end-to-end optimization scheme that also works in the physical world. In this talk, I will introduce such a scheme to generate adversarial textures for 3D models that are highly transferable and invariant to different camera views and lighting conditions. Our method makes use of neural rendering with explicit control over the model texture and background. We ensure transferability of the adversarial textures by employing an ensemble of robust and non-robust models. Our technique uses 3D models as a proxy to simulate conditions closer to real life in a scene, compared to the conventional approach of using 2D images for adversarial attacks. With extensive experiments, we also provide insights about the requirements of sophisticated methods for adversarial textures, and about the transferability of such textures.
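The core idea described above can be sketched in miniature: optimize a texture perturbation by descending the model's true-class score, averaged over an ensemble of models and over randomized rendering conditions (here, a brightness scale standing in for lighting). This is a toy illustration, not the speaker's method: the linear "models", the scalar-lighting "renderer", and all parameter values are invented stand-ins for real differentiable rendering and deep classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # flattened texture size (hypothetical)

# Hypothetical ensemble of linear "models" standing in for robust/non-robust CNNs
ensemble = [rng.normal(size=D) for _ in range(3)]
texture = rng.normal(size=D)          # benign texture, flattened
delta = np.zeros(D)                   # adversarial texture perturbation

def render(tex, light):
    """Toy differentiable 'renderer': lighting modeled as a brightness scale."""
    return light * tex

def true_class_score(tex):
    # Average ensemble score for the true class; the attack drives this down.
    return np.mean([w @ tex for w in ensemble])

before = np.mean([true_class_score(render(texture + delta, l))
                  for l in (0.8, 1.0, 1.2)])

lr, eps = 0.1, 1.0
for _ in range(50):
    light = rng.uniform(0.8, 1.2)     # sample lighting per step (EOT-style)
    # For linear models, the gradient of the averaged score wrt delta
    # is light * mean(w); real methods backpropagate through the renderer.
    grad = light * np.mean(ensemble, axis=0)
    delta -= lr * np.sign(grad)       # descend the true-class score
    delta = np.clip(delta, -eps, eps) # keep the perturbation bounded

after = np.mean([true_class_score(render(texture + delta, l))
                 for l in (0.8, 1.0, 1.2)])
print(after < before)  # prints True: the score drops across lighting conditions
```

In the actual setting, the scalar renderer would be replaced by a neural renderer with controllable texture, background, camera pose, and lighting, and the linear ensemble by pretrained deep classifiers, so the same expectation-over-conditions optimization yields textures that transfer across views.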
About this Lecture
Number of Slides: 40
Duration: 40 minutes
Languages Available: English
Request this Lecture
To request this particular lecture, please complete this online form.
Request a Tour
To request a tour with this speaker, please complete this online form.
All requests will be sent to ACM headquarters for review.