Adversarial Machine Learning and Vehicular Networks: Strategies for Attack and Robust Defense
Speaker: Junaid Qadir – Doha, Qatar
Topic(s): Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing
Abstract
Machine learning (ML) has seen considerable recent success across a wide variety of applications and industries. Despite this success, researchers have shown that ML algorithms are easy to fool and are susceptible to a range of security attacks. In particular, many modern algorithms (especially those based on deep neural networks, or DNNs) are vulnerable to adversarial attacks, such as a targeted misclassification attack on a self-driving car that aims to misclassify traffic signs. The growing importance of ML and AI, and the broad uptake of these technologies in modern autonomous vehicles and vehicular networking, place a premium on building robust and secure AI and ML algorithms. Our experience with the Internet has shown that it is very difficult to retroactively embed security in systems that were not designed with security in mind from the outset. Although ML vulnerabilities in domains such as vision, image, and audio processing are now well known, little attention has been paid to adversarial attacks on the ML models used in vehicular networking. For the practical success of vehicular networking, it is extremely important that the underlying technology be robust to all kinds of potential problems, be they accidental, intentional, or adversarial.
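To make the notion of a targeted misclassification attack concrete, the sketch below shows a minimal FGSM-style perturbation in PyTorch. It is an illustrative assumption, not material from the lecture: the model, the input image, and the attacker's target class are placeholders (e.g., a traffic-sign classifier and the index of a "speed limit" class).

# Minimal sketch of a targeted FGSM-style adversarial attack, assuming a
# PyTorch classifier; `model`, `image`, and `target_class` are hypothetical inputs.
import torch
import torch.nn.functional as F

def targeted_fgsm(model, image, target_class, epsilon=0.03):
    """Perturb `image` so the model is nudged toward `target_class`.

    model:        a torch.nn.Module returning class logits
    image:        tensor of shape (1, C, H, W), pixel values in [0, 1]
    target_class: integer index of the class the attacker wants predicted
    epsilon:      maximum per-pixel perturbation (L-infinity budget)
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    # Loss with respect to the attacker's chosen target class.
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    loss.backward()
    # Step *against* the gradient to decrease the target-class loss.
    adv_image = image - epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()

Under these assumptions, an attacker would call targeted_fgsm(sign_classifier, stop_sign_image, target_class=speed_limit_idx) to bias a stop-sign image toward a speed-limit prediction while keeping the perturbation visually imperceptible.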
About this Lecture
Number of Slides: 100
Duration: 210 minutes
Languages Available: English