Secure Privacy-Preserving Machine Learning
Speaker: Daniel Takabi – Atlanta, GA, United States
Topic(s): Security and Privacy
Abstract
Deep learning with neural networks has become a highly popular machine learning method due to recent breakthroughs in computer vision, speech recognition, and other areas. However, deep learning algorithms require access to raw data, which is often privacy sensitive. At the same time, recent work has demonstrated that deep learning systems are fragile and can be easily deceived. For example, adversarial perturbations, often invisible to the human eye, can be added to an image to cause a deep neural network to misclassify it; these attacks are effective across different application domains and neural network architectures. This talk will give an overview of a novel combination of homomorphic encryption techniques and machine learning that enables secure, privacy-preserving deep learning. It will briefly discuss the theoretical foundations, describe the proposed approach, and then demonstrate its applicability with empirical results. Experimental results show that it is feasible and practical to train neural networks on encrypted data and to make encrypted predictions. The talk will also discuss recent work on accelerating machine learning on encrypted data using a combination of Fully Homomorphic Encryption (FHE), Deep Neural Networks (DNNs), and Graphics Processing Units (GPUs). A discussion will be provided on how combining homomorphic encryption techniques with deep learning algorithms could potentially help defend against powerful adversarial attacks. Finally, challenges and future directions will be described.
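To make the idea of an encrypted prediction concrete, here is a minimal sketch using the open-source TenSEAL library (CKKS scheme). It illustrates the general technique only, not the speaker's implementation: the toy weights, the single linear layer, and the square activation (a common polynomial stand-in for ReLU, since FHE circuits can only evaluate additions and multiplications) are all assumptions for the example.

```python
# Minimal sketch: one encrypted linear layer plus a square activation.
# Model and data are illustrative, not from the talk.
import tenseal as ts

# CKKS context: leveled homomorphic encryption over approximate reals.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()  # needed for rotations inside matrix multiply

# Toy plaintext model: 4 inputs -> 2 outputs.
W = [[0.5, -0.1], [0.3, 0.8], [-0.7, 0.2], [0.1, 0.4]]  # 4x2 weight matrix
b = [0.05, -0.02]

x = [1.0, 2.0, 3.0, 4.0]            # client's private input
enc_x = ts.ckks_vector(context, x)  # encrypted on the client side

# The server evaluates the model directly on the ciphertext:
enc_out = enc_x.mm(W) + b   # encrypted linear layer
enc_out = enc_out.square()  # square() stands in for ReLU, which FHE
                            # cannot evaluate directly

print(enc_out.decrypt())    # only the secret-key holder can read this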
About this Lecture
Number of Slides: 55
Duration: 60 minutes
Languages Available: English
Last Updated: