Trustworthy Artificial Intelligence with Federated Learning
Speaker: Irwin King – Hong Kong, Hong Kong
Topic(s): Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing
Abstract
Artificial intelligence (AI) has quickly become an integral part of our daily lives, appearing in virtual assistants and autonomous vehicles. However, the widespread use of AI necessitates the establishment of trustworthy AI (TAI) systems. TAI systems need to be secure, robust, explainable, and unbiased. But how can we ensure these qualities in AI systems? One approach is through Federated Learning (FL). FL is a distributed learning approach that prioritizes privacy by training AI models on decentralized data sources, eliminating the need for centralized data collection. This approach maintains privacy protection while enhancing the accuracy of local AI models. However, Federated Learning is not impervious to attacks. In this presentation, we will explore the concept of TAI and the significance of FL in constructing such systems. We will discuss various attack techniques, including model poisoning and inference attacks, that can compromise the security of FL systems. Additionally, we will present defense techniques such as differential privacy and secure multi-party computation, which can help mitigate these attacks and enhance the trustworthiness of FL. Lastly, we will address the challenges we face in achieving this objective.
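To make the ideas in the abstract concrete, the following is a minimal sketch of Federated Averaging (FedAvg), the canonical FL training scheme, combined with a differential-privacy-style defense (gradient clipping plus Gaussian noise) of the kind the lecture discusses. The toy task, client data, and all hyperparameters are illustrative assumptions, not part of the lecture material.

```python
import numpy as np

# Illustrative sketch only: FedAvg on a toy linear-regression task,
# with per-client gradient clipping and Gaussian noise as a
# differential-privacy-style defense. Not the lecture's actual code.

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5, clip=1.0, noise_std=0.01):
    """One client's local training on its private data; data never leaves the client."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)        # MSE gradient
        norm = np.linalg.norm(grad)
        if norm > clip:                              # clip to bound sensitivity
            grad *= clip / norm
        grad += rng.normal(0.0, noise_std, grad.shape)  # DP-style noise
        w -= lr * grad
    return w

# Three clients, each holding a private shard drawn from the same true model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.05 * rng.normal(size=50)
    clients.append((X, y))

# Server loop: broadcast global weights, collect local models, average them.
w_global = np.zeros(2)
for _ in range(20):
    local_models = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_models, axis=0)         # FedAvg aggregation

print(w_global)  # approaches true_w without any client sharing raw data
```

Only model parameters cross the network, which is what preserves privacy; the clipping and noise limit how much any single client's data can be inferred from those parameters, at a small cost in accuracy.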
About this Lecture
Number of Slides: 60
Duration: 45 minutes
Languages Available: English
Last Updated:
Request this Lecture
To request this particular lecture, please complete this online form.
Request a Tour
To request a tour with this speaker, please complete this online form.
All requests will be sent to ACM headquarters for review.