Efficacy, Efficiency, and Security of Code LLMs: Promises and Perils
Speaker: David Lo – Singapore, Singapore
Topic(s): Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing, Software Engineering and Programming
Abstract
Researchers have explored methods to automate software engineering (ASE) tasks for decades. In recent years, we have been excited about the potential of code Large Language Models (code LLMs) for ASE tasks. This lecture will discuss the efficacy, efficiency, and security of code LLMs. Specifically, it will introduce 'VulMaster,' which doubles the efficacy of vulnerability repair, a crucial and challenging task for software engineers, through LLM collaboration and data-centric innovations. It will also discuss 'Avatar,' designed to improve the efficiency of code LLMs, reducing model size by 160× and significantly decreasing energy consumption (up to 184× less), carbon footprint (up to 157× less), and inference latency (up to 76× faster), with only a negligible loss in effectiveness (1.67% on average). Lastly, the lecture will present 'AFRAIDOOR,' a stealthy backdoor attack on code models that achieves a 7× higher success rate than state-of-the-art approaches even after defenses are applied. The lecture will conclude with a brief description of future work and open challenges towards fully realizing next-generation software engineering (a.k.a. "Software Engineering 2.0").
About this Lecture
Number of Slides: 60 - 80
Duration: 60 - 75 minutes
Languages Available: English
Last Updated: