Exploring Safety Risks in Large Language Models and Generative AI
Speaker: Pin-Yu Chen – White Plains, NY, United States
Topic(s): Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing, Security and Privacy, Information Systems, Search, Information Retrieval, Database Systems, Data Mining, Data Science
Abstract
Large language models (LLMs) and generative AI (GenAI) are at the forefront of current AI research and technology. With their rapidly increasing popularity and availability, challenges and concerns about their misuse and safety risks are becoming more prominent than ever. In this talk, I will provide new tools and insights to explore the safety and robustness risks associated with state-of-the-art LLMs and GenAI models. In particular, I will cover (i) safety risks in fine-tuning LLMs, (ii) backdoor analysis of text-to-image diffusion models, (iii) prompt engineering for safety debugging, and (iv) robust detection of AI-generated text from LLMs.
About this Lecture
Number of Slides: 30 - 50
Duration: 30 - 90 minutes
Languages Available: Chinese (Traditional)