Enhancing LLM performance with Cognitive Strategies
Speaker: Sumit Gulwani – Redmond, WA, United States
Topic(s): Human-Computer Interaction, Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing, Software Engineering and Programming, Society and the Computing Profession
Abstract
Large Language Models (LLMs) have emerged as powerful general-purpose tools capable of performing a wide variety of tasks. On their own, however, they are not very precise. The good news is that LLMs provide a very flexible surface for implementing a variety of cognitive strategies (the ones that underlie human learning, problem solving, and creativity), enabling the construction of more robust compound-AI systems. These strategies take several forms:
(a) Prompt-engineering techniques that frame the problem as associative thinking or analogical reasoning, leveraging smart example selection from an example bank or from the spatial/temporal context.
(b) Trial-and-error techniques that iteratively refine a candidate, or rank multiple candidates, generated by the LLM.
(c) Workflow-based agent architectures that leverage domain-specific expert strategies and conversational patterns for effective human-AI collaboration.
(d) Meta-reflection techniques that learn from LLM responses and conversations to improve prompts.
I will illustrate these concepts using a variety of tasks, including programming from input-output examples or natural-language specifications, software-engineering tasks such as code repair and editing, action prediction, and some fuzzy tasks: providing feedback in educational settings, evaluating conversations, and creating creative content.

About this Lecture
Number of Slides: ~75
Duration: 60 minutes
Languages Available: English
Last Updated:
Request this Lecture
To request this lecture, please complete the online request form.
Request a Tour
To request a tour with this speaker, please complete the online request form.
All requests will be sent to ACM headquarters for review.