Powering the Future of Efficient AI through Approximate and Analog In-Memory Computing Principles
Speaker: Kaoutar El Maghraoui – Yorktown Heights, NY, United States
Topic(s): Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing
Abstract
Artificial Intelligence has transformed nearly every business process and industry, from finance, healthcare, and energy to supply chain optimization, sales, marketing, and HR. Significant investments in AI have led to super-linear growth in AI data, models, and infrastructure capacity. We are also at an inflection point with the emergence of large-scale, generalizable, and adaptable foundation models. This self-accelerating growth increases the size and complexity of AI models, drives exascale computational demands, and greatly expands AI's carbon footprint. To address these challenges, a Cambrian explosion of innovative AI hardware-accelerator architectures optimized for deep learning and machine learning is under way across cloud and edge platforms. Purpose-built hardware shifts the traditional balances between cloud and edge, structured and unstructured data, and training and inference. This lecture uncovers the evolving landscape of specialized AI hardware. It highlights a suite of techniques and algorithms for designing and building efficient deep learning systems, such as approximate computing principles and analog non-von-Neumann approaches, that unlock exponential gains in AI computation, making AI faster, more efficient, and more sustainable.
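One approximate-computing technique commonly applied to deep learning is reduced-precision quantization: trading a small, bounded numerical error for large savings in memory and compute. The sketch below is illustrative only and is not taken from the lecture; it shows symmetric int8 quantization of a weight tensor, where the reconstruction error per element is bounded by half the quantization step.

```python
import numpy as np

def quantize_int8(x):
    """Uniformly quantize a float32 tensor to int8 (symmetric scheme)."""
    scale = np.max(np.abs(x)) / 127.0  # map the largest magnitude to +/-127
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from the int8 values."""
    return q.astype(np.float32) * scale

# A random matrix stands in for a trained weight layer (hypothetical data).
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_approx = dequantize(q, scale)
# Per-element error is bounded by scale / 2, the half-width of one step.
print(np.max(np.abs(w - w_approx)))
```

Storing `q` instead of `w` cuts memory 4x (int8 vs. float32), and integer matrix multiplies on the quantized values are far cheaper on specialized accelerators, which is one reason purpose-built AI hardware favors low-precision arithmetic.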
About this Lecture
Number of Slides: 55
Duration: 60 minutes
Languages Available: English
Last Updated:
Request this Lecture
To request this particular lecture, please complete this online form.
Request a Tour
To request a tour with this speaker, please complete this online form.
All requests will be sent to ACM headquarters for review.