Neuromorphic Computing and Deep Learning - Technology, Hardware and Implementation
Speaker: Hai Li – Durham, NC, United States
Topic(s): Computational Theory, Algorithms and Mathematics
Abstract
As big data processing becomes pervasive and ubiquitous in our lives, the desire for embedded-everywhere, human-centric information systems calls for an "intelligent" computing paradigm capable of handling large volumes of data through massively parallel operations under limited hardware and power resources. This demand, however, is unlikely to be met by traditional computer systems, whose performance is hindered by the growing gap between CPU and memory speeds as well as fast-growing power consumption. Inspired by the working mechanism of the human brain, a neuromorphic system naturally possesses a massively parallel architecture with closely coupled memory, offering a great opportunity to break the "memory wall" of the von Neumann architecture. In this talk, I will start with the expectations for memristor-based neuromorphic computing systems, followed by a discussion of the requirements and challenges in hardware design and system implementation. I will then present the latest research outcomes on hardware implementation optimization, reliability and robustness control schemes, and new training methodologies that take hardware constraints into consideration. Finally, I will discuss optimization on conventional platforms such as FPGAs.
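The abstract's "memory wall" point rests on a standard property of memristor crossbars: with weights stored as cross-point conductances, a matrix-vector multiply happens in place in the memory array itself (Ohm's law scales each current, Kirchhoff's current law sums them per column), rather than shuttling weights to a CPU. The following is a minimal numerical sketch of that ideal behavior; the names (`crossbar_mvm`, `G`, `v`) are illustrative assumptions, not drawn from the talk, and real devices add non-idealities the talk's reliability schemes address.

```python
import numpy as np

def crossbar_mvm(G, v):
    """Ideal memristor crossbar (illustrative sketch, not the speaker's model).

    G: cross-point conductances, shape (rows, cols) -- the stored weights.
    v: input voltages applied to the rows, one per row.
    Returns the column output currents i = G^T v, which the physical
    array produces in a single parallel analog step.
    """
    G = np.asarray(G, dtype=float)
    v = np.asarray(v, dtype=float)
    # Each current contribution is G[i, j] * v[i] (Ohm's law);
    # currents sum along each column wire (Kirchhoff's current law).
    return G.T @ v

# A 3x2 crossbar multiplying a 3-element input vector:
G = [[1.0, 0.5],
     [0.2, 0.3],
     [0.4, 0.1]]
v = [1.0, 2.0, 3.0]
print(crossbar_mvm(G, v))  # column currents: [2.6, 1.4]
```

Because the weights never leave the array, the O(rows × cols) data movement of a conventional load-compute-store loop disappears, which is the sense in which such systems sidestep the von Neumann memory wall.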
About this Lecture
Number of Slides: 82
Duration: 100 minutes
Languages Available: English
Request this Lecture
To request this particular lecture, please complete this online form.
Request a Tour
To request a tour with this speaker, please complete this online form.
All requests will be sent to ACM headquarters for review.