Hardware Acceleration for Big Data Era

Speaker:  Yu Wang – Beijing, China
Topic(s):  Architecture, Embedded Systems and Electronics, Robotics


Integrating more processing elements and memory is an important way to use the growing transistor budget, so that a single IC can provide more functions. However, mapping different applications onto multi-/many-core systems or directly onto transistors (via FPGA or ASIC), and then making this silicon work more efficiently, opens up research opportunities in application-specific hardware computing. The speaker will introduce designs focusing on basic key operations such as matrix operations and graph-theoretical algorithms. The research is categorized by application and computing framework: Search (in collaboration with Microsoft), Time Series Data Mining (in collaboration with IBM and Huawei), Vision-Related Acceleration and UAVs, and Programming Models and Platforms. Each application can be covered in detail, or the applications can be combined into a general introduction to all the accelerator designs. Domain accelerators and design frameworks that can be introduced include (1) acceleration of time series data mining by summarizing computation and data-dependency patterns and exploiting the gate-level fine-grained parallelism of FPGAs (used in the IBM cloud); (2) real-time, high-accuracy image processing systems such as stereo vision and vision-based localization; (3) the FPMR (FPGA-based MapReduce) and FPGP (FPGA-based Graph Processing) frameworks, which reduce the difficulty of programming FPGAs by providing a programming abstraction, a hardware architecture, and basic building blocks to developers; and (4) how to integrate FPGAs into clouds (used in the IBM cloud).
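To give a flavor of the programming abstraction that a framework like FPMR exposes, the sketch below shows the classic MapReduce contract in plain Python: the developer supplies only `map_fn` and `reduce_fn`, and the framework handles distribution and key grouping. This is an illustrative software analogy, not FPMR's actual API; in FPMR the mappers and reducers would be replicated hardware units on the FPGA rather than loop iterations.

```python
from collections import defaultdict

def map_reduce(records, map_fn, reduce_fn):
    """Minimal MapReduce skeleton (software analogy for an FPMR-style framework)."""
    # Map phase: each record is expanded into (key, value) pairs.
    # On an FPGA these mapper invocations could run as parallel hardware units.
    intermediate = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            intermediate[key].append(value)
    # Reduce phase: values sharing a key are combined by the user's reducer.
    return {key: reduce_fn(values) for key, values in intermediate.items()}

# Word count, the canonical MapReduce example.
docs = ["fpga accelerates big data", "big data needs fpga"]
counts = map_reduce(
    docs,
    map_fn=lambda doc: [(word, 1) for word in doc.split()],
    reduce_fn=sum,
)
# counts["fpga"] == 2, counts["accelerates"] == 1
```

The point of such an abstraction is that the same two user-supplied functions can be retargeted: a software runtime iterates over them, while a hardware framework instantiates them as pipelined processing elements.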

About this Lecture

Number of Slides:  100
Duration:  60 - 90 minutes
Languages Available:  English
Last Updated: 

Request this Lecture

To request this particular lecture, please complete this online form.

Request a Tour

To request a tour with this speaker, please complete this online form.

All requests will be sent to ACM headquarters for review.