Professor Zhangyang “Atlas” Wang is currently the Jack Kilby/Texas Instruments Endowed Assistant Professor in the Chandra Family Department of Electrical and Computer Engineering at The University of Texas at Austin. He is also a faculty member of the UT Computer Science Graduate Studies Committee (GSC) and the Oden Institute CSEM program. In a part-time role, he serves as the Director of AI Research & Technology for Picsart, developing next-generation AI-powered tools for visual creative editing. From 2021 to 2022, he held a visiting researcher position at Amazon Search. From 2017 to 2020, he was an Assistant Professor of Computer Science and Engineering at Texas A&M University. He received his Ph.D. in ECE from UIUC in 2016, advised by Professor Thomas S. Huang, and his B.E. in EEIS from USTC in 2012.
Prof. Wang has broad research interests spanning from the theory to the application aspects of machine learning (ML). At present, his core research mission is to leverage, understand, and expand the role of sparsity, from classical optimization to modern neural networks; its impact spans many important topics such as efficient training/inference/transfer (especially of large foundation models), robustness and trustworthiness, learning to optimize (L2O), generative AI, and graph learning. His research is gratefully supported by NSF, DARPA, ARL, ARO, IARPA, DOE, and dozens of industry and university grants. He is or has been an elected technical committee member of IEEE MLSP and IEEE CI, and an associate editor of IEEE TCSVT (receiving the 2020 Best Associate Editor Award); he regularly serves as an area chair, invited speaker, tutorial/workshop organizer, panelist, and reviewer. He is an ACM Distinguished Speaker and an IEEE Senior Member.
Prof. Wang has received many research awards and scholarships, including an NSF CAREER Award, an ARO Young Investigator Award, an IEEE AI's 10 To Watch Award, an INNS Aharon Katzir Young Investigator Award, an IBM Faculty Research Award, a J. P. Morgan Faculty Research Award, an Amazon Research Award, an Adobe Data Science Research Award, a Meta Reality Labs Research Award, and two Google TensorFlow Model Garden Awards. His team recently won the Best Paper Award at the inaugural Learning on Graphs (LoG) Conference 2022, and has won five research competition prizes at CVPR/ICCV/ECCV since 2018. He feels most proud of being surrounded by some of the world's most brilliant students: his current Ph.D. students include winners of five prestigious fellowships (IBM, Apple, Adobe, Qualcomm, and Snap), among many other honors.
To request a single lecture/event, click on the desired lecture and complete the Request Lecture Form.
Learning to Optimize: A Gentle Introduction
Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods, aiming at reducing the laborious iterations of hand...
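To make the L2O idea concrete, here is a minimal, hypothetical sketch (not from the lecture itself): rather than hand-tuning an optimizer, we treat one of its parameters — here just the step size — as learnable, and meta-train it over a distribution of simple quadratic tasks. All function names and constants are illustrative assumptions.

```python
import random

# Hypothetical minimal illustration of the L2O idea: learn an optimizer
# parameter (the step size) by meta-training over a distribution of
# quadratic tasks f(x) = 0.5 * a * x^2, with a sampled per task.

def inner_loss(step_size, a, x0=5.0, steps=10):
    """Run the parameterized optimizer on one task; return the final loss."""
    x = x0
    for _ in range(steps):
        grad = a * x              # gradient of 0.5 * a * x^2
        x = x - step_size * grad
    return 0.5 * a * x * x

def meta_train(n_iters=200, meta_lr=1e-4, seed=0):
    """Learn the step size by finite-difference meta-gradient descent."""
    rng = random.Random(seed)
    step_size = 0.01              # initial (deliberately poor) optimizer parameter
    eps = 1e-4
    for _ in range(n_iters):
        a = rng.uniform(0.5, 2.0)  # sample a task from the distribution
        # finite-difference estimate of d(meta-loss)/d(step_size)
        g = (inner_loss(step_size + eps, a)
             - inner_loss(step_size - eps, a)) / (2 * eps)
        step_size -= meta_lr * g
        # keep the learned step in a stable range for this task family
        step_size = min(max(step_size, 1e-3), 1.0)
    return step_size

learned = meta_train()
```

On this toy family, the meta-trained step size ends up far larger than the initial 0.01, so the learned optimizer reaches a much lower loss in the same ten inner steps; real L2O methods learn far richer update rules (e.g., recurrent networks mapping gradients to updates) in the same spirit.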
Sparse Neural Networks: From Practice to Theory
A sparse neural network (NN) has most of its parameters set to zero and is traditionally considered the product of NN compression (i.e., pruning). Yet recently,...
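As a concrete reference point for the pruning route mentioned in the abstract, here is a hypothetical sketch of magnitude pruning, the classic way to obtain a sparse network: zero out the smallest-magnitude weights. The function name and example values are illustrative assumptions.

```python
# Hypothetical sketch of magnitude pruning: set the smallest-magnitude
# weights to zero, keeping only the largest ones.

def prune_by_magnitude(weights, sparsity):
    """Return a copy of `weights` with the smallest-magnitude entries zeroed.

    sparsity: fraction of entries to zero out (e.g. 0.5 -> half pruned).
    Ties at the threshold magnitude may prune slightly more than requested.
    """
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # threshold = magnitude of the n_prune-th smallest entry
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
sparse_w = prune_by_magnitude(w, 0.5)
# -> [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]: the 3 largest magnitudes survive
```

In practice this is applied per-layer (or globally) to trained network tensors, often followed by fine-tuning; the lecture's point of departure is that sparsity can also arise and be exploited well beyond this post-hoc compression view.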
To request a tour with this speaker, please complete this online form.
If you are not requesting a tour, click on the desired lecture and complete the Request this Lecture form.
All requests will be sent to ACM headquarters for review.