Dr. Naveed Akhtar
Based in Perth, WA, Australia
Naveed Akhtar is with the Department of Computer Science & Software Engineering at The University of Western Australia. He is among Western Australia's (WA) prominent researchers in Artificial Intelligence, Deep Learning and Computer Vision. He was a finalist for WA's Early Career Scientist of the Year 2021, a top-10 young researcher in Formal Sciences according to the Universal Scientific Education and Research Network (USERN), and a top 2% scientist across all fields for the year 2020 according to research led by Stanford University. Dr. Akhtar is a recipient of a prestigious fellowship from the Australian Office of National Intelligence. He has also served as a co-Chief Investigator on multiple US Department of Defense DARPA research projects. He is an Associate Editor of IEEE Access and a Guest Editor for the Neural Computing and Applications and Remote Sensing journals. He serves as a reviewer and program committee member for over 20 reputed research venues, including Nature Machine Intelligence, IEEE Trans. Pattern Analysis and Machine Intelligence (TPAMI), the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), the IEEE Int. Conf. on Computer Vision (ICCV) and the AAAI Conf. on Artificial Intelligence. He has also served as an Area Chair for CVPR 2022, the highest h5-index publication venue in the broad field of Engineering and Computer Science. Dr. Akhtar's research outcomes regularly appear in the leading venues of his field, including TPAMI, TNNLS, IJCV, CVPR and ECCV. He has published over 50 research papers to date. His research interests include machine learning, computer vision, adversarial deep learning, hyperspectral imaging, point cloud analysis, human action recognition, and image and video captioning.
To request a single lecture/event, click on the desired lecture and complete the Request Lecture Form.
Adversarial attacks on deep learning: fooling and beyond
Deep learning is a key technology in Artificial Intelligence. It allows us to learn complex mathematical functions directly from data. Given an input, these functions can predict outputs often...
Aimed at non-experts in the field, this talk introduces how it is possible to deceive state-of-the-art AI systems. In particular, the talk focuses on visual models that are able to look at...
Explaining Deep Learning with Adversarial Attacks
Deep visual models are susceptible to adversarial perturbations of their inputs. Although these signals are carefully crafted, they still appear as noise-like patterns to humans. This observation has led...
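The "carefully crafted" signals mentioned above are typically computed by following the gradient of the model's loss with respect to its input. A minimal sketch of one common recipe, the Fast Gradient Sign Method (FGSM), is shown below; the tiny logistic model and its random weights are illustrative placeholders standing in for a deep visual model, not part of the talk's material.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Perturb input x to increase the cross-entropy loss for true label y.

    For a logistic model p = sigmoid(w.x + b), the gradient of the
    cross-entropy loss with respect to x is (p - y) * w. FGSM takes a
    uniform-magnitude step of size eps in the sign of that gradient.
    """
    p = sigmoid(np.dot(w, x) + b)      # model's confidence in class 1
    grad_x = (p - y) * w               # d(loss)/dx for cross-entropy loss
    return x + eps * np.sign(grad_x)   # bounded, noise-like perturbation

# Toy setup: an 8-dimensional "image" and a random linear model (placeholders).
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = rng.normal(size=8)
y = 1.0                                # assume the true label is class 1

x_adv = fgsm_perturb(x, w, b, y, eps=0.3)

# Each feature changes by at most eps, yet the prediction for the true
# class drops — the perturbation looks like small noise but fools the model.
print(sigmoid(np.dot(w, x_adv) + b) < sigmoid(np.dot(w, x) + b))  # True
```

The key property, and the reason such perturbations look like noise, is that every feature moves by the same small amount `eps`; the attack's power comes entirely from choosing the sign of each move to ascend the loss.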