Perceptually inspired deep learning: a low-dimensional visual understanding of deep learning's learning strategy

Speaker:  Shan Suthaharan – Greensboro, NC, United States
Topic(s):  Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing

Abstract

This lecture discusses the modern deep learning techniques called no-drop, dropout, and dropconnect in detail, using programming examples that make the approaches easy to understand. Stochastic gradient descent is one of the key approaches by which deep learning learns patterns; hence it is presented with simple iterative examples. Mathematical differentiation of matrices is also required, so simple examples are provided to show how it applies to deep learning. Deep learning is a parametrized technique modeled with two parameters, often called hyperparameters, and their initial values can significantly affect the resulting models. The lecture therefore presents a simple approach that uses perceptual weights to enhance classification accuracy and improve computing performance: it incorporates edge-sharpening filters and their frequency responses into the classifier and connector parameters of the deep learning model. For this reason it is called the perceptually inspired deep learning framework; it preserves class characteristics and regularizes the deep learning model parameters. The talk covers all of these approaches and highlights their importance.
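
For a concrete picture of the techniques named above, the following is a minimal NumPy sketch, not the speaker's lecture code: all function names, variable names, and the toy data are illustrative assumptions. It contrasts the no-drop, dropout, and dropconnect forward passes of a single fully connected layer and then performs one stochastic gradient descent update, with the gradient written as a simple matrix derivative.

import numpy as np

rng = np.random.default_rng(0)

def forward(x, W, mode="no-drop", p=0.5):
    """One linear layer under the chosen masking scheme (illustrative only)."""
    if mode == "dropout":
        # Dropout: randomly zero whole output activations, rescale by 1/(1-p).
        mask = rng.random(W.shape[1]) > p
        return (x @ W) * mask / (1.0 - p)
    if mode == "dropconnect":
        # DropConnect: randomly zero individual weight entries, rescale by 1/(1-p).
        mask = rng.random(W.shape) > p
        return x @ (W * mask) / (1.0 - p)
    # No-drop: the plain fully connected layer.
    return x @ W

# Toy data: 4 examples, 3 input features, 2 outputs (made up for this sketch).
x = rng.standard_normal((4, 3))
y = rng.standard_normal((4, 2))
W = rng.standard_normal((3, 2))

# One stochastic gradient descent step on a half mean-squared-error loss,
# using the no-drop forward pass; grad is the matrix derivative dL/dW.
lr = 0.1
pred = forward(x, W, mode="no-drop")
grad = x.T @ (pred - y) / x.shape[0]
W = W - lr * grad

for mode in ("no-drop", "dropout", "dropconnect"):
    print(mode, forward(x, W, mode=mode)[0])

The only difference between the two regularizers in this sketch is what the random mask covers: dropout masks whole output activations, while dropconnect masks individual weight entries.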

About this Lecture

Number of Slides:  40
Duration:  80 minutes
Languages Available:  English
