MultiCon: A Multi-Contrastive Learning based Semi-Supervised Classification Framework and Its Applications Towards Covid19

Speaker:  Latifur Rahman Khan – Plano, TX, United States
Topic(s):  Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing

Abstract

Deep neural networks (DNNs) require large numbers of annotations, which can be expensive and cumbersome to obtain. Over the years, various approaches have been proposed to reduce the annotation cost of training DNNs. Semi-Supervised Learning (SSL), which leverages unlabeled instances to improve model performance, is one such solution and has attracted increasing attention in recent times. In this work, our main insight is that semi-supervised learning can benefit from recently proposed unsupervised contrastive learning approaches, which aim to learn representations in which positives are concentrated and negatives are separated in the unlabeled feature space. Herein, we introduce MultiCon, a semi-supervised learning paradigm that learns data-augmentation-invariant embeddings. In particular, we combine a multi-contrastive learning approach with a consistency regularization method, simultaneously maximizing the similarity between differently augmented views of the same sample and pushing the embeddings of different instances apart in the latent space. Experiments on multiple standard datasets, including COVID-19 chest X-ray images and CT scans, demonstrate that MultiCon achieves state-of-the-art performance across existing SSL benchmarks.
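The contrastive objective the abstract describes (pulling differently augmented views of the same sample together while pushing other instances apart) can be sketched as an NT-Xent-style loss. The following is an illustrative NumPy sketch, not the authors' MultiCon implementation; the function name and temperature value are assumptions:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss over two augmented views of the same batch.

    z1, z2: (N, d) embeddings of two augmentations of the same N samples.
    For each embedding, its augmented counterpart is the positive;
    all other 2N - 2 embeddings in the batch act as negatives.
    """
    # Stack both views and L2-normalize so dot products are cosine similarities.
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    n = z.shape[0]

    # Pairwise similarity matrix, scaled by temperature.
    sim = z @ z.T / temperature
    # Exclude self-similarity from the softmax denominator.
    np.fill_diagonal(sim, -np.inf)

    # Index of each sample's positive: row i pairs with row i + N/2 (mod N).
    pos = np.concatenate([np.arange(n // 2, n), np.arange(0, n // 2)])

    # Cross-entropy of each row's softmax against its positive.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), pos].mean()
```

Minimizing this loss concentrates the two views of each sample in embedding space while separating distinct instances; in the MultiCon framework this kind of term is combined with a consistency regularizer over the unlabeled data.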

About this Lecture

Number of Slides:  71
Duration:  60 minutes
Languages Available:  English

Request this Lecture

To request this particular lecture, please complete this online form.

Request a Tour

To request a tour with this speaker, please complete this online form.

All requests will be sent to ACM headquarters for review.