Deep Learning for Multiple Object Tracking
Speaker: Ajmal Saeed Mian – Crawley, WA, Australia
Topic(s): Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing
Abstract
Multiple Object Tracking (MOT) plays an important role in solving many fundamental problems in video analysis. MOT involves object detection followed by object association. While object detection made tremendous progress soon after the advent of deep learning, data association still relied on hand-crafted constraints to compute affinities between objects in different frames. We proposed the Deep Affinity Network (DAN), which harnesses the power of deep learning for data association by jointly modeling object appearances and their inter-frame affinities in an end-to-end fashion. DAN achieved state-of-the-art results on the MOT15, MOT17 and UA-DETRAC benchmarks. Moving further, we freed MOT from its reliance on external detectors and proposed the Deep Motion Modeling Network (DMM-Net), which performs object detection and tracking in one go. DMM-Net models object features over multiple frames and simultaneously infers object classes, visibility and motion parameters to output tracklets. DMM-Net achieved a PR-MOTA score of 12.80 at 120+ fps on the UA-DETRAC challenge, outperforming competitors while running orders of magnitude faster. Finally, we applied DAN to point cloud data and proposed PC-DAN, which was the top performer on the JackRabbot 3D Tracking Leaderboard. Our methods are published in PAMI, ECCV and CVPRW.
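To make the data-association step concrete, the sketch below shows the classical affinity-matrix pipeline that DAN replaces with a learned, end-to-end model: appearance features of detections in consecutive frames are compared, and an assignment is solved over the resulting affinity matrix. This is a minimal illustration, not the DAN architecture; the cosine affinity, the Hungarian solver, and the random placeholder features are assumptions introduced here for clarity.

```python
# Minimal sketch of appearance-based data association between two frames.
# NOTE: this is not DAN itself; it only illustrates the affinity-matrix idea
# that DAN learns end-to-end. Feature vectors are random placeholders.
import numpy as np
from scipy.optimize import linear_sum_assignment


def associate(feats_t, feats_t1):
    """Match detections of frame t to frame t+1 via cosine affinities."""
    a = feats_t / np.linalg.norm(feats_t, axis=1, keepdims=True)
    b = feats_t1 / np.linalg.norm(feats_t1, axis=1, keepdims=True)
    affinity = a @ b.T                              # higher = more similar
    rows, cols = linear_sum_assignment(-affinity)   # maximize total affinity
    return list(zip(rows, cols)), affinity


# Toy example: 3 detections in frame t, 4 in frame t+1, 128-D embeddings.
rng = np.random.default_rng(0)
matches, aff = associate(rng.normal(size=(3, 128)), rng.normal(size=(4, 128)))
print(matches)
```

In a hand-crafted pipeline the affinity would be built from heuristics (IoU, color histograms, motion gating); DAN instead learns both the appearance features and the inter-frame affinities jointly from data.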
About this Lecture
Number of Slides: 45
Duration: 40 minutes
Languages Available: English
Last Updated:
Request this Lecture
To request this particular lecture, please complete this online form.
Request a Tour
To request a tour with this speaker, please complete this online form.
All requests will be sent to ACM headquarters for review.