Mobile Visual Crowdsensing
Speaker: Qi Han – Golden, CO, United States
Topic(s): Networks and Communications
Abstract
Mobile visual crowdsensing (MVCS) uses the built-in cameras of smart devices to capture details of interesting objects or views in the real world in the form of pictures or videos. It has attracted considerable attention recently due to the rich information that images and videos can provide. MVCS is useful and in many cases superior to traditional visual sensing, which relies on deploying stationary cameras to capture images or videos. In this talk, I will first describe several building blocks for a cooperative visual sensing and sharing system: event localization, efficient picture stream segmentation and sub-event detection based on crowd-event interaction patterns, and picture selection for event highlights using the crowd-subevent entropy of pictures. I will then present how MVCS is used in CrowdNavi, a mobile app we developed for last-mile outdoor pedestrian navigation.
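To make the entropy-based selection idea concrete, here is a minimal sketch of how pictures might be ranked for highlight selection by the diversity of sub-events they cover. This is an illustrative assumption, not the speaker's actual algorithm: the `subevent` field, the greedy loop, and the data layout are all hypothetical, standing in for whatever crowd-subevent entropy measure the talk defines.

```python
import math
from collections import Counter

def subevent_entropy(labels):
    """Shannon entropy (bits) of a sub-event label distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def select_highlights(pictures, k):
    """Greedily pick k pictures whose sub-event labels maximize the
    entropy of the running selection, favoring diverse event coverage.
    `pictures` is a list of dicts with a hypothetical 'subevent' key."""
    selected = []
    remaining = list(pictures)
    for _ in range(min(k, len(remaining))):
        best = max(
            remaining,
            key=lambda p: subevent_entropy(
                [q["subevent"] for q in selected] + [p["subevent"]]
            ),
        )
        selected.append(best)
        remaining.remove(best)
    return selected
```

For example, given four pictures tagged with sub-events A, A, B, and C, selecting three highlights would favor one picture from each sub-event, since that maximizes the entropy of the covered labels.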
About this Lecture
Number of Slides: 40
Duration: 50 minutes
Languages Available: English
Request this Lecture
To request this particular lecture, please complete this online form.
Request a Tour
To request a tour with this speaker, please complete this online form.
All requests will be sent to ACM headquarters for review.