Deep Learning for Perception in Autonomous Vehicles
Posted on 16 May 2019
Date/Time: 8 May 2019 (Wednesday), 6.00pm
Venue: NUS Faculty of Engineering, Lecture Theatre 1 (LT1)
Autonomous vehicles are a breakthrough technology expected to make our lives safer and more comfortable. Data-driven deep learning approaches are essential for perceiving the real world, but current datasets lack the complexity and size needed to train such methods. Therefore, we present the nuScenes dataset, the largest existing multimodal dataset for autonomous driving. The goal of nuScenes is to establish a common benchmark for industry and academia. Furthermore, we present the state-of-the-art PointPillars method for 3D object detection from lidar. Finally, we discuss some of the open challenges in the field today.
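To give a flavour of the PointPillars idea, the sketch below illustrates its core "pillarization" step: lidar points are grouped into vertical columns (pillars) on an x-y grid, discretizing only the ground plane while keeping the full height of each point. This is a minimal illustrative sketch with synthetic points and hypothetical grid parameters, not the authors' implementation.

```python
import numpy as np

def pillarize(points, x_range=(0.0, 8.0), y_range=(0.0, 8.0), pillar_size=2.0):
    """Group lidar points of shape (N, 3) into vertical pillars on an x-y grid.

    Returns a dict mapping (ix, iy) grid cells to arrays of the points that
    fall inside that pillar. The z coordinate is untouched, so each pillar
    spans the full height of the scene -- the key idea behind PointPillars.
    (Grid ranges and pillar size here are illustrative, not the paper's.)
    """
    pillars = {}
    for p in points:
        x, y = p[0], p[1]
        if not (x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]):
            continue  # drop points that fall outside the grid
        ix = int((x - x_range[0]) // pillar_size)
        iy = int((y - y_range[0]) // pillar_size)
        pillars.setdefault((ix, iy), []).append(p)
    return {k: np.stack(v) for k, v in pillars.items()}

# Toy point cloud: three points, two of which share a pillar.
pts = np.array([[0.5, 0.5, -1.0],
                [1.5, 0.5,  0.3],
                [5.0, 5.0,  1.2]])
pillars = pillarize(pts)
print(len(pillars))            # 2 occupied pillars
print(pillars[(0, 0)].shape)   # (2, 3) -- two points in the first pillar
```

In the full method, the points in each pillar are then encoded by a small network into a fixed-length feature, and the resulting pseudo-image over the grid is processed by a standard 2D convolutional detector.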
About the Speaker
Holger Caesar is a Senior Research Scientist at nuTonomy, an Aptiv company. Working in Singapore on the Machine Learning Team, his job is to make autonomous vehicles perceive and understand their environment. At nuTonomy, he is the project lead for the nuScenes autonomous driving dataset and contributed to the state-of-the-art PointPillars method for object detection from lidar. He previously completed his PhD in Computer Vision under the supervision of Prof. Vittorio Ferrari at the University of Edinburgh and ETH Zurich. Holger released the COCO-Stuff dataset and several methods for fully and weakly supervised semantic segmentation. He also co-organized the COCO+Mapillary and WAD workshops at the ECCV and CVPR conferences.
Light dinner will be served.
Please register your attendance here by 2 May 2019.