Building a Better Self-Driving Car: Hardware, Software, and Knowledge
Lyft’s mission is to improve people’s lives with the world’s best transportation. Self-driving vehicles have the potential to deliver unprecedented improvements in safety and quality, at a price and convenience that challenge traditional models of vehicle ownership. A combination of hardware, software, and knowledge technologies is needed to build self-driving cars. In this talk, I’ll present the core problems in self-driving and how recent advances in computer vision, robotics, and machine learning are powering this revolution. The car is carefully designed with a variety of sensors that complement each other to address a wide range of driving scenarios. Sensor fusion brings all of these signals together into an interpretable AI engine comprising perception, prediction, planning, and controls. For example, deep learning models and large-scale machine learning have closed the gap between human and machine perception. In contrast, predicting the behavior of other humans and effectively planning and negotiating maneuvers remain hard problems. Combining AI technologies with deep knowledge about the real world is key to addressing them.
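The perception-prediction-planning-controls pipeline mentioned above can be sketched as a minimal loop. This is a hypothetical, simplified 1-D illustration of the modular architecture, not Lyft's actual system; all names, thresholds, and signatures here are invented for exposition.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch: fused sensor data flows through
# perception -> prediction -> planning -> controls.

@dataclass
class Detection:              # perception output: an object seen by the fused sensors
    track_id: int
    position: float           # position ahead of the ego vehicle, in meters

@dataclass
class PredictedTrack:         # prediction output: where each object is expected to be
    track_id: int
    future_position: float

def perceive(fused_sensor_points: List[float]) -> List[Detection]:
    """Turn fused sensor returns into object detections."""
    return [Detection(i, p) for i, p in enumerate(fused_sensor_points)]

def predict(detections: List[Detection], velocity: float, horizon_s: float) -> List[PredictedTrack]:
    """Constant-velocity prediction of each detected object over the horizon."""
    return [PredictedTrack(d.track_id, d.position + velocity * horizon_s)
            for d in detections]

def plan(tracks: List[PredictedTrack], ego_position: float, cruise_speed: float) -> float:
    """Pick a target speed: brake to zero if any predicted object is close ahead."""
    gaps = [t.future_position - ego_position
            for t in tracks if t.future_position > ego_position]
    if gaps and min(gaps) < 10.0:     # illustrative 10 m safety threshold
        return 0.0
    return cruise_speed

def control(current_speed: float, target_speed: float, gain: float = 0.5) -> float:
    """Proportional controller producing an acceleration command."""
    return gain * (target_speed - current_speed)

# One tick of the loop: a stationary obstacle 8 m ahead triggers braking.
detections = perceive([8.0])
tracks = predict(detections, velocity=0.0, horizon_s=1.0)
target = plan(tracks, ego_position=0.0, cruise_speed=15.0)
accel = control(current_speed=15.0, target_speed=target)
```

In a real stack each stage is a large learned or model-based system, but the interfaces between stages keep the engine interpretable, which is the point the abstract makes.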
Kumar Chellapilla leads knowledge efforts for self-driving at Level 5, Lyft’s self-driving division. His team brings knowledge to self-driving vehicles by using a large-scale fleet and modeling the world through a combination of HD maps and large-scale scenarios that capture human behavior. Prior to Level 5, he worked at Uber ATG, where he led teams working on offboard perception, machine learning, and machine teaching for autonomy and AV maps. Before self-driving, he applied machine learning techniques to improve search, recommendations, and advertising products at LinkedIn, Twitter, and Bing.
Kumar has a Ph.D. from the University of California, San Diego, where he worked on teaching computers to learn by themselves to play games like chess and checkers and to control trucks backing up to loading docks. After graduating, he spent five years at Microsoft Research working on computer vision and pattern recognition techniques for OCR, document processing, and text recognition in camera images.