Autonomous vehicles and super-perception: what happens next?

Autonomous vehicles need to be safe, but to gain public acceptance they will also need to make progress in busy traffic, and that means driving assertively. They need the ability to anticipate what is likely to happen next in a scene, and this remains a critical unsolved challenge. At FiveAI, we rely on novel AI and machine learning to accurately determine what is happening in a given scene, including micro-indicators such as gestures, pose and gaze. This super-perception helps us predict how agents are likely to interact and plan risk-aware routes. We'll explain the AV problem space and some of the techniques we apply in our quest to build a safe and convenient autonomous mobility service.

Required audience experience: Entry-level/popular knowledge of AI and machine learning concepts would be helpful but is not essential.

Objective of the talk: This talk will give attendees a working knowledge of one of the most difficult challenges in AV control: how a vehicle can anticipate what a road user is going to do next. FiveAI will explain the challenges and then show how AI and machine learning are helping to solve this problem.

Keywords: autonomous vehicles, AI, machine learning, super-perception, future mobility

Track 1
Location:
Date: October 10, 2017
Time: 10:45 am – 11:30 am
Speaker: Stan Boland, FiveAI