InCabin Europe | 7-9 October, 2025 | Palau de Congressos, Barcelona | #incabineurope

Computer Vision for Facial Analysis – Impairment Detection


Tutorial

In this workshop, you will learn how to use state-of-the-art face and voice expressive-behaviour AI to detect driver impairment and meet the emerging requirements of Euro-NCAP 2030, which is set to both widen the scope of driver impairment detection and make it more rigorous.

Detection of driving under the influence and of sudden sickness with advanced vision and/or biometric sensors will certainly be required. Fatigue detection will go well beyond the current drowsiness and eyes-closed states: it will become important to predict increasing driver fatigue well before someone falls asleep, so that interventions other than a complete takeover by an automated system are possible.

Driver engagement monitoring will have to be far smarter than it is today, and detection of social, emotional, and medical expressive behaviour may be required. The growing concern is that drivers may have their eyes on the road without actually being engaged in driving, which undermines purely gaze-based driver engagement assessment as an input to vehicle assistance systems.

In the longer term, systems that can estimate cognitive workload, detect stress, and identify cognitive distraction may be needed: for example, the ability to detect when the driver takes their mind off the driving task because of another mentally demanding task. That may well include social distraction caused by (young) passengers in the front or rear seats, or being overwhelmed by emotion.

The activities that must be detected are also moving towards the social, emotional, and medical: in particular, it will be required to detect talking, singing, and sneezing, among other things. Liveness detection will become a required element of occupancy detection.

In this workshop, I will first explain the relation between face and voice expressive behaviour and the impairments that must be detected. We will then process some in-cabin data using an online tool to see how expressive face and voice behaviour data can be interpreted to automatically detect such impairments.
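To give a flavour of the kind of interpretation step involved, here is a minimal sketch of fusing face and voice expressive-behaviour scores into an impairment flag. All signal names, weights, and thresholds are hypothetical illustrations for this sketch, not the workshop tool's actual API; a real system would learn these from data rather than hand-tune them.

```python
from dataclasses import dataclass

@dataclass
class FrameScores:
    """Hypothetical per-window expressive-behaviour scores, each in 0..1."""
    eye_closure: float   # fraction of window with eyes closed (PERCLOS-like)
    yawn_prob: float     # probability a yawn was detected
    speech_slur: float   # hypothetical voice-based slurring score

def impairment_score(frames: list[FrameScores]) -> float:
    """Average a weighted combination of face and voice cues over a window."""
    if not frames:
        return 0.0
    total = 0.0
    for f in frames:
        # Illustrative weights only; in practice these would be trained.
        total += 0.5 * f.eye_closure + 0.2 * f.yawn_prob + 0.3 * f.speech_slur
    return total / len(frames)

def is_impaired(frames: list[FrameScores], threshold: float = 0.4) -> bool:
    """Flag the window when the fused score crosses a (hypothetical) threshold."""
    return impairment_score(frames) >= threshold

# High eye closure combined with slurred speech is flagged as impairment.
window = [FrameScores(0.8, 0.6, 0.5), FrameScores(0.9, 0.4, 0.6)]
print(is_impaired(window))  # True
```

The point of the sketch is the fusion pattern itself: combining independent face and voice cues over a time window is more robust than thresholding any single cue frame by frame.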

