Advancements in AI for the Automotive Cockpit: Safety, Personalisation, and UX Innovations

Inside the Agenda: AI Innovations for Automotive Safety & UX


AI-Driven Safety Features: Enhancing Driver and Occupant Monitoring 

Driver and occupant monitoring systems leverage AI to assess driver behavior and occupant conditions in real time. Convolutional neural networks (CNNs) and transformer-based models analyse video streams from IR and RGB cameras to detect drowsiness, distraction, impairment, and abnormal behaviors. Beyond traditional classification approaches, self-supervised learning techniques are being introduced to enable AI models to improve performance without requiring extensive labeled datasets.
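
As a concrete illustration of the classification side of this pipeline, here is a minimal sketch in PyTorch of a frame-level driver-state classifier. Everything here is an assumption for illustration: the model name, the single-channel IR input, and the four-way label set are not taken from any specific production system.

```python
import torch
import torch.nn as nn

class DriverStateCNN(nn.Module):
    """Classifies one IR frame as alert / drowsy / distracted / impaired."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        # In a self-supervised setup, this trunk would first be pretrained on
        # unlabelled cabin video (e.g. contrastive learning) before the small
        # labelled classification head is attached.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),  # 1-channel IR input
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 32, 1, 1)
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = DriverStateCNN()
frame = torch.randn(1, 1, 224, 224)   # one normalised IR frame
probs = model(frame).softmax(dim=-1)  # per-state probabilities
```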

Automotive AI systems increasingly rely on multi-modal sensor fusion, integrating RGB, IR, radar, ultrasonic, and biometric sensors. AI-driven fusion techniques improve: 

  • Driver recognition and authentication: using cross-domain feature embeddings from face, voice, and gait analysis. 
  • Gesture-based controls: leveraging transformer-based vision models that segment and classify fine-grained hand movements in real time. 
  • Cross-modal reasoning: using self-attention mechanisms to dynamically weigh the reliability of individual sensor inputs, improving decision-making robustness in varying lighting and environmental conditions (see the sketch after this list).
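
A minimal sketch of the cross-modal attention idea, assuming each sensor stream has already been embedded to a common dimension; the modality list, dimensions, and class names are illustrative, not a production architecture:

```python
import torch
import torch.nn as nn

class SensorFusion(nn.Module):
    """Self-attention over per-modality embeddings; attention weights act as
    a learned, per-frame measure of how much to trust each sensor."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_modalities, dim) -- one embedding per sensor
        fused, weights = self.attn(tokens, tokens, tokens)
        # `weights` exposes the learned reliability of each modality,
        # e.g. down-weighting RGB at night in favour of IR and radar.
        return self.norm(fused + tokens).mean(dim=1)  # pooled fused feature

fusion = SensorFusion()
rgb, ir, radar, ultra = (torch.randn(1, 64) for _ in range(4))
fused = fusion(torch.stack([rgb, ir, radar, ultra], dim=1))
```

The attention weights returned by the layer are what make the fusion adaptive: the model can learn to lean on IR and radar tokens while discounting RGB when lighting degrades.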

Emerging techniques such as graph-based fusion models enable hierarchical decision-making, ensuring that the most relevant sensor modalities contribute to AI-driven conclusions based on situational context. Attention mechanisms in transformer models enhance feature extraction, enabling AI to differentiate between transient distractions (e.g., a driver glancing at the infotainment screen) and sustained inattentiveness. 
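
The transient-versus-sustained distinction ultimately reduces to temporal reasoning over per-frame signals. A deliberately simple sketch, assuming a hypothetical per-frame off-road-gaze flag and a 2-second threshold (both illustrative choices, not cited standards):

```python
from collections import deque

class InattentionMonitor:
    """Smooths per-frame gaze flags over a time window before alerting."""

    def __init__(self, fps: int = 30, threshold_s: float = 2.0):
        self.window = deque(maxlen=int(fps * threshold_s))

    def update(self, gaze_off_road: bool) -> bool:
        """Returns True only once the driver has looked away for the full window."""
        self.window.append(gaze_off_road)
        return len(self.window) == self.window.maxlen and all(self.window)

monitor = InattentionMonitor()
# A quick glance at the infotainment screen (10 frames) never triggers:
assert not any(monitor.update(True) for _ in range(10))
```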

Advanced fusion models incorporate physiological signals such as heart rate variability (HRV) and pupil dilation, integrating multi-modal biometric inputs for a more precise determination of cognitive load and fatigue levels. Predictive analytics powered by recurrent neural networks (RNNs) and long short-term memory (LSTM) networks allow for early detection of deteriorating driver states, enabling proactive intervention strategies. 
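
A minimal sketch of the sequence-modelling step, assuming a hypothetical three-feature physiological window (HRV, pupil diameter, blink rate) sampled at 1 Hz; the sizes and feature choices are illustrative:

```python
import torch
import torch.nn as nn

class FatiguePredictor(nn.Module):
    """LSTM over a window of physiological features, emitting a fatigue risk."""

    def __init__(self, num_features: int = 3, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(num_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, time, num_features) -- e.g. 60 s of 1 Hz samples
        _, (h_n, _) = self.lstm(seq)
        return torch.sigmoid(self.head(h_n[-1]))  # risk score in [0, 1]

window = torch.randn(1, 60, 3)     # one minute of HRV / pupil / blink features
risk = FatiguePredictor()(window)  # a rising score can trigger early intervention
```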

A key discussion in this area, particularly as we remain in L2(++) scenarios, is reducing cognitive load: how do we ensure that high-definition displays and advanced interfaces don't draw attention away from the road for too long? This topic will be explored in the Panel Discussion below ⬇

Panel Discussion: Minimalist Interior, Maximum Risk? Designing for Distraction-Free Driving

Discussing the challenges of balancing sleek, minimalist design with the need for safe and effective driver engagement, particularly in ADAS-equipped and partially automated vehicles, with:

  • Jorge Reynaga, Manager of Applications Engineering at Cirrus Logic Inc
  • Grygorii Maistrenko, Principal Engineer at Mitsubishi Electric Corp
  • Susan Shaw, Lead UX Research Engineer at Ford Motor Company
  • Fabiano Ruaro, Product Manager of Interior Monitoring at Bosch
  • Michael Nees, Associate Professor and Director of the Human Factors, Perception and Cognition Lab at Lafayette College

Fabiano Ruaro, Bosch, says: "We're already using AI algorithms to analyze driver behavior through cameras and other sensors in order to detect signs of drowsiness, distraction, or impairment, triggering alerts or taking corrective actions. In the future, I see AI moving into adaptive learning of driver behavior and preferences while also considering the exterior context, further improving both safety and comfort features."

Generative AI for Personalisation and Adaptive Interfaces

GenAI is redefining human-vehicle interaction by creating adaptive, context-aware interfaces. Large language models (LLMs) and multi-modal AI systems enable: 

  • Natural language interaction: Transformer-based voice assistants leveraging self-adapting language models trained on automotive-specific datasets, enabling contextual responses tailored to driving conditions (a sketch follows this list).
  • Personalised in-cabin environments: Reinforcement learning-driven AI systems optimizing HVAC, lighting, and seat positioning by continuously refining user preference profiles based on historical data and physiological signals such as skin temperature and facial expressions. 
  • Predictive assistance: Multi-task learning models anticipating driver needs by cross-referencing real-time environmental data (weather, traffic) with user habits, dynamically adjusting in-cabin systems to enhance comfort and convenience.
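
To make the first of these concrete, here is a hedged sketch of how a context-grounded assistant prompt might be assembled; the fields, system prompt, and the placeholder `generate` call are all assumptions, standing in for whatever on-board or cloud LLM a given stack uses:

```python
from dataclasses import dataclass

@dataclass
class CabinContext:
    speed_kmh: float
    weather: str
    driver_state: str   # e.g. supplied by the monitoring stack above

def build_prompt(context: CabinContext, user_utterance: str) -> str:
    """Serialises vehicle state into the prompt so replies fit the conditions."""
    return (
        "You are an in-vehicle assistant. Keep answers short while driving.\n"
        f"Context: speed={context.speed_kmh} km/h, weather={context.weather}, "
        f"driver_state={context.driver_state}\n"
        f"Driver: {user_utterance}"
    )

prompt = build_prompt(CabinContext(110, "heavy rain", "mildly fatigued"),
                      "Find me somewhere to stop for coffee.")
# reply = generate(prompt)  # placeholder for the actual LLM call
```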

Fine-tuning pre-trained generative models on in-cabin datasets improves personalisation, reducing latency associated with cloud-based processing while preserving user privacy through on-device inference.
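
A sketch of what the on-device step can look like in practice, using PyTorch dynamic quantization as one example route; the model itself is a stand-in, and real deployments would typically export to an embedded runtime (e.g. ONNX Runtime or TFLite) after a step like this:

```python
import torch
import torch.nn as nn

# A stand-in personalisation head: maps a behaviour embedding to cabin presets.
personalization_head = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 8)  # 8 hypothetical presets
)

# Dynamic int8 quantization shrinks the model and speeds up CPU inference,
# so it can run on the head unit without a cloud round-trip.
quantized = torch.ao.quantization.quantize_dynamic(
    personalization_head, {nn.Linear}, dtype=torch.qint8
)

features = torch.randn(1, 128)       # embedding of recent user behaviour
preset_scores = quantized(features)  # runs entirely on-device
```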

On Wednesday, June 11th, speakers from Hyundai Tech Centre, Mobis, and Bosch will explore HMI design for enhanced UX, with Bosch focusing specifically on GenAI. See below:

The Evolution of User Experience with a Focus on the Automotive Industry

David Mitropoulos-Rundus

Senior Engineer

Hyundai America Technical Center Inc.

Bringing AI Into In-Cabin Warning Systems to Improve the User Experience

Jaeyoung Ko

In-Cabin Monitoring System Engineer

Hyundai MOBIS

Revolutionising Smart Cockpit with Generative AI: Enhancing User Experience

Auston Payyappilly

Sr. Manager, Product Management

Bosch

Image Enhancement and AI-Based Perception 

One ongoing debate in the field is whether pre-processing techniques, such as image denoising and super-resolution, are necessary for AI-based perception. Traditional approaches leverage image enhancement pipelines to improve input quality before AI inference, while end-to-end learning methods rely on raw sensor data and neural network adaptability. 
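
The two camps can be expressed side by side. A minimal sketch with OpenCV, where `perception_model` is a placeholder callable for either ADAS or DMS inference and the denoising parameters are purely illustrative:

```python
import cv2
import numpy as np

def enhanced_pipeline(raw, perception_model):
    """Classical enhancement before inference."""
    denoised = cv2.fastNlMeansDenoising(raw, h=10)        # denoising step
    upscaled = cv2.resize(denoised, None, fx=2, fy=2,
                          interpolation=cv2.INTER_CUBIC)  # super-resolution stand-in
    return perception_model(upscaled)

def end_to_end_pipeline(raw, perception_model):
    """Raw frames in; the network itself must be robust to sensor noise."""
    return perception_model(raw)

raw_frame = (np.random.rand(480, 640) * 255).astype(np.uint8)  # dummy noisy frame
# Edge detection as a toy stand-in for a real perception model:
out_a = enhanced_pipeline(raw_frame, lambda img: cv2.Canny(img, 50, 150))
out_b = end_to_end_pipeline(raw_frame, lambda img: cv2.Canny(img, 50, 150))
```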

Prof. Valentina Donzella (University of Warwick) will address these considerations in an in-depth, expert tutorial session on Tuesday, June 10th.

*Exclusively for Full Pass Holders

Tutorial: Is Image Enhancement What We Need for AI-Based Perception?

Prof. Valentina Donzella, Professor at the University of Warwick, will present experimental results on the trade-offs between enhancement and end-to-end AI processing in an interactive workshop, discussing the role of image quality and enhancement for both ADAS and OMS/DMS functions.

The Next Step: Immersive Experiences in the Third Space

As AI capabilities expand, the automotive cockpit is evolving into an immersive digital space, transforming travel time into an interactive experience. Advanced augmented reality (AR) overlays, multi-sensory feedback, and AI-driven personalization are reshaping in-cabin entertainment, productivity, and communication. 

One of the most innovative applications is AI-powered AR gaming, where real-world driving environments dynamically integrate with digital elements. Systems like the Valeo Racer leverage sensor fusion and AI-driven perception models to process vehicle surroundings in real time, rendering context-aware gameplay without compromising safety. Gesture recognition, voice commands, and haptic feedback further enhance the interaction, creating a seamless, immersive experience. 
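
In the abstract, the pattern is a mapping from the car's existing perception output into game-world entities. A purely illustrative sketch, with hypothetical names throughout and no claim to reflect Valeo's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "car", "sign", "guardrail"
    distance_m: float

def to_game_entities(detections: list[Detection]) -> list[dict]:
    """Turns nearby real-world objects into AR obstacles or pickups."""
    entities = []
    for d in detections:
        if d.distance_m < 50:  # only render what passengers can actually see
            kind = "obstacle" if d.label == "car" else "pickup"
            entities.append({"kind": kind, "distance_m": d.distance_m})
    return entities

frame_detections = [Detection("car", 22.5), Detection("sign", 40.0)]
print(to_game_entities(frame_detections))
```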

Join Dirk Schulte’s session to learn more about the Valeo Racer. Take a look below:

Keynote: The Valeo Racer – Immersive In-Car Augmented Reality Live Gaming

This keynote session by Dirk Schulte, R&D Director of Advanced Engineering and Product Platforms at Valeo, will explore how Valeo leverages ADAS sensors, real-time perception, and machine learning to deliver an interactive, multi-player gaming experience for passengers—enhancing in-vehicle entertainment while reducing motion sickness.

Interested in exterior sensing technology?

With a pass to InCabin USA, you’ll also gain access to our co-located sister event, AutoSens. Explore the AutoSens Agenda here >>

Check out the Highlights from InCabin Europe 2024 ⬇