Inside the Agenda: AI Innovations for Automotive Safety & UX
AI-Driven Safety Features: Enhancing Driver and Occupant Monitoring
Driver and occupant monitoring systems leverage AI to assess driver behavior and occupant conditions in real time. Convolutional neural networks (CNNs) and transformer-based models analyze video streams from IR and RGB cameras to detect drowsiness, distraction, impairment, and abnormal behaviors. Beyond traditional classification approaches, self-supervised learning techniques are being introduced to enable AI models to improve performance without requiring extensive labeled datasets.
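To make the classification side concrete, here is a minimal PyTorch sketch of a per-frame driver-state classifier over single-channel IR input. The architecture, class labels, and input size are illustrative assumptions; production DMS models use far deeper backbones plus temporal aggregation across frames.

```python
import torch
import torch.nn as nn

class DrowsinessCNN(nn.Module):
    """Toy CNN head for per-frame driver-state classification.

    Classes (hypothetical): alert, drowsy, distracted, impaired.
    """
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),  # 1 channel: IR
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = DrowsinessCNN()
frame = torch.randn(1, 1, 224, 224)     # one normalized IR frame (assumed size)
probs = model(frame).softmax(dim=-1)    # per-class probabilities
print(probs)
```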
Automotive AI systems increasingly rely on multi-modal sensor fusion, integrating RGB, IR, radar, ultrasonic, and biometric sensors. AI-driven fusion techniques improve:
- Driver recognition and authentication: using cross-domain feature embeddings from face, voice, and gait analysis.
- Gesture-based controls: leveraging transformer-based vision models that segment and classify fine-grained hand movements in real time.
- Cross-modal reasoning: using self-attention mechanisms to dynamically weigh the reliability of individual sensor inputs, improving decision-making robustness in varying lighting and environmental conditions (see the sketch after this list).
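As a sketch of the cross-modal reasoning idea, the snippet below applies multi-head self-attention over one token per modality, so the fused representation can down-weight unreliable inputs (for example, an RGB camera at night). Modality names, feature dimensions, and the pooling step are illustrative assumptions, not any production architecture.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Weighs per-modality embeddings with self-attention so that
    less reliable sensor inputs contribute less to the fused output."""
    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.ModuleDict({          # per-modality projections (assumed dims)
            "rgb": nn.Linear(128, dim),
            "ir": nn.Linear(128, dim),
            "radar": nn.Linear(32, dim),
        })

    def forward(self, inputs: dict) -> torch.Tensor:
        # One token per modality: (batch, n_modalities, dim)
        tokens = torch.stack(
            [self.proj[name](feat) for name, feat in inputs.items()], dim=1
        )
        fused, _weights = self.attn(tokens, tokens, tokens)
        return fused.mean(dim=1)             # pooled joint representation

fusion = CrossModalFusion()
out = fusion({
    "rgb": torch.randn(2, 128),
    "ir": torch.randn(2, 128),
    "radar": torch.randn(2, 32),
})
print(out.shape)  # torch.Size([2, 64])
```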
Emerging techniques such as graph-based fusion models enable hierarchical decision-making, ensuring that the most relevant sensor modalities contribute to AI-driven conclusions based on situational context. Attention mechanisms in transformer models enhance feature extraction, enabling AI to differentiate between transient distractions (e.g., a driver glancing at the infotainment screen) and sustained inattentiveness.
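Once a gaze estimator provides per-frame off-road flags, the transient-vs-sustained distinction can be reduced to a simple temporal rule. The sketch below assumes such an upstream estimator exists; the two-second threshold is a hypothetical value for illustration, not a regulatory limit.

```python
def classify_inattention(off_road_flags, fps=30, glance_s=2.0):
    """Distinguish a transient glance from sustained inattentiveness.

    off_road_flags: per-frame booleans from an (assumed) gaze estimator.
    A continuous off-road run longer than `glance_s` seconds counts as
    sustained; anything shorter is treated as a transient glance.
    """
    limit = int(glance_s * fps)
    run = 0
    for frame, off_road in enumerate(off_road_flags):
        run = run + 1 if off_road else 0      # length of current off-road run
        if run > limit:
            return "sustained", frame         # frame where the threshold tripped
    return ("transient" if any(off_road_flags) else "attentive"), None

flags = [False] * 30 + [True] * 45 + [False] * 30   # a 1.5 s glance at 30 fps
print(classify_inattention(flags))                  # ('transient', None)
```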
Advanced fusion models incorporate physiological signals such as heart rate variability (HRV) and pupil dilation, integrating multi-modal biometric inputs for a more precise determination of cognitive load and fatigue levels. Predictive analytics powered by recurrent neural networks (RNNs) and long short-term memory (LSTM) networks allow for early detection of deteriorating driver states, enabling proactive intervention strategies.
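A minimal sketch of the LSTM-based early-detection idea: a recurrent model consumes a window of biometric samples and emits a fatigue probability. The feature set (HRV, pupil diameter, blink rate), window length, and single-score output are assumptions; a real system would calibrate per driver and fuse this with camera-based cues.

```python
import torch
import torch.nn as nn

class FatigueLSTM(nn.Module):
    """Predicts a fatigue score from a biometric time series."""
    def __init__(self, n_features: int = 3, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, (h, _) = self.lstm(x)              # final hidden state summarizes the window
        return torch.sigmoid(self.head(h[-1]))  # fatigue probability in [0, 1]

model = FatigueLSTM()
window = torch.randn(1, 300, 3)   # 300 timesteps of (HRV, pupil diameter, blink rate)
print(model(window))
```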
A key discussion in this area, particularly as we remain in L2(++) scenarios, is reducing cognitive load: how do we ensure that high-definition display screens and advanced interfaces don’t draw attention away from the road for too long? This topic will be explored in the Panel Discussion below ⬇
Panel Discussion: Minimalist Interior, Maximum Risk? Designing for Distraction-Free Driving
Wednesday 11th June  |  5:20pm EDT  |  InCabin Exhibition Stage
Discuss the challenges of balancing sleek, minimalist design with the need for safe and effective driver engagement, particularly in ADAS-equipped and partially automated vehicles, with:
- Jorge Reynaga, Manager of Applications Engineering at Cirrus Logic Inc
- Grygorii Maistrenko, Principal Engineer at Mitsubishi Electric Corp
- Susan Shaw, Lead UX Research Engineer at Ford Motor Company
- Fabiano Ruaro, Product Manager of Interior Monitoring at Bosch
- Michael Nees, Associate Professor and Director of the Human Factors, Perception and Cognition Lab at Lafayette College
Fabiano Ruaro, Bosch, says: “We’re already using AI algorithms to analyze driver behavior through cameras and other sensors in order to detect signs of drowsiness, distraction, or impairment, triggering alerts or taking corrective actions. In the future, I see AI moving into adaptive learning of driver behavior and preferences while also considering the exterior context, further improving both safety and comfort features.”
Generative AI for Personalization and Adaptive Interfaces
GenAI is redefining human-vehicle interaction by creating adaptive, context-aware interfaces. Large language models (LLMs) and multi-modal AI systems enable:
- Natural language interaction: Transformer-based voice assistants leveraging self-adapting language models trained on automotive-specific datasets, enabling contextual responses tailored to driving conditions.
- Personalized in-cabin environments: Reinforcement learning-driven AI systems optimizing HVAC, lighting, and seat positioning by continuously refining user preference profiles based on historical data and physiological signals such as skin temperature and facial expressions (a minimal sketch follows this list).
- Predictive assistance: Multi-task learning models anticipating driver needs by cross-referencing real-time environmental data (weather, traffic) with user habits, dynamically adjusting in-cabin systems to enhance comfort and convenience.
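The preference-refinement loop in the second bullet can be approximated far more simply than a full RL policy. The sketch below substitutes an exponential-moving-average update over accepted settings; the context keys, setting names, and blending factor are hypothetical stand-ins for a learned policy.

```python
from dataclasses import dataclass, field

@dataclass
class CabinPreferenceModel:
    """Toy preference refinement for in-cabin settings.

    An exponential moving average of accepted settings per context
    stands in for the reinforcement-learning policy described above.
    """
    alpha: float = 0.2                     # how quickly new observations dominate
    prefs: dict = field(default_factory=dict)

    def update(self, context: str, setting: str, value: float) -> None:
        """Blend an observed (driver-accepted) value into the stored preference."""
        key = (context, setting)
        old = self.prefs.get(key, value)
        self.prefs[key] = (1 - self.alpha) * old + self.alpha * value

    def suggest(self, context: str, setting: str, default: float) -> float:
        return self.prefs.get((context, setting), default)

model = CabinPreferenceModel()
model.update("cold_morning", "cabin_temp_c", 23.5)   # driver adjusted the temperature
model.update("cold_morning", "cabin_temp_c", 22.5)
print(model.suggest("cold_morning", "cabin_temp_c", default=21.0))  # 23.3
```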
Fine-tuning pre-trained generative models on in-cabin datasets improves personalization, reducing latency associated with cloud-based processing while preserving user privacy through on-device inference.
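One way to picture the on-device side: after fine-tuning, the model is compressed (for example via dynamic int8 quantization in PyTorch) so inference runs locally and no cabin data leaves the vehicle. The tiny stand-in network below is purely illustrative; it is not any vendor’s production model.

```python
import torch
import torch.nn as nn

# Stand-in for a compact assistant model fine-tuned on in-cabin data.
model = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 256))
model.eval()

# Dynamic int8 quantization shrinks the model and speeds up CPU inference,
# which is what makes no-cloud, in-vehicle deployment practical.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.inference_mode():
    out = quantized(torch.randn(1, 256))   # all inference stays on-device
print(out.shape)
```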
On Wednesday 11th June, speakers from Hyundai Tech Centre, Mobis, and Bosch will explore HMI design for enhanced UX, with Bosch focusing specifically on GenAI.
Image Enhancement and AI-Based Perception
One ongoing debate in the field is whether pre-processing techniques, such as image denoising and super-resolution, are necessary for AI-based perception. Traditional approaches leverage image enhancement pipelines to improve input quality before AI inference, while end-to-end learning methods rely on raw sensor data and neural network adaptability.
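The two camps are easy to prototype side by side. The sketch below implements a classical enhancement stage (non-local-means denoising plus CLAHE contrast stretching via OpenCV) that would sit in front of a perception network, versus feeding the raw frame straight to an end-to-end model. Parameter values are illustrative defaults, not tuned settings.

```python
import cv2
import numpy as np

def enhanced_pipeline(raw_bgr: np.ndarray) -> np.ndarray:
    """Classical pre-processing stage: denoise, then stretch contrast,
    before handing the frame to the perception network."""
    denoised = cv2.fastNlMeansDenoisingColored(raw_bgr, None, 10, 10, 7, 21)
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

frame = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)  # stand-in frame
enhanced = enhanced_pipeline(frame)   # option A: enhance, then run inference
# option B (end-to-end): feed `frame` directly to a network trained on raw data
```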
Prof. Valentina Donzella (University of Warwick) will address these considerations in an in-depth, expert tutorial session on Tuesday, June 10th.
*Exclusively for Full Pass Holders
Tutorial: Is Image Enhancement What We Need for AI-Based Perception?
Tuesday 10th June  |  2:00pm EDT  |  Room 140 C-D
Prof. Valentina Donzella, Professor at the University of Warwick, will present experimental results on the trade-offs between enhancement and end-to-end AI processing in an interactive workshop, discussing the role of image quality and enhancement for both ADAS and OMS/DMS functions.
The Next Step: Immersive Experiences in the Third Space
As AI capabilities expand, the automotive cockpit is evolving into an immersive digital space, transforming travel time into an interactive experience. Advanced augmented reality (AR) overlays, multi-sensory feedback, and AI-driven personalization are reshaping in-cabin entertainment, productivity, and communication.
One of the most innovative applications is AI-powered AR gaming, where real-world driving environments dynamically integrate with digital elements. Systems like the Valeo Racer leverage sensor fusion and AI-driven perception models to process vehicle surroundings in real time, rendering context-aware gameplay without compromising safety. Gesture recognition, voice commands, and haptic feedback further enhance the interaction, creating a seamless, immersive experience.
Join Dirk Schulte’s session to learn more about the Valeo Racer. Take a look below:
Keynote: The Valeo Racer – Immersive In-Car Augmented Reality Live Gaming
Wednesday 11th June  |  11:45am EDT  |  InCabin Exhibition Stage
This keynote session by Dirk Schulte, R&D Director of Advanced Engineering and Product Platforms at Valeo, will explore how Valeo leverages ADAS sensors, real-time perception, and machine learning to deliver an interactive, multi-player gaming experience for passengers—enhancing in-vehicle entertainment while reducing motion sickness.
Interested in exterior sensing technology?
With a pass to InCabin USA, you’ll also gain access to our co-located sister event, AutoSens. Explore the AutoSens Agenda here >>
