InCabin Europe will bring together academics, industry leaders and innovators to share knowledge and exchange ideas on six key topics and challenges facing the automotive industry.
Here we look at two of the tracks in advance of the event, considering some of the ideas delegates may discuss.
Elevating User Experience through HMI & Design
HMI design in an automotive cabin is one of the most challenging briefs for any designer, combining requirements for perceived quality, value for money, user diversity, and safety.
For most end users a vehicle is the second most expensive object they will buy in their lifetime, after their home and before their household appliances. Whether it’s a low-end or a prestige model, a vehicle is perceived as a high-cost item, and as such users expect high quality.
Competition with smartphones has further increased users’ expectations around in-cabin design and functionality. Consumers compare the functionality available in the latest £800 iPhone 15 with the technology in their car, which costs at least ten times more.
User diversity in-cabin is greater than for most consumer products. Automotive UI must be fashionable and responsive for young users while remaining clear and simple for older users; let’s not forget that we will all be older users in a few years’ time. Anyone who has watched an older relative struggle to migrate from a feature phone (or “brick”) to a smartphone will appreciate the far higher cognitive load of navigating a touchscreen UI compared with tactile buttons. As well as age diversity, language diversity impacts design: every symbol must be clear, and text must fit in the same space whether it’s in English, Русский, 普通话 or हिन्दी.
Safety is the design constraint auto makers think about most; they recognise that driving a car is one of the most complex tasks most people ever carry out. The rise in driver assistance features is intended to reduce the cognitive load of the driving task. At the same time, the explosion in the number of in-cabin features has added many more tasks competing for the driver’s finite cognitive capacity.
With the increasing number of technologies and features in cars, it’s essential to ensure that interfaces are safe, easy to use, and provide a great user experience. The “Mastering UX-HMI: Evaluation, Design & Development of Automotive Interfaces” tutorial at InCabin Europe will explore the balance between physical and digital controls in automotive cabins, focusing on benchmarking methods and design principles. The course will share practical insights into creating user-centric interfaces that enhance both safety and user experience.
Driver & Occupant Monitoring Systems (DMS & OMS), Large Language Models and Generative AI bring great opportunities to improve safety and enhance usability, but they also add challenges of their own to in-cabin UX & HMI design. These systems also increase the number of diversity factors that must be considered. Prior to DMS, diversity considerations focused on user behaviour and physical dimensions. Because DMS depend on correctly monitoring facial expressions, variations in hair length, skin tone, facial scarring, paralysis and use of makeup become additional diversity factors.
A major challenge with Driver Monitoring Systems is the integration of the sensors into the vehicle. The first challenge is the large variation in the physical size of drivers: a good sensor position for taller Europeans may not work for users in Asia[1]. Aesthetics are also a major consideration; auto makers strive for a clean, uniform A-surface within the cabin as part of the overall in-cabin UX. At InCabin Europe 2024 in Barcelona, ABI Research will present their findings on the physical integration of Driver Monitoring sensors in the cabin. As well as DMS sensor positioning, adaptive restraints must also adapt to the physical size of the driver, another topic discussed at InCabin Europe.
Driver Monitoring Systems entered the cabin as a safety sensor to detect drowsy drivers. Driver fatigue is a factor in 10–25% of all road crashes in the European Union[2]. As a result, the European Union made Driver Drowsiness and Attention Warning (DDAW) mandatory from 2022, and similar legislation is in place around the world. Creating a user experience that alerts a driver to act, without scaring them in a way that could cause unexpected reactions, is not easy. Many users will never experience such a warning, but when they do, the UX & HMI design has to be clear and compelling enough to trigger the desired driver reaction. Considering the reaction times and trust in technology across the wide demographic of car users, this is no small task.
DDAW was introduced by the European Union to reduce the number of road accidents. Driver trust in such systems is essential: excessive false alert rates breed distrust, which undermines the safety benefit as warnings are ignored or functions are disabled out of frustration. Seeing Machines will discuss the cost of errors, helping to define what “good enough” sensing really means and the technologies capable of providing it.
With smart design, simple audio and visual warnings can be replaced with interactions that enhance the driving experience. Informing a driver that they are tired may not always get the desired reaction; human beings are unpredictable when fatigued. Rather than an alert, an incentive or offer could be created to trigger the desired change in behaviour. McDonald’s or Starbucks could partner with car makers to offer a coupon for free coffee to drowsy drivers. A basic DDAW alarm could be enhanced into a gentler, rewarding user journey as the air conditioning drops a few degrees, the music turns up and navigation routes the driver to a safe place with a free coffee waiting for them. In a world where OEMs are trying to pass cost on to users through subscriptions, this could provide a revenue stream to OEMs to fund a safety improvement for all users.
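The graduated intervention described above can be sketched as a simple mapping from a drowsiness score to cabin actions. This is a minimal illustration, assuming a normalised drowsiness score; all class, field and function names here are hypothetical, not any OEM’s actual API.

```python
# Hypothetical sketch of a graduated drowsiness response.
# Names and thresholds are illustrative assumptions, not a real OEM API.

from dataclasses import dataclass


@dataclass
class CabinActions:
    ac_delta_c: float = 0.0        # change in cabin temperature (deg C)
    volume_delta: int = 0          # change in audio volume steps
    reroute_to_rest_stop: bool = False
    offer_coffee_coupon: bool = False
    audible_alarm: bool = False    # fallback: the basic mandated DDAW warning


def respond_to_drowsiness(level: float) -> CabinActions:
    """Map a drowsiness score in [0, 1] to a graduated response.

    Low scores trigger gentle comfort changes; higher scores add the
    rest-stop reroute and coffee incentive; only the highest scores fall
    back to the basic audible DDAW alarm.
    """
    if level < 0.3:
        return CabinActions()  # attentive driver: do nothing
    if level < 0.6:
        return CabinActions(ac_delta_c=-2.0, volume_delta=+2)
    if level < 0.85:
        return CabinActions(ac_delta_c=-2.0, volume_delta=+2,
                            reroute_to_rest_stop=True,
                            offer_coffee_coupon=True)
    return CabinActions(audible_alarm=True)  # severe: mandated warning
```

A production system would of course layer this on top of the legally required DDAW behaviour rather than replace it, and the thresholds would be tuned per driver and validated against real fatigue data.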
Going beyond basic safety warnings, Driver Monitoring Systems and wider Occupant Monitoring Systems have the potential to enhance the user experience. In-cabin Driver Monitoring Systems can detect whether a driver is happy or frustrated when using a feature in their vehicle. Auto makers can use this data to rethink features that are causing frustration and push updates over the air. This is a topic Blueskeye AI will explore in their presentation at InCabin Europe 2024 in Barcelona.
All too often new technologies can deliver improved functionality at the cost of users having to learn a new method of interacting with the machine. Users of ChatGPT and other AI Assistants soon learn about “prompt engineering”, the skill of forming your query to achieve the desired output. At InCabin Europe TomTom will present their thoughts on how technology should adapt to people to improve the in-cabin navigation experience, not the other way around.
Going beyond today’s personal cars, Shared Autonomous Vehicles (SAVs) will bring new user experience challenges in the future. At InCabin Europe, the Royal College of Art, London, will explore the main areas where design intervention in the interiors of SAVs could create more inclusive journey experiences, including adaptability, safety & security, and navigability & familiarity.
Unleashing the AI Revolution in In-Cabin Technology
Artificial intelligence was constrained to the cloud for many years. The increased computing power now available at “the edge”, in devices embedded in the vehicle, makes it possible to fully embrace the AI revolution.
Among the first examples of the AI revolution in vehicles were early AI-based voice assistants. They could do many things we take for granted today, but they had one major drawback: they only ran in the cloud. Although car connectivity is good in metropolitan areas, there are still large parts of the world with no connectivity. The automotive connectivity challenge is best illustrated by a traffic jam on a rural highway: as drivers start using cloud-based services to ask for assistance, the load on the cellular network increases until the network can’t cope. At the very point where the driver really needed help, the AI assistant would stop working. The driver would be forced onto the embedded/edge alternatives, and these simpler algorithms often required the driver to rephrase their query into simpler tasks the algorithm could manage. At the very moment you needed a smart assistant, it let you down, leaving you with an interface that was unfamiliar and confusing.
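The cloud-first, edge-fallback pattern implied here can be sketched as a small dispatcher. This is a minimal sketch under assumed handler interfaces; the function names are illustrative, not any vendor’s actual API.

```python
# Minimal sketch of a hybrid voice-assistant dispatcher, assuming a cloud
# backend that may be unreachable and a simpler embedded fallback.
# All names are illustrative assumptions.

from typing import Callable


def answer_query(query: str,
                 cloud_available: bool,
                 cloud_handler: Callable[[str], str],
                 edge_handler: Callable[[str], str]) -> str:
    """Prefer the richer cloud model; fall back to the on-device model.

    The fallback fires either when connectivity is known to be absent or
    when a cloud request fails mid-flight (e.g. a congested cell tower).
    """
    if cloud_available:
        try:
            return cloud_handler(query)
        except ConnectionError:
            pass  # network dropped mid-request: degrade gracefully
    return edge_handler(query)
```

In practice the failure detection would also cover timeouts and partial responses, and the edge handler would be a compact on-device model; the point of systems like SoundHound’s is to make that fallback invisible to the user.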
Another constraint of early AI-based voice assistants was their limited ability to handle user-specific, real-time or constantly changing “Big Data” datasets. Combining the flexibility of a large language model (LLM) trained on hours of real-world data with a few text strings from a user’s phonebook or calendar to create a unified user experience was challenging.
SoundHound AI have been market leaders in using the greater resources available at the edge to deliver a voice assistant that combines real-time data and works whether or not the vehicle is connected to the cloud. Stellantis’ DS vehicles were the first to deliver this technology to end users.
The challenge of integrating constantly changing Big Data with Large Language Models has been investigated by TomTom, working with Microsoft. Their approach focuses on addressing complex use cases through the combination of AI and Big Data.
Artificial intelligence is also being used to better understand the in-cabin environment. In-cabin occupants are aware of the environment around the car and have a level of self-awareness. A human co-pilot uses this awareness to modify how they interact with the driver depending on speed, driving conditions, the driver’s mood, and other factors. Until recently, in-cabin systems applied a standard interaction model that took no account of these factors. Advances in DMS now make it possible for in-cabin systems to take them into account.
Cipia will explore some of the new use cases and the value computer vision AI can bring to the cabin. They will consider how this may transform the traditional business model into a world of services and transactions, and even bring public benefits.
Fraunhofer IOSB are investigating how to combine LLMs and Vision Language Models (VLMs) to create a holistic model of the situation in the cabin. Such a model brings us close to giving in-cabin systems the same view as the human occupants.
Ensuring new AI systems are inclusive is another area where we can learn from early voice assistants. Those early systems were derived from military systems whose language models were trained on data from a single sensor, using trained users who followed the manual as they had been taught to do. As these systems were rolled out to the general population, many problems stemmed from real-world user behaviour that differed from that of the trained users the models had learned from. This could lead to scenarios where the user was not understood, false positives, or false negatives. In many cases drivers would stop using the system if the error rate was too high. OEMs invested in extensive data collection and user testing to improve real-world performance, but this approach becomes increasingly expensive as you try to account for greater user diversity.
LGe will present their work on using LLMs to combine inputs from various in-cabin sensors (camera, microphone, IMU, gas sensor, radar, etc.) to comprehend complex patterns and interactions that single-modality models cannot achieve. Combining these sensors improves the reliability and reduces the error rate of Driver Monitoring Systems. The additional sensors also enable new use cases such as alcohol-impaired driving detection and health monitoring.
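As a rough illustration of why multiple modalities help, even a simple late-fusion scheme can combine per-sensor confidence scores and degrade gracefully when a modality drops out. The modality names and weights below are assumptions for illustration; an LLM-based approach like the one described would learn far richer cross-modal patterns than this weighted average.

```python
# Illustrative late fusion of per-modality drowsiness scores.
# Modality names and weights are hypothetical, chosen for illustration.

from typing import Dict, Optional


def fuse_drowsiness_scores(scores: Dict[str, Optional[float]],
                           weights: Dict[str, float]) -> float:
    """Weighted average over whichever modalities actually reported.

    Missing modalities (occluded camera, muted microphone) report None
    and are simply dropped, so the fused estimate degrades gracefully
    instead of failing outright.
    """
    present = {m: s for m, s in scores.items() if s is not None}
    total_w = sum(weights.get(m, 0.0) for m in present)
    if total_w == 0.0:
        raise ValueError("no usable modality reported a score")
    return sum(weights.get(m, 0.0) * s for m, s in present.items()) / total_w
```

The graceful degradation is the point: a camera-only DMS has nothing to fall back on when the camera is occluded, whereas a multi-sensor system still has radar, microphone and IMU evidence to draw on.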
Artificial intelligence can also generate test datasets for algorithms, producing variants based on real-world human factors data. Google and Anyverse will present their work developing in-cabin scenarios that can be used to test new algorithms and shorten development cycles. Their approach generates high-quality synthetic data to perfect system perception, accelerate development, and achieve AI model performance targets.
Diversity & inclusiveness of AI will be considered by one of the panels at InCabin Europe, which will discuss “Leaving no-one behind: leveraging AI to enhance access to complex functionality”. The panel will consider accessibility across the globe as well as inclusion of individuals. Delegates will also have an opportunity to interact and inject their own ideas into the panel discussion, and afterwards at the evening reception.
Summary
Driver & Occupant Monitoring Systems, combined with the AI revolution, allow auto makers to understand their customers better and deliver enhanced user experiences that adapt in real time to the needs of their customers and the environment around them.
Auto makers, working with governments, are striving to embrace these technologies to improve occupant safety. Beyond safety, there are many ways the data from OMS, together with AI’s ability to highlight patterns, can be used to enhance the in-cabin user experience and benefit the wider community. That wider impact will be discussed by the closing panel: “How do ADAS & Autonomy Impact Consumers Beyond Safety”.
InCabin Europe 2024 in Barcelona this October brings together thought leaders to discuss these many shared challenges face to face.