The increasingly complex demands of AI-powered in-cabin monitoring systems, and how the industry can balance technical development with responsible use, are hot topics for discussion at InCabin this May. In this article, we bring together the key discussion points on current challenges, industry collaboration, and examples of the latest tech, to set the scene for discussions in Detroit.
We have gathered opinions from expert voices, including Sam Lee, CEO and Co-Founder at Meili Technologies, Modar Alaoui, CEO at Eyeris, and Solmaz Shahmehr, Smart Eye’s Executive Vice President Automotive and Applied AI. Plus, we’ve included valuable insight from leading companies in the field – DEEP-IN-SIGHT, Emotion3D, Cipia, and Sony Depthsensing Solutions.
We asked the experts: What are the future challenges of AI and ethics when navigating the intricate landscape of DMS, and what innovative approaches can you see to tackling them?
Sam Lee, CEO & Co-Founder at Meili Technologies
Honestly, the most innovative approach is also the most obvious: we show, don’t tell. Anyone who has seen our demos knows they don’t just show an alert going off while we say, “oh look, it detected a health emergency”. In our demos, and in our user communication, we give a step-by-step walkthrough of what kinds of data the Meili system uses, how we protect privacy and identity at every step of the process, and what kinds of events and behaviors trigger a Meili response. When people can interact with a system and actually see it respond to them, it goes beyond a good explanation of how the system works; it shows them why they can trust it.
The time and resources it takes to build a system that can show, not tell, are worth every ounce of effort for the way it pays off in adoption.
For more on the challenges of AI for health monitoring, see this presentation from Tobii, available to watch with an AutoSensPLUS subscription.
DEEP IN SIGHT Co.,Ltd.
More functions in in-cabin monitoring systems means more handling of sensitive data. We see a new challenge will emerge from the difficulty of tracking the sensitivity of transformed data. Consider the case where an image of the driver’s face is taken and encoded by a deep-learning network into an embedding for downstream tasks. Embeddings are not human-readable but still contain enough data for specific tasks. Deep-learning models incorporate a variety of such intermediate data, the sensitivity of which is difficult to assess even for the model’s creator. This uncertainty can slow down the adoption of new technology.
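To make the embedding example concrete, here is a minimal sketch of the pattern described above: a face image is encoded into an embedding that several downstream task heads reuse. The encoder, task heads, and dimensions are hypothetical placeholders for illustration, not DEEP IN SIGHT’s actual models.

```python
# Minimal sketch: a face image becomes an embedding reused by downstream tasks.
import torch
import torch.nn as nn

class FaceEncoder(nn.Module):
    """Maps a cropped face image to a fixed-size embedding."""
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embedding_dim),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.backbone(image)

encoder = FaceEncoder()
drowsiness_head = nn.Linear(128, 2)   # downstream task: drowsy / alert
gaze_head = nn.Linear(128, 3)         # downstream task: gaze zone

face = torch.rand(1, 3, 112, 112)     # stand-in for a cropped driver face
embedding = encoder(face)             # not human-readable on its own...
print(drowsiness_head(embedding).shape, gaze_head(embedding).shape)
# ...yet it still carries enough information for multiple tasks, which is why
# its sensitivity is hard to classify once the raw image has been discarded.
```

The point of the sketch is that the intermediate tensor sits somewhere between a raw face image and an anonymous feature, and that ambiguity is exactly what makes its sensitivity hard to assess.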
DEEP IN SIGHT Co.,Ltd. will also take to the InCabin USA stage this May. Check out the Preliminary Agenda.
Cipia
We can definitely see why AI and ethics are a big issue. First, there is the issue of data collection. While most consumers do not think twice when downloading images from the web, for example, many of those images may actually be protected under intellectual property law. And while, technically, training AI on public data does not degrade the data itself, it may still be a violation of copyright and privacy. This is a challenge that is easily solvable – don’t! Our approach has always been to compensate participants in our data collection efforts for their time and for the right to use their data.
A second aspect we hear more and more about is discrimination by AI. When an AI system is trained on a specific group in the population, it may miss characteristics of other groups, which can skew the AI’s decisions and result in different levels of performance across subgroups. Luckily for Cipia, we are forced by the nature of our product to be widely inclusive. Monitoring drivers and occupants worldwide means that we need data from all over the world to reflect the challenges the AI will face in real-world settings. We factor in a variety of attributes such as gender, stature, skin tone, eye aperture, and a variety of clothing and accessories from different cultures, to ensure that no matter where a car is driven and by whom, drivers are entitled to expect the same level of performance as we help them fight distraction or drowsiness and bring them home safely.
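The subgroup-performance idea Cipia describes can be checked with a very simple audit. Here is a minimal sketch, assuming each labelled validation sample carries attribute metadata; the field names and sample data are illustrative, not Cipia’s actual schema.

```python
# Minimal sketch: compare detection accuracy across demographic subgroups.
from collections import defaultdict

def accuracy_by_attribute(samples, attribute):
    """Compute accuracy separately for each value of a metadata attribute."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for s in samples:
        group = s[attribute]
        total[group] += 1
        correct[group] += int(s["prediction"] == s["label"])
    return {group: correct[group] / total[group] for group in total}

# Illustrative validation records with attribute metadata attached.
samples = [
    {"skin_tone": "dark", "eyewear": "glasses", "label": 1, "prediction": 1},
    {"skin_tone": "light", "eyewear": "none", "label": 0, "prediction": 0},
    {"skin_tone": "dark", "eyewear": "none", "label": 1, "prediction": 0},
]

for attr in ("skin_tone", "eyewear"):
    print(attr, accuracy_by_attribute(samples, attr))
# A large gap between subgroups flags a bias that more targeted data
# collection should close before deployment.
```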
For more on the key challenges of AI-based development, check out this presentation from Bosch, available to watch with an AutoSensPLUS subscription.
Don’t miss this panel from InCabin Brussels on ‘Data privacy and transparency within driver monitoring – How do we ensure drivers Trust their DMS data is safe?’, available to watch with an InCabin Europe on-demand pass.
emotion3D
One of the main ethical issues with AI is related to privacy and transparency concerns during the training and validation of algorithms. To solve this issue, we strive to utilise as much synthetic data in our processes as possible. Another privacy-related topic is the security of drivers’ and passengers’ data. During camera-based DMS operation, the data is captured and processed on the edge within the car, meaning it never leaves the car or becomes accessible to third parties. Another ethical concern is the inherent biases present in current passive safety systems. We have been working on eliminating any bias, especially related to gender and body characteristics, through our smart-RCS project where we provide individualised adaptive restraint solutions.
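The edge-processing pattern emotion3D describes can be summarised in a short sketch: frames are analysed inside the vehicle and only derived, non-identifying events are passed on, never the imagery itself. The event fields and processing function below are illustrative assumptions, not emotion3D’s actual software.

```python
# Minimal sketch: in-vehicle processing emits derived events, not images.
from dataclasses import dataclass

@dataclass
class CabinEvent:
    """Derived result that may leave the DMS module; contains no imagery."""
    timestamp_ms: int
    drowsy: bool
    seatbelt_fastened: bool

def process_frame_on_edge(frame_bytes: bytes, timestamp_ms: int) -> CabinEvent:
    # In a real system the in-vehicle SoC would run the DMS neural networks
    # here; this placeholder just emits a fixed event.
    event = CabinEvent(timestamp_ms=timestamp_ms, drowsy=False, seatbelt_fastened=True)
    del frame_bytes  # the raw frame is discarded and never transmitted
    return event

print(process_frame_on_edge(b"\x00" * 100, timestamp_ms=1_712_000_000))
```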
We caught up with Florian Seitner, CEO, for more on what emotion3D are working on in this interview.
Check out this presentation on leveraging 3D information for automotive in-cabin analysis technologies from Emotion3D, available to watch with an InCabin Europe on-demand pass.
Modar Alaoui, CEO, Eyeris
In the context of AI-driven personalization in vehicle cabins, analyzing user data requires the collection and processing of various kinds of information about the occupants first. This can include preferences such as music choices, temperature settings, seat adjustments, and even patterns of behavior while driving.
The ethical considerations come into play when determining the extent of data collection, the transparency in informing users about the data being gathered, and ensuring that the data is used solely for enhancing the driving experience without compromising individual privacy. Striking the right balance involves implementing robust data protection measures, obtaining clear consent from users, and providing them with control over the extent to which their data is utilized for personalization purposes. Ethical development in this area means carefully navigating between providing tailored experiences and respecting user privacy and autonomy.
Find out more about what Eyeris have been working on in this interview with Modar at InCabin Phoenix.
We look forward to Modar joining us in Detroit this year, where he will be chairing our technical sessions on ‘Advanced AI Deployment Strategies for Driver Monitoring Systems’. Find out more on the Preliminary Agenda.
Plus, don’t miss this panel from InCabin Brussels on the ethical debate surrounding impairment detection, available to watch with an InCabin Europe on-demand pass.
So what is the latest technology we’re seeing in this field?
We catch up with Solmaz Shahmehr to find out more about Smart Eye’s LLM integration at CES, plus we look forward to some of the cutting-edge demos that we’ll be seeing in Detroit at InCabin USA this May.
For CES 2024, we knew we wanted to explore ways to combine our automotive sensing technologies, including eye tracking, facial expression analysis, emotion AI, and activity detection, with the latest advancements within generative AI. The result was the Smart Eye Emotion AI Prompt Engine, which seamlessly merges all these technologies to create an empathetic in-vehicle AI companion.
This rich sensory data captured by Smart Eye’s sensing technologies is synergized with the capabilities of large language models (LLMs), such as ChatGPT. The Smart Eye Emotion AI Prompt Engine translates sensory inputs into text prompts, enabling these LLMs to engage in meaningful, context-aware interactions with vehicle occupants. This allows for nuanced interactions with the people in the vehicle, making it more attuned to the human aspects of the occupants.
This is all channeled through an in-vehicle AI companion that equips cars with the ability not only to listen, but to truly understand. By enabling a dynamic interaction between the car and the people in it, it unlocks next-generation safety, comfort, and entertainment.
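The core idea of translating sensing outputs into text prompts can be illustrated with a short sketch. The signal names, thresholds, and prompt wording below are hypothetical assumptions for illustration; this is not Smart Eye’s actual Emotion AI Prompt Engine.

```python
# Minimal sketch: turn in-cabin sensing signals into a context prompt for an LLM.
def build_prompt(signals: dict) -> str:
    """Convert sensing outputs into a natural-language context prompt."""
    observations = []
    if signals.get("eyes_off_road_s", 0) > 2.0:
        observations.append("the driver's eyes have been off the road for several seconds")
    if signals.get("drowsiness_level", 0) > 0.7:
        observations.append("the driver appears drowsy")
    if signals.get("dominant_emotion"):
        observations.append(f"the driver's expression suggests {signals['dominant_emotion']}")
    context = "; ".join(observations) or "no notable driver state detected"
    return (
        "You are an in-vehicle assistant. Current cabin observations: "
        f"{context}. Respond with a short, supportive, non-alarming message."
    )

prompt = build_prompt({"eyes_off_road_s": 3.1, "dominant_emotion": "frustration"})
print(prompt)  # this text would then be sent to an LLM such as ChatGPT
```

The design choice is that the language model never sees raw camera data; it only receives the derived, text-level description of the cabin state.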
To us, InCabin offers a unique opportunity to meet people who are as passionate about in-cabin technologies as we are. It’s an event that really brings a niche part of the industry together, making it a hotspot for exchanging knowledge and driving innovation along with like-minded customers and partners.
We’re really looking forward to showing off our latest technologies and will be bringing a smaller version of the tech showcased in our car simulator at CES 2024. This will give visitors a sense of the wide array of use cases unlocked by our Automotive Interior Sensing AI, from improved safety to enhanced comfort and entertainment. We will also be demonstrating our AIS driver monitoring system, a complete hardware and software system tailored for small-volume vehicle manufacturers and aftermarket installation for vehicle fleets.
Outside of the Smart Eye booth, we can’t wait to see our Chief Research Officer Henrik Lind talk on stage about how 3D depth sensing can unlock even more advanced in-cabin functions, beyond the two-dimensional. Click here for more about Henrik’s presentation and the rest of the InCabin USA Preliminary Agenda.
Sony Depthsensing Solutions
We will bring a demo car featuring the latest sensors and sensing solutions from Sony. These cover both exterior and interior sensing, so it is fitting that the Detroit show again combines the two areas in one event. Besides the latest hardware for reliable detection, we will also demo software matched to the various sensors. Our idea is to provide individual safety concepts built from a range of sensing solutions that meet the exact needs of each OEM’s platforms and safety concept. In Detroit we are excited to see the latest developments and how the whole industry is supporting safer cars that protect occupants and pedestrians.
Check out what Sony brought to InCabin Phoenix in this video interview!
Finally – why is it so important for engineers to connect on this topic?
What are the benefits of cross-industry collaboration, and how can it transform not only the way we develop AI-powered monitoring systems but also the way we respond to ethical and data-based challenges?
Sam Lee, CEO & Co-Founder at Meili
Given our focus at Meili, ethics in AI and driver monitoring, specifically, is near and dear to our hearts. I started Meili for my dad, who had severe epilepsy, so everything we build has ethics in mind as we design a product I would want my dad to be able to feel comfortable using.
Of course, given we’re a software-only product, seeing the latest in-cabin hardware and which sensors are being used and tested to detect everything from visual cues to vital signs is incredibly interesting for us. I’m thrilled to be attending InCabin Detroit and to see the incredible technologies everyone else is working on. This is the event most dedicated to in-cabin sensing and high-tech applications that I’ve been to yet and, as a result, is incredibly helpful for connecting with clients and peers with a shared vision and mission.
To read more from Sam, check out our latest interview with her here.
We’re excited to see Sam take the stage at InCabin USA, presenting on ‘Communicating the Uses and Function of Driver Monitoring Systems to End Users’. Click here to find out what else is on the Preliminary Agenda.
DEEP-IN-SIGHT
Unlike classic computer algorithms, applications based on neural networks use statistical and probabilistic representations. Deep-learning networks are practically black boxes to everyone outside the core AI engineering team. It is hard to imagine that a third-party entity could analyze and assure the ethical use of AI technology the way static analysis of source code does in the conventional software development process. Engineers working on different applications must be involved in shaping new procedures to prove the safety of the AI models we create.
emotion3D
We believe it is extremely important for engineers to connect on topics such as ethics, especially when working in challenging areas such as AI, as it should be a central consideration in their day-to-day roles. Being able to see the real-world application of their work motivates engineers to reach higher standards, and communicating on these topics in specialised environments helps them feel responsible and hold each other accountable. As the whole field moves in this direction and focuses on critical topics such as ethics, general public acceptance will also likely increase. At emotion3D, we encourage collaboration between engineers and business professionals to create a mutual understanding of their roles and challenges and to build the big picture together.
For more on the increasing demands on AI, watch experts come together on this panel, available to watch with an AutoSensPLUS subscription.