Modern vehicle safety relies on restraint systems and airbags, but under certain conditions these protective measures can pose unintended risks to occupants: a full-force airbag deployment, for example, can injure a child or a small, out-of-position adult. To mitigate this, a vehicle must accurately recognize the size, weight, and body posture of its passengers at the moment of impact. A promising approach is sensor fusion, the integration of camera and radar technologies.
Currently, cars are equipped with numerous sensors and electronic control units (ECUs), each designed to meet an individual legislative requirement. This fragmented approach drives up complexity and hardware redundancy. Our mission is to streamline future vehicle architectures by reducing the number of sensors and ECUs and consolidating in-cabin sensing into two core modalities: camera and radar. Through sophisticated sensor fusion, we create a comprehensive occupant model that serves as the foundation for a diverse range of applications, from advanced safety mechanisms to enhanced in-cabin comfort features.
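To make the fused occupant model concrete, below is a minimal Python sketch of one way the two modalities could be combined. All names, fields, and thresholds here (CameraReading, RadarReading, OccupantModel, the 0.5 confidence cutoff) are illustrative assumptions, not our production interfaces: the camera contributes size and posture, the radar contributes presence and a weight estimate, and either modality can establish occupancy when the other is degraded.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified sensor readings; real camera and radar pipelines
# expose far richer data (keypoints, point clouds, vital signs, ...).
@dataclass
class CameraReading:
    occupant_detected: bool
    size_class: str           # e.g. "child", "small_adult", "adult"
    posture: str              # e.g. "upright", "leaning", "out_of_position"
    confidence: float         # 0.0 .. 1.0

@dataclass
class RadarReading:
    presence: bool            # motion- or vital-sign-based presence
    weight_estimate_kg: Optional[float]
    confidence: float         # 0.0 .. 1.0

@dataclass
class OccupantModel:
    present: bool
    size_class: Optional[str]
    posture: Optional[str]
    weight_kg: Optional[float]

def fuse(camera: Optional[CameraReading],
         radar: Optional[RadarReading]) -> OccupantModel:
    """Merge camera and radar readings into a single occupant model."""
    cam_ok = camera is not None and camera.confidence >= 0.5
    rad_ok = radar is not None and radar.confidence >= 0.5
    # Either modality alone may assert occupancy, so one occluded or
    # degraded sensor cannot silently suppress a safety-relevant occupant.
    present = (cam_ok and camera.occupant_detected) or (rad_ok and radar.presence)
    return OccupantModel(
        present=present,
        size_class=camera.size_class if cam_ok and camera.occupant_detected else None,
        posture=camera.posture if cam_ok and camera.occupant_detected else None,
        weight_kg=radar.weight_estimate_kg if rad_ok and radar.presence else None,
    )

# Example: the camera is blinded by glare, but radar still detects a child.
model = fuse(
    camera=CameraReading(False, "unknown", "unknown", confidence=0.2),
    radar=RadarReading(True, weight_estimate_kg=22.0, confidence=0.9),
)
print(model)  # present=True, weight from radar, size/posture unknown
```

The OR in the presence check is a deliberate design choice: for safety functions, missing an occupant is worse than a false positive, so presence requires only one sufficiently confident sensor.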
In this presentation, we will explore the transformative potential of sensor fusion, discussing how this approach addresses critical edge cases that no single-sensor system can manage on its own: a camera blinded by direct sunlight, for instance, or an occupant hidden under a blanket whose breathing is still detectable by radar. By combining camera-derived biometric data with radar data, we establish a more reliable and adaptive in-cabin sensing solution that supports a safer and more comfortable driving experience.
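The talk does not prescribe a specific fusion algorithm, but as a small illustration of why two imperfect sensors outperform one, the sketch below applies inverse-variance weighting, a textbook estimation technique in which each sensor's estimate counts in proportion to its certainty. The numbers are made up for the example.

```python
def fuse_estimates(x_cam: float, var_cam: float,
                   x_rad: float, var_rad: float) -> float:
    """Inverse-variance weighted fusion of two noisy estimates of the
    same quantity (here: occupant weight in kg)."""
    w_cam = 1.0 / var_cam
    w_rad = 1.0 / var_rad
    return (w_cam * x_cam + w_rad * x_rad) / (w_cam + w_rad)

# Camera infers ~70 kg from body size (uncertain, variance 100); radar
# estimates 62 kg (more certain, variance 25). The fused value, roughly
# 63.6 kg, leans toward the more reliable sensor.
print(fuse_estimates(70.0, 100.0, 62.0, 25.0))
```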