How can in-cabin systems move beyond single-sensor detection toward robust, real-time understanding of occupant state and behaviour? This session explores the complementary roles of radar and vision-based sensing, from core detection and classification through to more advanced analysis of occupant activity and intent.
It will examine how multi-modal sensor fusion can enhance reliability across varied conditions, improving performance in situations where individual modalities fall short, such as low light, occlusion, or complex cabin dynamics. Alongside this, the session will unpack the advantages and trade-offs of each sensing approach, including cost, compute, and integration complexity, as well as the practical challenges of deploying fused systems at scale.
Drawing on real-world implementations, it will demonstrate how these technologies perform in practice, offering insight into the current state of in-cabin perception and what is required to move toward more consistent, system-level intelligence.
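The fusion idea described above can be illustrated with a minimal sketch: each modality reports both an estimate and a self-assessed confidence, and the fused result leans on whichever sensor is currently reliable. All names here (`SensorReading`, `fuse`) are hypothetical illustrations, not an actual production API.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """Hypothetical per-modality output: probability that a seat is
    occupied, plus the sensor's self-reported confidence in [0, 1]."""
    occupied_prob: float
    confidence: float

def fuse(radar: SensorReading, vision: SensorReading) -> float:
    """Confidence-weighted average of two modality estimates.
    When one modality degrades (e.g. vision in low light), its
    confidence drops and the fused estimate leans on the other."""
    total = radar.confidence + vision.confidence
    if total == 0.0:
        return 0.5  # no usable evidence from either sensor; stay agnostic
    return (radar.occupied_prob * radar.confidence
            + vision.occupied_prob * vision.confidence) / total

# Low light: vision confidence collapses, so radar dominates the result.
night = fuse(SensorReading(0.9, 0.8), SensorReading(0.2, 0.1))
# Daylight: both modalities agree with high confidence.
day = fuse(SensorReading(0.9, 0.8), SensorReading(0.95, 0.9))
```

Real systems typically go further (Kalman or Bayesian filtering over time, learned fusion networks), but even this simple weighting shows why fused systems can stay dependable when a single modality fails.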


By engineers, for engineers: a technically grounded guide to the rapidly evolving in-cabin technology industry and the companies shaping it.