Engineering Trust Into the Autonomous Cabin: A Conversation on AI, Security, and What the Industry Must Do Next

As the cabin becomes human‑aware and autonomy advances, the industry must move beyond intelligent systems to trusted systems where every decision is grounded in verifiable data, secure execution, and hardware‑rooted integrity.

Pav C. Suriyanarayanan is a senior technology leader at Synopsys specializing in automotive safety, cybersecurity, and AI‑aware hardware security. She leads the strategy and deployment of secure, production-scale architectures that enable trusted AI in next-generation automotive SoCs, with a strong focus on in‑cabin and human‑aware systems.

Interview With:
Pavithra C. Suriyanarayanan, Compliance & Automotive Ecosystem Manager, Security IP Group

Read Pav’s insights on AI, security, and what the industry must do next, below ⬇

1. As the industry moves toward higher levels of autonomy, what guidance would you offer on how in-cabin AI should evolve? What should we be doing differently today?

I think the most important shift is recognizing that in-cabin AI is no longer just a perception layer; it's part of the safety decision loop. As vehicles take on more control, the system's understanding of the human becomes critical to how autonomy behaves. Whether it's determining driver readiness, interpreting intent, or adapting system responses, those decisions now directly influence safety.

What that means is we can’t continue treating these as isolated AI models. The industry needs to design them as end-to-end trusted systems. For example, in a driver monitoring use case supporting autonomy handoff, it’s not enough for the AI to be accurate. You have to be certain the camera input hasn’t been manipulated, that the model hasn’t been altered, and that the system behaves consistently every time. That level of assurance only comes when trust is anchored at the hardware level and carried through the entire pipeline.

2. You often describe the cabin as a ‘trust boundary.’ What does that mean in practice, and how should the industry respond to that?

Calling the cabin a trust boundary is really about acknowledging that decisions derived from human understanding must be protected end-to-end. The cabin is where the system interprets the human, and those interpretations increasingly drive action.

Take something like child presence detection. That's a life-critical function. You need confidence that the sensor input is authentic, that the detection logic hasn't been tampered with, and that the alert mechanism cannot be bypassed. Or consider occupant classification tied to airbag deployment: those decisions rely on data that must be both accurate and protected as it moves through the system.

In practice, this means securing every stage of the pipeline. From the moment data enters through a camera or sensor, through the internal movement across interconnects, and into the AI model itself, each transition has to be verifiable. That’s why hardware-rooted mechanisms like secure boot, trusted execution, and protected interfaces are so important. Trust has to be continuous, not something we check only at runtime or in software.
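To make that chain of checks concrete, here is a minimal Python sketch of hardware-anchored verification carried through the pipeline: a boot-time measurement of the model plus a per-frame authenticity check. Stdlib hashing stands in for dedicated crypto hardware, and `EXPECTED_MODEL_DIGEST` and `FRAME_KEY` are illustrative placeholders for values that would really be anchored in a hardware root of trust.

```python
import hashlib
import hmac

# Illustrative values; in a real system these would be provisioned into
# a hardware root of trust and a secure key store, not source code.
EXPECTED_MODEL_DIGEST = hashlib.sha256(b"dms-model-v1").hexdigest()
FRAME_KEY = b"per-session key derived in protected hardware"

def verify_model(model_bytes: bytes) -> bool:
    """Boot-time measurement: the model binary must match the digest
    that was approved and anchored at provisioning time."""
    return hashlib.sha256(model_bytes).hexdigest() == EXPECTED_MODEL_DIGEST

def verify_frame(frame: bytes, tag: bytes) -> bool:
    """Runtime check: each camera frame carries a MAC computed close to
    the sensor, so tampering on the interconnect is detectable."""
    expected = hmac.new(FRAME_KEY, frame, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

The point of the sketch is the structure, not the primitives: inference only runs when both the model measurement and the frame authentication pass, so trust is checked at every transition rather than assumed.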

3. A lot of teams are still heavily focused on perception accuracy. What would you encourage them to think about as these systems move into production?

Accuracy is still important, of course, but once these systems influence decisions, integrity becomes just as critical. A highly accurate model can still produce unsafe outcomes if the data it receives is compromised or if the model itself has been modified.

If you look at driver monitoring, for example, you could have a system that performs extremely well in testing. But if someone can manipulate the camera input or if an unverified model ends up running in the field, then all that accuracy doesn’t translate into reliability.

So the shift I encourage is to think in terms of trusted perception. Can you prove that the input data is authentic? Can you prove that the model running is the one that was validated? Can you ensure that the behavior remains consistent after updates? Those questions are becoming just as important as model performance metrics.

4. Real-time performance is always a challenge in in-cabin systems. How do you approach security without compromising latency or user experience?

The key is to integrate security into the architecture rather than layering it on top. When security is treated as an additional step, it tends to introduce overhead and unpredictability, which is exactly what you want to avoid in real-time systems.

In use cases like gesture control or voice interaction, responsiveness is critical, but those are also entry points for potential attacks. Commands can be spoofed or replayed if the system isn’t protected.

What works well is when security is embedded directly in the data path. Authentication happens as the data moves, encryption is handled through dedicated hardware, and sensitive workloads run in isolated environments. That way, you're not adding latency; you're designing the system so that protection is part of normal operation. When done correctly, security becomes almost invisible from a performance standpoint, but it significantly improves reliability.
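As a rough illustration of in-path protection against the spoofing and replay mentioned above, the sketch below binds each command to a MAC and a monotonic counter. Both checks are cheap enough to sit in the data path. The key handling is deliberately simplified; a real system would derive `KEY` inside protected hardware rather than hold it in software.

```python
import hashlib
import hmac
import struct

KEY = b"session key negotiated over a protected channel"  # illustrative
_last_counter = 0  # monotonic state held on the receiving side

def seal_command(counter: int, payload: bytes) -> bytes:
    """Sender side: bind the command to a monotonic counter so a
    captured message cannot be replayed later."""
    header = struct.pack(">Q", counter)
    tag = hmac.new(KEY, header + payload, hashlib.sha256).digest()
    return header + payload + tag

def accept_command(message: bytes, payload_len: int):
    """Receiver side: verify the MAC and require a strictly increasing
    counter. Returns the payload, or None if spoofed or replayed."""
    global _last_counter
    header, payload = message[:8], message[8:8 + payload_len]
    tag = message[8 + payload_len:]
    expected = hmac.new(KEY, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return None  # spoofed or corrupted
    (counter,) = struct.unpack(">Q", header)
    if counter <= _last_counter:
        return None  # replayed
    _last_counter = counter
    return payload
```

A replayed message fails the counter check even though its MAC is valid, which is exactly the property a voice or gesture command channel needs.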

5. Multimodal AI is becoming central to in-cabin systems. Where do you see the biggest risks or gaps today?

The biggest gap right now is that we’re getting very good at combining signals, but we’re not always paying equal attention to whether those signals can be trusted.

Multimodal systems bring together vision, audio, and vehicle data to form a more complete understanding of the occupant. But if even one of those inputs is compromised or inconsistently handled, it can introduce ambiguity into the system’s decision-making.

For example, if an audio command is injected or a visual signal is manipulated, the system could misinterpret intent, even if the fusion logic is sophisticated. That’s why securing the flow of data across all modalities is so important. You need confidence not just in individual sensors, but in how their data is preserved and aligned as it moves through the system.

When the data is trustworthy, multimodal AI becomes powerful. Without that trust, it becomes difficult to deploy safely at scale.

6. Personalization is a major focus area for OEMs. How can it be delivered while maintaining privacy and security?

Personalization is really where user experience and security intersect most directly. You’re dealing with identity, preferences, and behavioral patterns, which are inherently sensitive.

Take biometric authentication as an example. The system uses AI to recognize the driver and then applies personalized settings. That only works at scale if those biometric templates are protected, if they’re not exposed to the rest of the system, and if they can’t be extracted or spoofed.

The way to approach this is to keep both the data and the computation local and protected. Sensitive information should stay within hardware-protected environments, and access to it should be tightly controlled. When you do that, you can deliver highly personalized experiences without increasing the system’s exposure or compromising user trust.
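A toy sketch of that "keep it local" principle: the enrolled template lives inside one object that only ever answers match or no-match, standing in for a hardware-protected environment. The feature vectors and cosine-similarity matcher are illustrative, not any specific biometric pipeline.

```python
import math

class ProtectedTemplateStore:
    """Toy stand-in for a hardware-protected enclave: the enrolled
    template never leaves this object; callers learn only match / no match."""

    def __init__(self, template, threshold=0.9):
        self._template = list(template)  # would live in protected memory
        self._threshold = threshold

    def matches(self, probe) -> bool:
        """Cosine similarity between the enrolled template and a live probe."""
        dot = sum(a * b for a, b in zip(self._template, probe))
        na = math.sqrt(sum(a * a for a in self._template))
        nb = math.sqrt(sum(b * b for b in probe))
        if na == 0 or nb == 0:
            return False
        return dot / (na * nb) >= self._threshold
```

The design choice being sketched is the interface boundary: the rest of the system can personalize based on a boolean decision without ever seeing, copying, or transmitting the raw template.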

7. Validation and explainability are ongoing challenges. How should the industry evolve its approach here?

Validation has to expand beyond the model. We need to validate the system as a whole, including where and how the model runs.

In real-world deployments, AI models don't stay static. They evolve through updates. That introduces complexity, because behavior can change over time. So it's not just about validating a model once; it's about ensuring that every version that runs in the field is verified, traceable, and consistent with what was approved.

That’s where having a trusted execution environment and a secure update process becomes essential. If you know the model is authentic, the environment is controlled, and updates are properly managed, then you can actually reason about system behavior. Without that foundation, explainability becomes much harder.
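One way to picture such a secure update path, sketched in Python with an HMAC-signed manifest standing in for the asymmetric signatures real OTA systems use. Field names like `version` and `sha256` are assumptions for the example, not a real update format.

```python
import hashlib
import hmac
import json

# Illustrative: real deployments sign manifests with an OEM's private key.
SIGNING_KEY = b"update-signing key (illustrative)"

def sign_manifest(manifest: dict) -> bytes:
    blob = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, blob, hashlib.sha256).digest()

def apply_update(manifest: dict, tag: bytes, model_bytes: bytes,
                 current_version: int) -> int:
    """Accept an update only if the manifest is authentic, the payload
    matches the manifest digest, and the version moves forward
    (no rollback to an older, possibly vulnerable model)."""
    blob = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("unsigned or tampered manifest")
    if hashlib.sha256(model_bytes).hexdigest() != manifest["sha256"]:
        raise ValueError("payload does not match manifest")
    if manifest["version"] <= current_version:
        raise ValueError("rollback rejected")
    return manifest["version"]  # new trusted version; model may now be loaded
```

Because every accepted version is tied to a verifiable manifest, the field fleet stays traceable: you can always say which approved model a given vehicle is running, which is the foundation explainability needs.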

8. Vehicles remain in use for many years. What should the industry prioritize today to future-proof in-cabin systems?

Longevity is a defining challenge in automotive. These systems have to remain secure and functional over a decade or more, while both AI techniques and security threats evolve.

That’s why flexibility at the hardware level is so important. Systems need to support evolving cryptographic methods and be ready for long-term changes, including post-quantum considerations. They also need robust mechanisms to update software and models in a controlled and secure way.
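Cryptographic agility of that kind can be as simple as an extra level of indirection, so verification code never hard-codes one algorithm and a future suite (including an eventual post-quantum one) can be slotted in by re-provisioning. A minimal sketch; the suite names are invented for illustration:

```python
import hashlib

# Algorithm registry: a device can be re-provisioned to a newer suite
# without changing the surrounding verification code. A post-quantum
# signature scheme would register here the same way once standardized
# implementations are deployed.
HASH_SUITES = {
    "suite-2024": hashlib.sha256,
    "suite-2030": hashlib.sha3_256,  # example of a later migration target
}

def digest(suite: str, data: bytes) -> str:
    """Hash `data` under whichever suite the device is provisioned for."""
    try:
        algo = HASH_SUITES[suite]
    except KeyError:
        raise ValueError(f"unknown suite: {suite}")
    return algo(data).hexdigest()
```

The payoff over a decade-plus lifecycle is that a migration becomes a configuration and provisioning event rather than a redesign of every component that touches cryptography.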

Future-proofing isn't about predicting exactly what will change; it's about designing systems that can adapt, while maintaining a consistent trust foundation across their entire lifecycle.

9. If you had to give the industry one guiding principle for next-generation in-cabin platforms, what would it be?

The most important principle is that intelligence and trust have to be designed together.

We're building systems that interpret human behavior and influence how vehicles act, especially in autonomous scenarios. In that context, trust is not something you can assume; it has to be engineered.

That means every part of the system, from sensor input to AI decision, has to be verifiable and protected. When you get that right, you don't just have an intelligent cabin; you have one that can be relied on.

More about Pav C. Suriyanarayanan:

Pav's work centers on integrating hardware-rooted security, including root‑of‑trust, cryptographic engines, and secure interfaces, directly into AI-driven silicon platforms, ensuring that data integrity, model authenticity, and system behavior can be trusted in real time. She has played a key role in delivering multiple industry-first, production-level solutions achieving compliance across global standards, including ISO/SAE 21434, ISO 26262, ISO/PAS 8800, and emerging AI security and governance frameworks.

Pav works closely with OEMs and Tier 1 suppliers worldwide, helping translate evolving regulatory requirements into deployable architectures that meet the demands of safety-critical, intelligent systems. Her contributions have helped establish secure-by-design methodologies that align AI innovation with global compliance expectations, particularly as the industry navigates the convergence of cybersecurity, functional safety, and AI assurance.

A frequent invited speaker at leading automotive and semiconductor forums, including AutoSens InCabin USA, Pav is recognized for advancing a system-level approach that unifies AI, hardware security, and regulatory readiness. Her work continues to shape how the industry builds trusted, scalable, and future-ready in-cabin systems for autonomous mobility, including architectures designed for long lifecycle resilience and post‑quantum security evolution.

Interested in exterior vehicle sensing technology?

With a pass to InCabin USA, you’ll also get full access to our
co-located sister event, AutoSens. Find out more here >>

Enjoyed this interview? Check out more InCabin interviews on our YouTube channel ⬇