We spoke to Carmen Pătrașcu, who leads Sales and Marketing for Tobii’s interior sensing, about the world’s first single-camera DMS+OMS, now on the road.
In her role, Carmen turns technology into a story people want to buy into, and plans into real programs, while ensuring partner and customer needs shape the roadmap. Her focus is always the same: bring clarity, build confidence, and turn strategy into adoption.
When you first set out to build a single-camera solution, did you realize what you were getting into? What was the hardest part early on?
We expected it to be tough, but we didn’t know just how “rewrite-the-playbook” tough. A single camera that covers both the driver and occupants meant moving from a narrow field-of-view mindset to a wide field-of-view reality, and the industry was (very) skeptical. Many OEMs had an inherent resistance to changing something that “already works,” and a lot of people believed you simply could not hit the same performance KPIs with a wider view. So, we were fighting two battles at once: the technical one, and the “convince the world” one.
This translated into a full rethink: data infrastructure, acquisition systems, synchronization methods, camera setups, corner-case scenarios, a whole architecture revamp. We experimented with different hardware variants and data capture approaches, then brought those learnings back into the product. It was a lot of curiosity, a lot of “okay, let’s try this,” and a lot of resilience. And when it got hard (because it did), what kept us going was the belief that it would make life better for drivers and passengers, not just add another layer of annoying “monitoring.”
How did the team and the way you work evolve over these years?
The product is a mirror of the team behind it. Over the years, we have assembled a group that’s technically strong and genuinely motivated by solving real problems. To put this in perspective, we have colleagues who went to school with co-founders of OpenAI. We implemented an organizational redesign to match the stages of development, and we upgraded our processes to fully comply with automotive standards. In this world, “it works in the lab” doesn’t count. You need robustness, repeatability, and a delivery rhythm you can trust.
We also tightened the loop from “data” to “deployable software,” so we could move faster without breaking quality. That meant better tooling and validation, and stress testing both statistically and in realistic driving scenarios. It becomes a very practical kind of innovation: creative and disciplined. It can be intense, but it’s also energizing when you can feel progress week by week, and when every release gets you closer to something you can put on the road with confidence.
Four years is a long time to build something truly new. What did those years produce?
While the market was shifting from camera systems to embedded platforms, we were building a whole new solution while constantly checking ourselves against our partners’ reality: compute limits, relevant KPIs, and custom datasets. That feedback loop saved us from building something beautiful that no one could ship.
We see continuous innovation and resilience at our core. We delivered more than 15 software releases and hundreds of thousands of lines of production code. We worked with a wide diversity of data, hundreds of subjects, and a huge number of acquisition scenarios. And the most satisfying part? Seeing it move from “this is a bold idea” to “this is real, validated, and ready.”
What is Autosense in a nutshell, why does a single-camera approach matter, and what are you most excited to push next?
We took something that felt impossible and made it simple for customers: one production-ready system that can handle both safety-driven requirements and experience-driven features, without adding complexity or cost. With a single camera, you can cover the regulated driver features (like attention, drowsiness, and eyes-on-road) and sense the rest of the cabin: seat occupancy, child presence detection, posture and body position, and more. The magic is the cost-to-value ratio: fewer components, less integration overhead, and a solution that’s homologated and validated for production.
Safety is the foundation for everything, and we’re actively aligned with the direction of regulation and rating schemes, participating in industry forums to stay close to what’s coming next. But beyond compliance, my favorite part is the human part: the system is designed to work with people, not against them. It reduces that “why is my car nagging me?” feeling and replaces it with meaningful help.
We’ll keep driving costs down, expanding compatibility with more platforms and partners, and improving real-world performance. This also opens the door to richer occupant understanding and smarter assistance, so the car becomes a trustworthy co-pilot that supports you. That’s the future of mobility I’m excited about, one that breaks through the technology barriers we face today.
If you want to see this solution in action, visit us at InCabin Detroit, booth 100.
Interested in exterior vehicle sensing technology?
With a pass to InCabin USA, you’ll also get full access to our co-located sister event, AutoSens. Find out more here >>