Exploring automotive audio for the Software Defined Vehicle

We catch up with Jose Maria Marin, Director of Global Sales – Software Defined Audio, at BlackBerry QNX to discuss the future of automotive audio for the Software Defined Vehicle. Don’t miss Jose’s session at InCabin USA this May on ‘Software Defined Audio/Role of Voice and Audio Processing in Automotive Safety and Comfort’. Find out more here.

1. Could you share more about your journey from AWS Industry Products to your current role as Director of Global Sales, Software Defined Audio, at BlackBerry QNX? How has this trajectory influenced your perspective on the automotive audio landscape? 

I have transitioned from QNX to Amazon and then back to QNX. My professional mission is to help automakers deliver what the industry has come to call the Software Defined Vehicle. In my opinion this can only be achieved by leveraging the best software practices and technologies from both the embedded and the cloud worlds, and QNX and AWS are leading the way in integrating cloud and embedded solutions, sometimes even joining forces to deliver co-developed solutions. I think the collaboration between these two tech companies is only scratching the surface of its potential.

2. Given your extensive background in audio and acoustics, how have you witnessed the evolution of automotive audio solutions, especially in the context of the software-defined vehicle age?

Interestingly, automotive audio is one of the domains that has experienced the least consolidation over the last decade, if you compare it with cockpit screens for example, which today all run from the same SoC (System on Chip). While there is notable segmentation in audio architectures, there remains an opportunity for greater collaboration among OEM teams to establish a cohesive soundscape for vehicles. If you think about it, everything that makes sound in the car, which can be grouped into voice, media, noise and alerts, can all be managed by the same software platform. However, only a few OEMs are thinking about it this way. One of the main reasons for this is that they still don’t look at the whole vehicle from a single SW platform perspective. Another is that the SoCs were not yet ready to support this many functions. This is all about to change with the next generation of SoCs and software platforms like QNX Sound.
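
To make that grouping concrete, here is a minimal, purely illustrative C++ sketch of a single arbitration point for every sound source in the cabin. All names here (SoundPlatform, SoundRequest, the toy priority policy) are hypothetical assumptions for illustration and do not represent QNX Sound’s actual API.

```cpp
// Hypothetical sketch: one software platform arbitrating every sound
// in the car, grouped into voice, media, noise and alerts.
#include <iostream>
#include <string>
#include <vector>

// The four groups mentioned above.
enum class SoundCategory { Voice, Media, Noise, Alert };

struct SoundRequest {
    std::string source;     // e.g. "FM radio", "seatbelt chime"
    SoundCategory category;
    int priority;           // higher wins when requests conflict
};

// A single entry point for all cabin audio, instead of each ECU or app
// driving the speakers independently.
class SoundPlatform {
public:
    void request(const SoundRequest& r) {
        active_.push_back(r);
        arbitrate();
    }
private:
    void arbitrate() {
        // Toy policy: the highest-priority request takes audio focus.
        const SoundRequest* winner = nullptr;
        for (const auto& r : active_)
            if (!winner || r.priority > winner->priority) winner = &r;
        if (winner)
            std::cout << "Focus: " << winner->source << '\n';
    }
    std::vector<SoundRequest> active_;
};

int main() {
    SoundPlatform platform;
    platform.request({"FM radio", SoundCategory::Media, 10});
    platform.request({"collision warning", SoundCategory::Alert, 100});
}
```

In this toy policy a safety alert simply outranks media; a production platform would layer per-category rules (ducking, mixing, routing to speaker zones) on top of the same single entry point.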

3. Could you highlight the primary challenges automakers face in adopting advanced audio software platforms for software-defined vehicles? Conversely, what opportunities does this present for OEMs in terms of innovation and differentiation? 

I think one of the challenges often encountered relates to organizational structures. Many OEMs have built deep dependencies on Tier 1s and technology suppliers, creating a very segmented sourcing process to access the core technology, which someone then needs to integrate with the rest of the systems in the car. In many cases they also try to re-use legacy architectures to keep development costs under control. But OEMs now have the opportunity to take a holistic approach to sound and consolidate all sources onto a single platform. This technology, and the processors to support it, were not really ready until the current generation of SoCs. Now that they are, OEMs will not need to spend all their budget on low-level integration, and will be able to focus their efforts on creating valuable experiences, which is what really attracts customers and can differentiate the car. The opportunity for OEMs is to use sound as a storytelling asset, and make it part of the DNA of the car.

4. You highlighted the need for platforms ready for modern CI/CD SW development pipelines and digital twin development in the cloud, unleashing the power of AI. How do you see AI integration shaping the future of automotive audio, and what specific benefits does it bring to the table?

Automotive software architectures are often redesigned on each development cycle, in conjunction with specific hardware that often gets tested only once it is flashed onto the target’s memory. Modern SW development allows developers to continuously develop, test and modify their software in the cloud, months before the target hardware is available, and AI can also help write that software. Besides, cars generate a lot of data; if this data is used wisely, AI can help us learn and adapt to the occupants’ needs and even predict situations like vehicle maintenance. Connected cars allow the software to be updated continuously, constantly improving the experience of the occupants. Voice assistants are just one example of how AI is already being used by customers in cars today, improving their safety by letting the driver operate the functions of the car without having to look at or touch a screen.
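
As a small illustration of the cloud-first testing idea, here is a self-contained C++ sketch: a deterministic PCM mixing routine with host-side unit checks that could run on every commit in a cloud CI pipeline, long before target hardware exists. The function name, the soft-clipping policy and the checks are assumptions made up for this example, not QNX or AWS code.

```cpp
// Hypothetical sketch: pure signal-processing code can be unit-tested
// on any host or CI runner before it is ever flashed onto a target.
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Mix two 16-bit PCM streams sample by sample, clamping to the int16_t
// range so behavior is identical on host and on target.
std::vector<int16_t> mix(const std::vector<int16_t>& a,
                         const std::vector<int16_t>& b) {
    std::vector<int16_t> out(std::min(a.size(), b.size()));
    for (std::size_t i = 0; i < out.size(); ++i) {
        int32_t s = int32_t(a[i]) + int32_t(b[i]);
        out[i] = int16_t(std::clamp<int32_t>(s, -32768, 32767));
    }
    return out;
}

int main() {
    // These checks would run on every commit in a cloud CI pipeline.
    assert(mix({1000}, {2000})[0] == 3000);        // plain addition
    assert(mix({32000}, {32000})[0] == 32767);     // clamps, never wraps
    assert(mix({-32000}, {-32000})[0] == -32768);  // symmetric clamp
    return 0;
}
```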

5. As we approach the InCabin USA 2024 conference, what aspects of the event are you most looking forward to? Is there a particular theme, discussion, or networking opportunity that you find especially exciting?

I attended last year’s InCabin conference in Brussels, and I found it really interesting to see so many SW companies trying to bring their products to the automotive space, even when they don’t originally come from this industry. The auto industry is going through a deep transformation, and SW is playing a key role in it. I am curious about how incumbent and non-automotive SW companies perceive what is happening, and eager to learn what solutions they are proposing to enable faster SW innovation. The cabin is where OEMs interact with their customers, so I am ultimately interested in what new technologies are coming to improve the occupants’ experience and safety.

Don’t miss key conversations at InCabin USA this May. Get your pass here.

2024 ADAS GUIDE

The state-of-play in today's ADAS market

With exclusive editorials from Transport Canada and SAE, the ADAS Guide is a free resource for our community. It gives a detailed overview of features in today’s road-going vehicles, categorized by OEM, alongside expert analysis.