In this week’s podcast, Amol Borkar from Cadence and I explore the major trends in high-performance audio and voice AI. We also investigate how demanding new use cases in automotive are dramatically increasing audio workloads, and how the Tensilica HiFi iQ DSP is addressing the growing requirements for voice AI and immersive audio.
Links for February 20, 2026
More information about HiFi iQ DSP
Cadence Tensilica HiFi iQ DSP: Powering Next-Generation Voice AI and High-Performance Audio (Video)
Click here to check out the Fish Fry Archive.
Click here to subscribe to Fish Fry via Podbean
Click here to get the Fish Fry RSS Feed
Click here to subscribe to Fish Fry via Apple Podcasts
Click here to subscribe to Fish Fry via Spotify
Amelia’s Weekly Fish Fry – Episode 670
Transcript
Amelia Dalton:
Hello everyone, and welcome to Episode 670 of Amelia’s Weekly Fish Fry, the electronic engineering podcast brought to you by EEJournal.com and written, produced, and hosted by yours truly, Amelia Dalton.
In this week’s episode, I’m joined by Amol Borkar from Cadence Design Systems. We explore the major trends in high-performance audio and voice AI, examine how demanding new automotive use cases are dramatically increasing audio workloads, and discuss how the new Tensilica HiFi iQ DSP is addressing the growing requirements for voice AI and immersive audio.
So without further ado, please welcome Amol to Fish Fry.
Major Trends in High-Performance Audio and Voice AI
Amelia:
Hi Amol, thank you so much for joining me.
Amol Borkar:
Hey Amelia, nice to talk to you again. Always a pleasure having a conversation with you.
Amelia:
Always a pleasure having you on as well. So Amol, let’s start with the big picture. What major trends are you seeing these days in high-performance audio and voice AI?
Amol:
That’s a great question. In the audio space, there has always been a demand for high-quality immersive audio and best-in-class processing. But what we’re seeing over the next few years is a major shift: bringing immersive audio experiences from the home into the car.
Consumers now want a realistic, concert-like experience while sitting in traffic or going on a drive. So immersive in-car audio is a major growth area.
At the same time, voice is becoming the new keyboard. Whether you’re in your car, using a hands-free device, or walking around with your phone, voice-based navigation of user interfaces is becoming increasingly important.
And of course, AI integration is everywhere. Voice-driven AI applications are now incorporating small language models (SLMs) and other AI technologies to make interactions more immersive and realistic.
Automotive Audio Subsystems and Increasing Workloads
Amelia:
There’s clearly a lot happening with audio in cars. Can you walk us through the audio subsystem and how these new use cases are driving dramatic increases in audio workloads?
Amol:
Absolutely. Automotive audio subsystems are becoming incredibly sophisticated.
You have:
- Spatial and 3D audio rendering
- Audio alerts and ADAS warnings
- Immersive audio playback
- Active noise cancellation (ANC) to suppress engine and road noise
- Personalized sound zones or “sound bubbles” for individual passengers
- Advanced voice processing integrated with natural language processing
- Improved streaming and codec support
If we look at audio playback today in mid- to high-end vehicles, you might see:
- 16 speakers
- 12-channel Dolby Atmos streams at 48 kHz
- High-quality post-processing and mixing
In the near future, we’re looking at:
- 30+ speakers
- 96 kHz or higher super-sampled audio
- Advanced channel rendering and optimization
- Increased AI-enhanced processing
That translates into roughly 2x the compute performance required just for playback.
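To make that scaling concrete, here is a rough back-of-envelope sketch in Python. The linear-scaling assumption (DSP work proportional to channels × sample rate × per-sample operations) is ours for illustration, not a Cadence model:

```python
# A back-of-envelope sketch (our assumption, not a Cadence model):
# per-channel DSP work scales roughly linearly with sample rate, so
# moving a fixed playback chain from 48 kHz to 96 kHz audio roughly
# doubles the compute needed -- consistent with the ~2x figure above.

def playback_load(channels: int, sample_rate_hz: int, ops_per_sample: int) -> float:
    """Approximate operations per second for a playback chain."""
    return channels * sample_rate_hz * ops_per_sample

today  = playback_load(channels=12, sample_rate_hz=48_000, ops_per_sample=500)
future = playback_load(channels=12, sample_rate_hz=96_000, ops_per_sample=500)

print(f"Relative playback workload: {future / today:.1f}x")  # -> 2.0x
```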
At CES 2026 in Las Vegas, I saw cars with 30 to 40 speakers inside the cabin. It was astonishing—not just the engineering feat of fitting them in, but the listening experience. It felt like sitting at a live concert.
Now consider active noise cancellation. Today’s systems may use:
- 4–6 microphones
- 4 secondary speakers
- Classical ANC algorithms
Future systems may include:
- 8+ microphones
- 8–10 secondary speakers
- Multi-channel ANC
- AI-enhanced algorithms
That could mean a 2–3x workload increase.
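For readers curious what a “classical ANC algorithm” looks like at its core, here is a toy single-channel LMS canceller in Python. Production automotive ANC is typically multi-channel FxLMS with secondary-path modeling; every signal and parameter below is illustrative:

```python
import numpy as np

# Toy single-channel LMS noise canceller -- a minimal stand-in for the
# classical ANC algorithms mentioned above. All signals are synthetic.

rng = np.random.default_rng(0)
n_samples, n_taps, mu = 20_000, 32, 0.01

ref = rng.standard_normal(n_samples)                 # reference mic: engine/road noise
path = rng.standard_normal(n_taps)
path /= np.linalg.norm(path)                         # acoustic path into the cabin
noise_at_ear = np.convolve(ref, path)[:n_samples]    # noise arriving at the listener

w = np.zeros(n_taps)                                 # adaptive filter weights
residual = np.zeros(n_samples)
for n in range(n_taps - 1, n_samples):
    x = ref[n - n_taps + 1 : n + 1][::-1]            # most recent reference samples
    anti_noise = w @ x                               # filter output (anti-noise)
    residual[n] = noise_at_ear[n] - anti_noise       # what the error mic hears
    w += mu * residual[n] * x                        # LMS weight update

print(f"Residual noise power: {np.mean(residual[-1000:] ** 2):.4f} (unity at start)")
```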
Voice processing is evolving even more dramatically. Today, many cars have only one or two microphones and limited command vocabularies. In the future, we’ll see:
- Larger microphone arrays
- Advanced beamforming
- Support for passengers throughout the cabin
- On-device speech-to-text and text-to-speech
- Integration with SLMs and AI inference
Here, workloads may increase anywhere from 2x to 8x.
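As a concrete illustration of the beamforming step, here is a minimal delay-and-sum beamformer sketch in Python. The array geometry, sample rate, and source angle are illustrative assumptions, not a description of any shipping system:

```python
import numpy as np

# Minimal delay-and-sum beamformer: align the microphone channels toward
# the talker's direction, then average so speech adds coherently while
# uncorrelated sensor noise averages down by roughly the mic count.

fs = 16_000                           # sample rate (Hz)
c = 343.0                             # speed of sound (m/s)
n_mics, spacing = 8, 0.04             # 8-mic linear array, 4 cm pitch
theta = np.deg2rad(30)                # talker direction (from broadside)

t = np.arange(0, 0.1, 1 / fs)
source = np.sin(2 * np.pi * 440 * t)  # stand-in for speech

# Each mic sees the plane wave with a small geometric delay, plus noise.
delays = np.arange(n_mics) * spacing * np.sin(theta) / c
mics = np.stack([np.interp(t - d, t, source, left=0.0) for d in delays])
mics += 0.3 * np.random.default_rng(1).standard_normal(mics.shape)

# Steering: undo each mic's delay so the channels add coherently.
aligned = np.stack([np.interp(t + d, t, m, right=0.0) for d, m in zip(delays, mics)])
beam = aligned.mean(axis=0)

print(f"Noise variance per mic: {0.3 ** 2:.3f}, "
      f"after beamforming: {np.var(beam - source):.3f}")
```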
Across all of these use cases, AI is becoming deeply integrated, driving significant growth in compute requirements.
Emerging Next-Generation Applications
Amelia:
Beyond automotive, what other next-generation applications are emerging?
Amol:
Several exciting areas.
First, deeper AI integration in voice applications. Traditional signal processing approaches are increasingly being replaced with neural network-based implementations. This enables:
- Advanced speech-to-text and text-to-speech
- AI agents
- Real-time Q&A systems
- More realistic, conversational virtual assistants
Second, the rise of small language models (SLMs). These are lightweight, targeted AI models optimized for embedded systems—often in the hundreds of millions to a few billion parameters.
SLMs are ideal for:
- Running fully on-device
- Operating without cloud connectivity
- Fine-tuning for specific use cases
At CES, one of our partners demonstrated an SLM trained on an electric vehicle user manual. Instead of reading the manual, users could simply ask the car how to adjust a setting. The system performed:
- Speech-to-text
- Text processed by the SLM
- Text-to-speech response
All locally.
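Structurally, that demo pipeline looks something like the sketch below. The three stage functions are hypothetical placeholders standing in for on-device ASR, SLM, and TTS engines:

```python
# Structural sketch of the fully on-device pipeline described above:
# speech-to-text -> SLM -> text-to-speech. All three stage functions are
# hypothetical placeholders; a real deployment would bind them to
# on-device STT/TTS engines and a quantized small language model.

def speech_to_text(audio_frames: bytes) -> str:
    # Placeholder: a real system would run an on-device ASR model here.
    return "how do I adjust the regenerative braking?"

def slm_answer(question: str) -> str:
    # Placeholder: a real system would query an SLM fine-tuned on,
    # e.g., the vehicle's user manual.
    return "Open Settings > Driving, then choose a regen braking level."

def text_to_speech(text: str) -> bytes:
    # Placeholder: a real system would synthesize audio on-device.
    return text.encode("utf-8")

def handle_voice_query(audio_frames: bytes) -> bytes:
    question = speech_to_text(audio_frames)   # 1. speech-to-text
    answer = slm_answer(question)             # 2. SLM reasoning
    return text_to_speech(answer)             # 3. text-to-speech, all local

print(handle_voice_query(b"\x00" * 320))
```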
Finally, immersive audio continues to advance with richer 360-degree surround sound, greater elevation effects, and enhanced realism—enabling more lifelike listening experiences across devices.
Introducing the Tensilica HiFi iQ DSP
Amelia:
Let’s talk about your latest solution. How does the new Tensilica HiFi iQ DSP address these increasing requirements?
Amol:
We designed the Tensilica HiFi iQ DSP specifically for next-generation voice AI and immersive audio.
This is our sixth-generation DSP for voice and audio. It builds on prior success but introduces a new architecture to deliver significant performance gains.
Key highlights include:
- Up to 8x increase in AI performance
- Wider SIMD execution units
- Up to 2x improvement in raw performance
- 40%+ improvement in many immersive audio codecs
- Support for FP8 and BF16 for modern AI workloads (see the sketch after this list)
- Approximately 25% average energy savings
- Up to 50% energy savings for AI workloads
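To illustrate why BF16 is attractive for AI workloads, here is a small Python sketch that simulates bfloat16 by truncating float32 values. The truncation (rather than hardware-style round-to-nearest) is a simplification; FP8 formats such as E4M3/E5M2 shrink the representation further:

```python
import numpy as np

# bfloat16 keeps float32's 8-bit exponent (same dynamic range) but
# truncates the mantissa to 7 bits, halving storage and MAC width at a
# cost of roughly 0.4% relative precision per value.

def to_bfloat16(x: np.ndarray) -> np.ndarray:
    """Simulate bfloat16 by zeroing the low 16 bits of each float32."""
    bits = x.astype(np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

weights = np.array([3.14159265, -0.00012345, 123456.789], dtype=np.float32)
for w, b in zip(weights, to_bfloat16(weights)):
    print(f"fp32={w:+.6g}  bf16={b:+.6g}  rel_err={abs(w - b) / abs(w):.2e}")
```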
Equally important is our strong software ecosystem. Customers can leverage existing libraries, toolchains, and codec frameworks, accelerating time to market.
The product has been announced and is targeted for general availability in Q2 of this year.
Additional Benefits
Amelia:
What other benefits does the HiFi iQ DSP offer?
Amol:
We focused on three pillars:
1. Designed for the Future
   - Built on the LX8 extensible platform
   - Optimized for next-gen audio and AI
   - 2x performance improvements on many workloads

2. AI Configurability
   - 8x AI performance boost
   - Special AI instructions and enhanced MAC units
   - Compatibility with frameworks like TFLM, ExecuTorch, and LiteRT
   - Can operate standalone or alongside NPUs

3. Fast Time to Market
   - Auto-vectorization support
   - Extensive codec compatibility
   - Seamless AI library integration

Together, these features make HiFi iQ a true next-generation solution.
Frameworks and Tools for Voice AI
Amelia:
What frameworks and tools are needed to run the latest AI models for voice AI?
Amol:
The AI ecosystem evolves rapidly, so we support multiple approaches rather than locking into one.
We support four main flows:
1. NeuroWeave XNC (TVM-based compiler flow)
   - Handles quantization, graph optimization, and code generation
   - Supports PyTorch, TensorFlow, Caffe, ONNX, and others

2. TensorFlow Lite for Micro (TFLM)
   - Google-maintained framework
   - Optimized libraries for vectorized DSP execution

3. ExecuTorch
   - Meta’s framework for PyTorch models

4. LiteRT
   - Successor to TFLM
   - Designed for larger transformer models and quantized networks
By supporting multiple frameworks, we make it easier for customers to bring their AI models to our DSP IP.
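For a flavor of what the TFLite/LiteRT-style flow looks like from the host side, here is a minimal sketch using the TensorFlow Lite Python interpreter. The model file is a hypothetical placeholder; on a HiFi DSP the same graph would execute through the vendor’s optimized kernel libraries rather than this Python API:

```python
import numpy as np
import tensorflow as tf  # host-side stand-in; on-target, TFLM/LiteRT run as C++ runtimes

# Load a hypothetical keyword-spotting model and run one inference.
interpreter = tf.lite.Interpreter(model_path="keyword_spotter.tflite")  # placeholder file
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one frame of (already feature-extracted) audio into the model.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("Keyword scores:", scores)
```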
Off-the-Cuff Question
Amelia:
Time for your off-the-cuff question! I know you recently redid your outdoor kitchen. What fun things have you been cooking?
Amol:
I love to cook—especially steaks. My challenge was that my outdoor kitchen wasn’t usable most of the year. So I added a cover and essentially turned it into a gazebo.
Now I can barbecue, smoke meats, and even make pizza in a pizza oven year-round.
Let me know when you’re free—you should come by!
Amelia:
I love it. I will!
Closing
Amelia:
Well Amol, that’s all I have time for today. Thank you so much for joining me.
Amol:
Thank you, Amelia. Always a pleasure chatting with you. I look forward to next time.
Amelia:
And that’s all for this week’s Fish Fry! If you’d like more information about this topic, I’ve included links below the player on this week’s Fish Fry page on EEJournal.com and in the YouTube description.
Be sure to check out EEJournal on social media—we’re on Facebook, LinkedIn, BlueSky, Mastodon, and YouTube. Our YouTube channel features all kinds of great tech content, including our popular Chalk Talk webcast series and our animated series, Libby’s Lab.
Thank you for tuning in. If you know of any cool new technology—or just want to chat—send me a note at amelia@eejournal.com or post a comment on our forums for the week of February 20, 2026.
I’m Amelia Dalton, and you’ve been fried. 🔥


