
MIMO: Hardware or Software?

A while back we covered CEVA’s move to multicore for their communications-oriented XC architecture. One of the motivating elements was the complexity of requirements for features like MIMO, which uses more than one antenna at each end of the link, forming a number of channels equal to the product of the number of sending and receiving antennas. CEVA says that a software approach provides the flexibility needed for the variety of options: the options differ too much from one another to implement in hardware, so there would be too much unshared hardware, and it would be inefficient.
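For readers who want that channel arithmetic spelled out, here is a minimal sketch in Python/NumPy (purely illustrative, not anyone's actual implementation) of the standard narrowband MIMO model: the received signal is the transmitted streams multiplied by a channel matrix that holds one complex gain per transmit/receive antenna pair.

```python
import numpy as np

# Hypothetical 4x4 link: H has one complex gain per (receive, transmit) antenna pair.
n_tx, n_rx = 4, 4
rng = np.random.default_rng(0)
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

x = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)  # transmitted streams
y = H @ x                                                        # received signal (noise omitted)

print(H.size)  # 16 -- the number of antenna-pair channels the system must track
```

With four antennas on each end, that matrix has 4 × 4 = 16 entries, which is where the 16-channel figure in the next paragraph comes from.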

Sounds reasonable. But then came a completely separate announcement from Quantenna. They’re also doing MIMO, but in hardware. They can handle up to 4×4 MIMO (that is, 4 antennas sending and 4 receiving, for 16 channels). And they say it’s not reasonable to expect to meet the performance requirements without doing it in hardware.

Both companies seem to agree on the complexity of the standards they’re implementing. The thing about WiFi communication is that the environment is constantly changing, so you have to constantly re-evaluate which channels are working best and where to send things. This re-optimization is performed every 100 ms.
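Whether that loop runs on a DSP core or in dedicated hardware is exactly what the two companies disagree about, but the shape of the job looks roughly like this sketch (Python, with made-up channel numbers and a stubbed-out measurement function; none of this reflects either vendor's design):

```python
import random
import time

CHANNELS = [36, 40, 44, 48]  # hypothetical 5 GHz channel numbers

def measure_link(channels):
    # Hypothetical stand-in for PHY measurements (per-channel SNR, error rate, ...);
    # in a real system these would come from the radio, not a random number generator.
    return {ch: random.random() for ch in channels}

def reoptimize():
    metrics = measure_link(CHANNELS)
    best = max(metrics, key=metrics.get)
    # ...retune, re-steer beams, update equalizer taps for `best` here...
    return best

while True:
    reoptimize()
    time.sleep(0.1)  # re-evaluate roughly every 100 ms, per the figure above
```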

In fact, Quantenna says that the radar band can also be used if it’s unoccupied, although they claim that most boxes don’t take advantage of this and stay within the crowded non-radar portion, even though the radar portion holds the bulk of the available bandwidth.

There is also beam-forming to be done, including “blind” beam-forming, where only one end of the channel can do it. Channel stability has to be rock solid, since there’s no buffering for streaming video. Equalization has to be optimized. And at a higher layer, there’s quality-of-service (QoS) for video.
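As one concrete example of what "equalization" means here: the receiver has to undo the mixing that the channel matrix applies to the streams. A minimal zero-forcing sketch (NumPy, noiseless, and nobody's production algorithm) looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_rx = 4, 4

# Hypothetical channel estimate and transmitted QPSK symbols (illustrative only).
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_tx) / np.sqrt(2)
y = H @ x  # received mixture of all four streams; noise omitted for brevity

# Zero-forcing equalization: invert the channel estimate to separate the streams.
x_hat = np.linalg.pinv(H) @ y
print(np.allclose(x_hat, x))  # True in this noiseless toy example
```

Real receivers use more robust schemes than zero forcing and have to redo this work every time the channel estimate changes, which is part of why the compute load is contentious.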

And most of this isn’t established at design time; it’s a constant, real-time re-jiggering of parameters to keep things working as efficiently as possible. It also has to work alongside the earlier 802.11n and older standards. And Quantenna says they can handle all of this in hardware without blowing the silicon budget.

You can imagine that being able to do it in software might be quite convenient and space-efficient. You can also imagine that hardware would provide much higher performance. So which is best?

Rather than get into the middle of adjudicating this myself, I offer both sides the opportunity to state their cases in the comments below. And if any of the rest of you have something constructive to contribute to the discussion, please do.

Meanwhile, you can get more details on Quantenna’s announcement in their release.

