
MIMO: Hardware or Software?

A while back we covered CEVA’s move to multicore for their communications-oriented XC architecture. One of the motivating elements was the complexity of requirements for features like MIMO: the ability to use more than one antenna, with the number of channels given by the product of the number of sending and receiving antennas. CEVA argues that a software approach provides the flexibility needed for the variety of options; there are too many differences between the options to implement them all in hardware, since too much of the hardware would go unshared and the result would be inefficient.

Sounds reasonable. But then came a completely separate announcement from Quantenna. They’re also doing MIMO, but in hardware. They can handle up to 4×4 MIMO (that is, 4 antennas sending, 4 receiving; 16 channels). And they say it’s not realistic to expect to meet the performance requirements without doing it in hardware.
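
To make the antenna arithmetic concrete, here’s a minimal sketch in Python (purely illustrative, not from either company) showing how the channel count scales with the antenna configuration:

    # Each sending/receiving antenna pair forms one channel,
    # so the channel count is the product of the two antenna counts.
    def mimo_channels(n_tx: int, n_rx: int) -> int:
        return n_tx * n_rx

    for n_tx, n_rx in [(2, 2), (3, 3), (4, 4)]:
        print(f"{n_tx}x{n_rx} MIMO -> {mimo_channels(n_tx, n_rx)} channels")
    # 4x4 MIMO -> 16 channels, the top configuration Quantenna supports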

Both companies seem to agree on the complexity of the standards they’re implementing. The thing about this kind of WiFi communication is that the environment is constantly changing, so you have to continually re-evaluate which channels are working best and where to send things. This re-optimization happens every 100 ms.
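
For a rough feel of what that looks like on the software side, here’s a hedged sketch of a 100 ms re-optimization loop; the channel-scoring function is a hypothetical stand-in for whatever estimation a real baseband actually performs:

    import time

    REOPT_INTERVAL_S = 0.100  # re-evaluate channel conditions every 100 ms

    def estimate_channel_quality(n_tx=4, n_rx=4):
        # Hypothetical placeholder: a real design would score each TX/RX pair
        # from pilot symbols, SNR measurements, error rates, and so on.
        return {(t, r): 1.0 for t in range(n_tx) for r in range(n_rx)}

    def reoptimization_loop(iterations=10):
        for _ in range(iterations):
            scores = estimate_channel_quality()
            best_pair = max(scores, key=scores.get)
            # ...re-tune steering, equalization, and rate selection here...
            print(f"strongest pair this interval: {best_pair}")
            time.sleep(REOPT_INTERVAL_S)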

In fact, Quantenna says that the radar band can also be used if it’s unoccupied, although they claim that most boxes don’t take advantage of this and remain in the crowded non-radar portion, even though the radar portion holds the bulk of the available bandwidth.

There is also beam-forming to be done, including “blind” beam-forming, where only one end of the link can do it. Channel stability has to be rock solid, since there’s no buffering for streaming video. Equalization has to be optimized. And at a higher layer, there’s quality of service (QoS) for video.

And most of this isn’t established at design time; it’s a constant real-time re-jiggering of parameters to keep things working as efficiently as possible. It also has to work alongside the earlier 802.11n and below standards. And Quantenna says they can handle all of this in hardware without blowing the silicon budget.

You can imagine that being able to do it in software might be quite convenient and space-efficient. You can also imagine that hardware would provide much higher performance. So which is best?

Rather than get into the middle of adjudicating this myself, I offer both sides the opportunity to state their cases in the comments below. And any of the rest of you who have something constructive to contribute to the discussion, please do.

Meanwhile, you can get more details on Quantenna’s announcement in their release.
