

Marketing or Engineering?

There’s an ancient piece of marketing wisdom that says, “If you can’t fix it, feature it.” While commonly attributed to the technology industry, it has, in fact, been practiced since prehistoric times.

When the Bosporus broke open, flooding the area now known as the Black Sea in an event that might be the Great Flood survived by Noah and his menagerie, it was presumably a catastrophe of immense proportions. Until some developers took a look at the new shoreline property on the Crimea and said, “Dude, these are some killer beaches!”

When the walls of the Colosseum in Rome fell apart, did the Romans repair them? Hell no, they turned it into a historical landmark and made far more money than they would have just feeding the ne’er-do-wells-du-jour to felines. (Gladiators On Ice anyone?)

But any engineer worth his or her salt will cock a somewhat disapproving eyebrow when cashing the immense checks that result from such marketing shenanigans. There’s something slightly distasteful about profiting from sleight of hand. (OK, not that distasteful…) Rather than protesting and sending the money back, the solution is to look askance at any idea coming from marketing, immediately seizing the moral high ground. Such ideas will generally fall into two categories, depending on how you cut them. Along one axis, they can be divided into Lies and Deceptions. Along the other, they can be divided into Ones That Are Stupid But Will Make Money and Ones That Are Just Stupid.

Generally, it’s the Money/No Money axis that determines whether the idea will be implemented. However, regardless of the category, the idea will be uniformly viewed with scorn. It’s simply a matter of whether that scorn is accompanied by a profit-sharing check. (For those of you too young to know what that is, a long time ago in a valley south of San Francisco, companies made profits. And then – get this – they actually shared those profits with employees. I know… quaint…)

A corollary of this is that any idea that doesn’t immediately ring true with respect to engineering rigor will be viewed as a likely marketing ploy. And with the zeal of a hacker who has been challenged to break into the Klingons’ innermost defense network, engineers will poke and pry to figure out just where the deceit lies.

An example of this arose with the mention by Mocana that “zero-threading” was a feature of some of their routines. Let’s face it: vintage code uses one thread; it’s just that, before the concept of threads was developed, it wasn’t thought of as a thread; it was just a program. Over time we learned to manage more than one thread. Zero threads, on the other hand, would sound like zero code. Which doesn’t sound particularly useful. Except, perhaps, to a marketing guy.

So doing what any normal person would do when confronted with a novel concept, I googled it. And the primary relevant hit took me to a forum* where someone asked the question, “What the heck is zero-threading anyway?”

And the answer that came back was the ultimate put-down of a product feature: it’s just something made up by the marketing guys to sell product.


But then again, being the intrepid journalist, my blood flashed hot at the prospect of a major industry controversy to be exposed, raising me to the exalted levels of a Woodward or Bernstein. Names would be named, careers would be killed, captains of industry would be led away in chains, red faces bowed or covered in towels. I would never work again, I would die in poverty, but I’d have had my fifteen minutes of fame, and all without putting my actual life at risk by doing something incredibly inane on a YouTube video.

With a damning accusation in hand, I gleefully went back to Mocana, pointing out that someone had suggested they were simply foisting a non-existent marketing fabrication on a trusting [cough] and unsuspecting engineering audience. I sat back, rubbing my hands in glee, previsualizing the Pulitzer Prize awaiting me when the impact of this discovery was acknowledged and a new Pulitzer was devised for Best Exposure Of Yet Another Marketing Fraud.

The violins came screeching to a halt, and I deferred clearing space on the mantle for the award statue, as I received a somewhat more cogent explanation of zero-threading than would have been expected, given the immediate disdainful dismissal it had received on that forum.

Here’s the deal: it’s not that the code uses zero threads; it’s that it creates zero threads. It borrows existing threads. And what does that mean, exactly?

Well, ordinarily, in a typical non-zero-threaded approach, an interrupt requesting execution of some routine will cause a new thread to be spawned by the operating system; the code will be executed in this thread.
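The conventional model can be sketched in a few lines. This is a toy illustration in Python, not embedded code: the `on_interrupt` and `handler` names are invented for the example, and Python’s `threading` module stands in for the operating system’s thread-spawning machinery.

```python
import threading

results = []
lock = threading.Lock()

def handler(request_id):
    # Work performed inside the freshly spawned thread.
    with lock:
        results.append(request_id * 2)

def on_interrupt(request_id):
    # Conventional (non-zero-threaded) model: each "interrupt" spawns a
    # brand-new thread, and the OS takes over scheduling it from here.
    t = threading.Thread(target=handler, args=(request_id,))
    t.start()
    return t

threads = [on_interrupt(i) for i in range(3)]
for t in threads:
    t.join()
```

The point is the cost hidden behind `Thread(...)`: every spawn is a new object for the OS to schedule, swap, and eventually tear down.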

Armed with a new thread, the operating system takes ownership of managing the overall system, including that thread, swapping it in and out as it deems appropriate for the smooth and timely execution of all the code. It can schedule the thread on whichever core it decides is suitable and automatically handles the context swap required to suspend one thread and resume another.

That context swap might mean a wholesale transfer of much state information from the processor into memory for later retrieval, or it might mean the transfer of less information if the core can be hyperthreaded and maintain several contexts at once. Regardless, the operating system handles all of that, and the coder doesn’t have to. He or she simply starts a thread and watches in amazement as it does its thing. Just ask any programmer; writing threaded programs is really simple. [cough]

But there’s trouble in Paradise. First of all, the more threads an operating system has to manage, the more it can get bogged down. This might not be apparent in a large system with lots of compute power and lots of RAM and lots of disk space, but in a small embedded system it can start to be an issue.

Things get even dicier when you’re in a really compact embedded system that is running some minimal operating system or even on bare metal. Then you have no off-the-shelf mechanism for handling the thread management.

Here’s where zero-threading comes in. The writers of these zero-threaded programs take ownership of “thread” management themselves, rather than delegating it to the operating system. They “borrow” an existing thread by essentially inserting themselves, explicitly swapping out state information – and potentially only that state information they know they will impact, which might be less than a full swap – and starting their own code. They then replace their divots, so to speak, when either suspending or finishing execution by putting that saved state information back in place.
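The save-run-restore discipline can be modeled roughly as follows. This is a deliberately simplified Python sketch: the `state` dictionary is a stand-in for the processor state the borrowed thread’s real owner cares about, and all the names are invented for illustration.

```python
# Hypothetical register file belonging to whoever owns the borrowed thread.
state = {"r0": 10, "r1": 20, "flags": 0}

def run_on_borrowed_thread(state, arg):
    # Save only the slots we know we'll clobber -- a partial context swap,
    # which can be cheaper than the OS's wholesale save of everything.
    saved = {k: state[k] for k in ("r0", "flags")}
    try:
        state["r0"] = arg          # our code scribbles on the borrowed state
        state["flags"] = 1
        return state["r0"] + state["r1"]
    finally:
        # Replace the divots: put back exactly what we saved.
        state.update(saved)

result = run_on_borrowed_thread(state, 7)
```

After the call, `state` is bit-for-bit what it was before the borrow; the owner of the thread never knows anyone was there.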

When writing a program this way, you have to do everything yourself, so you have to be completely certain that you’ve saved and restored all state correctly. And some of the more standard operating system mechanisms, such as those used to protect against data races between threads, won’t be available. It’s much more work. The anticipated benefit is that it’s less work for the operating system and overall quicker and more efficient when executing.

You might envision the swapping playing out in a couple of scenarios. One is that an interrupt happens and launches execution of a routine in the processor. The thread is borrowed, the code executes (either in one go or in increments punctuated by some other code), and then the thread is returned.
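The “in increments punctuated by some other code” variant looks, conceptually, like cooperative yielding. As a loose Python analogy (the generator’s `yield` stands in for handing the borrowed thread back between slices; the names are made up for the sketch):

```python
def borrowed_increments(chunks, out):
    # The routine runs in slices; at each yield it hands the thread
    # back to its owner until the next borrow.
    total = 0
    for c in chunks:
        total += c
        yield                      # thread returned between increments
    out.append(total)              # final increment finishes the job

out = []
routine = borrowed_increments([1, 2, 3], out)
for _ in routine:
    pass  # here the thread's real owner (or other code) gets to run
```

Each trip around the `for` loop is one borrow-execute-return cycle; the routine’s own stack frame carries its progress between slices.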

Alternatively, if the routine to be executed actually has some kind of accelerator or coprocessor associated with it, then the initial interrupt could borrow the thread, use it to launch the routine on the accelerator, and then return the thread while the accelerator does its thing. The routine is then being run on a non-blocking basis. When the accelerator is done, it can interrupt again if necessary, at which point the thread is borrowed to handle the result of whatever was done before handing the thread back and sitting back in satisfaction of a job well done on the back of someone else’s hard-earned thread. (You’ll notice the marketing skill here in calling it “zero-threading” as opposed to “thread parasitization.”)
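The accelerator scenario amounts to two brief borrows bracketing a stretch of non-blocking work. Again a toy Python model, with a background thread simulating the coprocessor and a callback simulating its completion interrupt; every name here is invented for the illustration.

```python
import threading

result_box = {}
done = threading.Event()

def accelerator_start(data, on_complete):
    # Simulated coprocessor: the launch call returns immediately (first
    # borrow ends here), and the "device" runs on its own, raising a
    # completion "interrupt" via the callback when finished.
    def device():
        on_complete(sum(data))
    threading.Thread(target=device, daemon=True).start()

def completion_isr(value):
    # Second, brief borrow: record the result and hand the thread back.
    result_box["value"] = value
    done.set()

accelerator_start([1, 2, 3], completion_isr)
done.wait(timeout=5)
```

The borrowed thread is tied up only for the launch and for the completion handling; everything in between runs on someone else’s silicon.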

So, as it turns out, there’s no big exposé here. No fifteen minutes of fame, no fortune. Definitely no Pulitzer. We’ll let marketing guys fight about whether it’s an important feature if they want, but it’s clear that zero-threading is not some concept constructed out of whole cloth solely for the adornment of a datasheet. An engineer actually has to do work to implement it. Which should give it some legitimacy in the eyes of fellow engineers.

[Full disclosure: the author is himself a marketing puke and therefore is at full liberty to “take the piss,” to borrow a phrase from o’er the pond]

*The forum discussion no longer seems to show up in the first page of search returns or else I’d point to it… The question is paraphrased above… But I swear it was really there… I’d stake my marketing credibility on it… oh, wait…
