posted by Bryon Moyer
The Multicore Association has released the latest of its multicore management APIs. The first such API they released was MCAPI, which allows data to be communicated throughout a potentially complex heterogeneous embedded multicore environment. The next was MRAPI, which deals with the management of resources, allowing virtual extension of scope beyond what an OS would provide in a single process.
This time it’s MTAPI, for managing tasks. Now… you may ask, “Why do we need yet another task-management capability when we have pthreads and OpenMP and MPI?” There are a couple of reasons:
- Pthreads and OpenMP only work within a given process and/or assume a level of homogeneity. Heterogeneous AMP systems can’t use them. In other words, you can’t invoke a task on some different-ISA core that’s managed by a completely different instance of an OS (or no OS at all).
- MPI is far too heavyweight for embedded applications that have thousands of tasks or more to manage.
While you might think that extending something like pthreads across process boundaries, courtesy of a quiet little runtime, would look straightforward to the programmer, MTAPI has, in fact, introduced some abstract notions that I had to struggle with a bit before understanding them. There’s no overall high-level description of the relationships with examples, so I pieced it together by reading the various bits of the standard and deciding that I thought I knew what was going on. (Danger!)
So here’s my take on it. We’re used to simply invoking a task (typically a thread). But on complex systems, there may be any number of different “candidates” for implementing that task.
- You may have multiple cores, each of which has a function that can implement the task.
- These cores may or may not be the same – one may be a CPU, the other may be a DSP.
- You may have dedicated hardware for implementing the function.
- You may have a mix of cores and dedicated hardware accelerators, any of which could be chosen for a given execution.
So they’ve included an extra layer of abstraction, yielding three different notions:
- An action is the “potential” for executing a task. Let’s say CRC generation is something you need to have done, and you have one CRC accelerator and four different cores, each of which has a “GenerateCRC” function. The accelerator and GenerateCRC functions are all actions. The software versions are registered with their local MTAPI runtimes; the hardware versions are built into the system. Each of these is a candidate with the potential for executing a specific run with a specific set of data.
- A job is an abstraction of all of the different available actions for a given thing that needs to be done. So you might have one “CRC_job” representing the five different ways of generating a CRC. This supports the use of queues or load balancing. When you actually need to get a CRC, you don’t call one of the specific action functions/hardware; you call the job, and the system decides which action gets chosen to run the specific instance.
- A task is a specific instance or execution of… something that needs to be done. (It’s really hard to describe this stuff casually without using words like “task” and “job,” which have specific meanings in this context… it’s why your head can end up spinning.) The task is the specific call you make when actually running; it makes reference to a job and, via the job, gets assigned to one of the actions tied to the job for execution (say, a software implementation on one of the cores). It can be cancelled while running; it can also be run as blocking (non-blocking is assumed as the typical usage). It can also be “detached,” meaning that it “floats free” and is no longer accessible by the calling code, in which case it can’t be cancelled or configured as blocking – it strikes me as similar to a terminal thread. Tasks can also be grouped, with an entire group acting as a blocking mechanism.
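As a rough mental model, the action/job/task relationship can be sketched in a few lines of Python. To be clear, this is not the actual MTAPI interface — that’s a C API with opaque handles and calls along the lines of mtapi_action_create(), mtapi_task_start(), and mtapi_task_wait() — and every name in this sketch is invented for illustration:

```python
# Toy model of MTAPI's action/job/task abstraction. Illustrative only:
# the real MTAPI is a C API; these class names are invented here.

class Action:
    """One concrete implementer: a registered software function or a
    stand-in for a hardware accelerator."""
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

class Job:
    """Abstracts all of the actions that can do one kind of work."""
    def __init__(self, name):
        self.name = name
        self.actions = []
    def attach(self, action):
        self.actions.append(action)

class Task:
    """One specific execution. It references a job; the runtime (not the
    caller) picks which of the job's actions services it."""
    def __init__(self, job, data):
        # Trivial "scheduler": rotate through the job's actions.
        self.action = job.actions.pop(0)
        job.actions.append(self.action)
        self.data = data
        self.result = None
    def wait(self):
        # Blocking completion, in the spirit of mtapi_task_wait().
        self.result = self.action.fn(self.data)
        return self.result

# Five candidates for CRC generation: one "accelerator" plus four cores.
def crc8(data):
    """Simplistic checksum stand-in -- not a real CRC-8."""
    acc = 0
    for b in data:
        acc = (acc * 31 + b) & 0xFF
    return acc

crc_job = Job("CRC_job")
crc_job.attach(Action("crc_accelerator", crc8))
for core in range(4):
    crc_job.attach(Action(f"GenerateCRC_core{core}", crc8))

task = Task(crc_job, b"hello")  # caller names the job, not an action
print(task.action.name, hex(task.wait()))
```

The point of the indirection is that the caller asks for the job (“get me a CRC”), while the runtime decides which registered action executes each individual task.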
The other aspects of MTAPI struck me as more accessible. It covers how the details of this are handled, as well as such aspects as whether or not memory is shared, parameter- and result-passing, status checking, and the like.
posted by Bryon Moyer
We hear stories of a not-so-distant future when we can wave our tricorder-like devices around and detect all kinds of substances that might be in the air. One of the ways sensors like this can work is by having a resonating body: when a substance adsorbs on the surface, it changes the mass, thereby changing the resonance frequency.
The problem is, however, that temperature also affects the frequency, and it’s actually pretty hard to calibrate that out of the system. Using a reference resonator or a complex software algorithm is possible, but, according to a team from the Universities of Cambridge, Sheffield, Bolton, and Manchester in the UK and Kyung Hee University in Korea, those approaches make things more complex and/or costly.
They’ve come up with a way of teasing the loading and temperature effects apart. It involves a two-layer structure: 2 µm of ZnO over 2 µm of SiO2. When they get this vibrating, they see two modes:
- One with a fundamental frequency at 754 MHz and harmonics at 2.26 and 3.77 GHz
- One with a fundamental frequency at 1.44 GHz, and the next harmonic at 4.34 GHz
The first mode comes from the resonance of the combined ZnO/SiO2 structure; its half-wavelength relates to the combined 4-µm thickness of the overall structure. The second mode results from the ZnO layer by itself, with a half-wavelength driven by the 2-µm thickness of this layer, although it’s also affected by the SiO2 load.
Both ZnO and SiO2 have positive coefficients of thermal expansion (CTE), so both layers get thicker as temperature goes up. But the longitudinal wave velocity goes up for SiO2 and down for ZnO. As a result, the frequencies move in opposite directions as temperature changes: roughly 79.5 ppm/K for SiO2 and -7 ppm/K for ZnO.
Given those as base numbers, it now becomes possible to deconvolve the temperature and loading effects of whatever it is you’re trying to sense.
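To first order, each mode’s fractional frequency shift is a linear combination of the temperature change and the added mass, so two measured shifts give a 2×2 linear system that can be solved directly. In the sketch below, the temperature coefficients are the rough numbers quoted above, but the mass-loading sensitivities (and their units) are hypothetical, chosen purely to make the arithmetic concrete:

```python
# Separating temperature from mass loading via two resonance modes.
# First-order model: df_i/f_i = tcf_i * dT + sm_i * dm
# tcf values are the article's rough numbers (ppm/K); the sm values
# are hypothetical mass sensitivities invented for this illustration.

def deconvolve(shift1_ppm, shift2_ppm,
               tcf1=79.5, tcf2=-7.0,    # ppm/K, from the article
               sm1=-2.0, sm2=-5.0):     # ppm per unit load (hypothetical)
    # Solve [tcf1 sm1; tcf2 sm2] [dT; dm] = [shift1; shift2]
    # by Cramer's rule.
    det = tcf1 * sm2 - tcf2 * sm1
    dT = (shift1_ppm * sm2 - shift2_ppm * sm1) / det
    dm = (tcf1 * shift2_ppm - tcf2 * shift1_ppm) / det
    return dT, dm

# Round trip: synthesize the shifts for a 2 K rise plus 3 units of load,
# then recover both from the two measured frequencies.
dT, dm = deconvolve(79.5 * 2 + (-2.0) * 3, -7.0 * 2 + (-5.0) * 3)
print(dT, dm)  # recovers 2.0 K and 3.0 load units
```

Because the two modes have temperature coefficients of opposite sign, the determinant of that matrix is comfortably far from zero, which is exactly what makes the two effects separable.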
This was, of course, a university project, although it looks like they will be open to commercializing it. You can get more details in the full paper, but it’s behind a paywall (actually, several; you can Google “Dual-mode thin film bulk acoustic wave resonators for parallel sensing of temperature and mass loading” and pick your favorite one).
posted by Bryon Moyer
All of the major EDA companies have had IP. Synopsys started with DesignWare before IP was a real concept; Mentor had IP associated with consulting for several years; Cadence has made a couple of acquisitions – notably memory – to bolster its internal IP efforts.
But the early products of these groups were typically lower-level IP – particularly I/O protocols. Not having to plough through hundreds of pages of a complex protocol spec was an attractive thing – assuming you were willing to trust your vendor to get it right or you had some way of verifying it without having to learn it yourself. And assuming you were willing to pay for IP (not a given in the early days).
Meanwhile, the increasingly sophisticated IP coming from IP companies requires an accompanying tool to help configure the IP and integrate it with the rest of the design.
So we’ve had tools companies making IP; IP companies making tools.
And the IP part of the EDA play has become far more visible, almost holding its own against the tools themselves. And more and more, a robust IP portfolio is seen as including a processor. Intel/AMD and ARM obviously have their own well-established franchises (of which only ARM is an IP play), but there have been a few other notable players. MIPS was a recognizable contender, even if it never managed to outpace its archrival ARM; it was recently gobbled up by Imagination Technologies.
There has remained another processor company still duking it out with its own unique story. Tensilica promised a configurable processor. In essence, you told the tools what you wanted to do, and, in the end, you got a processor tailored for your application. And the software tools to compile to it.
Well, Tensilica is now betrothed to Cadence. My colleague Jim Turley noted the parallel to Synopsys’s purchase of ARC. Didn’t notice that one? Yeah, that’s because Synopsys bought Virage, which had bought ARC*. ARC also boasted configurability, and it had a focus on the audio business – a space in which Tensilica has also participated.
So we have the continued agglomeration of IP and EDA together. Dominated by ARM, followed by two of the Big Three EDA guys. And Imagination.
Now we have more tools guys making IP; fewer IP guys making tools.
Some discussion today with Cadence reveals a bit more nuance. There’s debate as to whether a Tensilica core really competes with an ARM core. The customizability of the Tensilica core for very specific vertical applications means it gets more deeply embedded, often running with no OS at all. In fact, it is marketed as a data-plane engine, suggesting the need for an accompanying control-plane host. Some even say it’s an alternative to RTL, not an alternative to ARM.
Meanwhile, Cadence is promoting a theme of “next-generation IP” that, at first blush, sounds just like what IP has always been. The concept is that you can’t really count on shrink-wrapped IP that’s reusable by all comers. Each customer is going to want specific changes and adaptations that no one else may want (or that they want to keep to themselves).
This has always been the case – to the point where the on-the-shelf IP has historically been a teaser to engage in a consulting contract to give the customer what they actually want. So why is this suddenly a “next generation” thing?
The difference is that this customization is intended to be automated. In other words, the IP base is built with numerous knobs, and a tool accomplishes the customization per the knobs that the customer wants tuned. In fact, it intentionally starts to look more like a tool – a circuit generator – than a piece of circuitry. This has obviously been happening already here and there; it was central to what Tensilica was doing.
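A minimal sketch of that idea, with all names invented for illustration: the deliverable is no longer a fixed piece of RTL but a program that emits RTL according to whichever knob settings the customer picks:

```python
# Toy illustration of "IP as a generator": ship a program that emits
# RTL tuned to the customer's knobs, rather than fixed circuitry.
# The module name, knobs, and skeletal Verilog are invented for this sketch.

def generate_fifo(width=8, depth=16, name="cust_fifo"):
    """Emit a (very skeletal) Verilog FIFO customized by two knobs."""
    addr_bits = max(1, (depth - 1).bit_length())  # pointer width
    return f"""\
module {name} #(parameter WIDTH={width}, DEPTH={depth}) (
  input  wire clk, rst_n, wr_en, rd_en,
  input  wire [WIDTH-1:0] din,
  output wire [WIDTH-1:0] dout,
  output wire full, empty
);
  reg [WIDTH-1:0] mem [0:DEPTH-1];
  reg [{addr_bits}:0] wr_ptr, rd_ptr;  // one extra bit for full/empty
  // ... pointer and flag logic elided ...
endmodule
"""

print(generate_fifo(width=32, depth=64))
```

Each customer turns the knobs they care about and gets RTL no one else has — which is why the result behaves more like a tool run than a library checkout.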
So we’ll have a tool guy making IP that looks like a tool.
You can see more about the merger in their release…
*And ARC bought Teja Technologies, an event that led This Then-Intrepid-Marketing-Exec to explore an opportunity to be This Intrepid Reporter…