
Lightweight Embedded Multicore Task Management

The Multicore Association has released the latest of its multicore management APIs. The first such API was MCAPI, which allows data to be communicated throughout a potentially complex heterogeneous embedded multicore environment. The next was MRAPI, which deals with the management of resources, effectively extending their scope beyond what an OS would provide within a single process.

This time it’s MTAPI, for managing tasks. Now… you may ask, “Why do we need yet another task-management capability when we have pthreads and OpenMP and MPI?” There are a couple of reasons:

  • Pthreads and OpenMP only work within a given process and/or assume a level of homogeneity. Heterogeneous AMP systems can’t use them. In other words, you can’t invoke a task on some different-ISA core that’s managed by a completely different instance of an OS (or no OS at all).
  • MPI is far too heavyweight for embedded applications that may have thousands of tasks or more to manage.

While you might think that extending something like pthreads across process boundaries, courtesy of a quiet little runtime, would be straightforward for the programmer, MTAPI has, in fact, introduced some abstract notions that I had to struggle with a bit to understand. There’s no overall high-level description of the relationships with examples, so I pieced it together by reading the various bits of the standard and deciding that I thought I knew what was going on. (Danger!)

So here’s my take on it. We’re used to simply invoking a task (typically a thread). But on complex systems, there may be any number of different “candidates” for implementing that task.

  • You may have multiple cores, each of which has a function that can implement the task.
  • These cores may or may not be the same – one may be a CPU, the other may be a DSP.
  • You may have dedicated hardware for implementing the function.
  • You may have a mix of cores and dedicated hardware accelerators, any of which could be chosen for a given execution.

So they’ve included an extra layer of abstraction, yielding three different notions:

  • An action is the “potential” for executing a task. Let’s say CRC generation is something you need to have done, and you have one CRC accelerator and four different cores, each of which has a “GenerateCRC” function. The accelerator and GenerateCRC functions are all actions. The software versions are registered with their local MTAPI runtimes; the hardware versions are built into the system. Each of these is a candidate with the potential for executing a specific run with a specific set of data.
  • A job is an abstraction of all of the different available actions for a given thing that needs to be done. So you might have one “CRC_job” representing the five different ways of generating a CRC. This supports the use of queues or load balancing. When you actually need to get a CRC, you don’t call one of the specific action functions/hardware; you call the job, and the system decides which action gets chosen to run the specific instance.
  • A task is a specific instance or execution of… something that needs to be done. (It’s really hard to describe this stuff casually without using words like “task” and “job,” which have specific meanings in this context… it’s why your head can end up spinning.) The task is the specific call you make when actually running; it references a job and, via the job, gets assigned to one of the actions tied to that job for execution (say, a software implementation on one of the cores). A task can be cancelled while running; it can also be run as blocking (non-blocking is assumed as the typical usage). It can also be “detached,” meaning that it “floats free” and is no longer accessible by the calling code, in which case it can’t be cancelled or configured as blocking – it strikes me as similar to a detached pthread. Tasks can also be grouped, with an entire group acting as a blocking mechanism. (The sketch after this list shows how the three notions fit together.)
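To make the three notions concrete, here’s roughly what the flow looks like in the C API as I read it from the spec. This is a minimal sketch, not a definitive implementation: the job/domain/node IDs and the CRC routine are made-up stand-ins of mine, status checks are omitted for brevity, and exact constant names may vary by implementation.

```c
/* A minimal sketch of the action/job/task flow, based on my reading of
 * the MTAPI spec. IDs and the CRC routine are illustrative only, and
 * status checks are omitted for brevity. */
#include <mtapi.h>

#define CRC_JOB_ID  1  /* hypothetical system-wide job ID */
#define THIS_DOMAIN 1
#define THIS_NODE   1

/* An action function: one node-local software candidate for CRC work. */
static void generate_crc_action(
    const void* args, mtapi_size_t args_size,
    void* result_buffer, mtapi_size_t result_size,
    const void* node_local_data, mtapi_size_t node_local_data_size,
    mtapi_task_context_t* context) {
  /* ... compute the CRC of args and write it into result_buffer ... */
}

int main(void) {
  mtapi_status_t status;
  mtapi_info_t info;

  /* MTAPI_NULL requests default attributes throughout. */
  mtapi_initialize(THIS_DOMAIN, THIS_NODE, MTAPI_NULL, &info, &status);

  /* Register the local software action under the CRC job ID; other
   * nodes (or hardware) may register their own actions for this job. */
  mtapi_action_hndl_t action = mtapi_action_create(
      CRC_JOB_ID, generate_crc_action, MTAPI_NULL, 0, MTAPI_NULL, &status);

  /* Look up the job: the abstraction over all registered actions. */
  mtapi_job_hndl_t job = mtapi_job_get(CRC_JOB_ID, THIS_DOMAIN, &status);

  /* Start a task against the job; the runtime picks an action. */
  char data[64] = "payload to checksum";
  unsigned int crc = 0;
  mtapi_task_hndl_t task = mtapi_task_start(
      MTAPI_TASK_ID_NONE, job, data, sizeof(data), &crc, sizeof(crc),
      MTAPI_NULL, MTAPI_GROUP_NONE, &status);

  /* Tasks run non-blocking by default; wait here for completion. */
  mtapi_task_wait(task, MTAPI_INFINITE, &status);

  mtapi_action_delete(action, MTAPI_INFINITE, &status);
  mtapi_finalize(&status);
  return 0;
}
```

The key point is that the caller never names an action: it starts a task against the job, and the runtime picks from among the registered candidates – software or hardware.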

The other aspects of MTAPI struck me as more accessible. The spec covers how the mechanics of all this are handled, as well as such aspects as whether or not memory is shared, parameter and result passing, status checking, and the like.
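As one example, the grouping mechanism mentioned above lets a batch of tasks act as a single blocking unit. A rough fragment, under the same made-up assumptions as the earlier sketch (the job handle, buffer sizes, and counts are mine), might look like this:

```c
/* Sketch (same assumptions as above): start several CRC tasks attached
 * to a group, then block once on the whole group. */
static void crc_batch(mtapi_job_hndl_t job,
                      char buffers[4][64], unsigned int crcs[4]) {
  mtapi_status_t status;

  /* MTAPI_GROUP_ID_NONE lets the runtime assign an ID; MTAPI_NULL
   * again requests default attributes. */
  mtapi_group_hndl_t group =
      mtapi_group_create(MTAPI_GROUP_ID_NONE, MTAPI_NULL, &status);

  for (int i = 0; i < 4; i++) {
    /* Attach each task to the group instead of MTAPI_GROUP_NONE. */
    mtapi_task_start(MTAPI_TASK_ID_NONE, job, buffers[i], 64,
                     &crcs[i], sizeof(crcs[i]), MTAPI_NULL, group, &status);
  }

  /* One blocking call waits for every task in the group. */
  mtapi_group_wait_all(group, MTAPI_INFINITE, &status);
}
```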

You can get more info from their release, and the full API (as well as a “nutshell” document) is now downloadable from the Multicore Association website.

