
Lightweight Embedded Multicore Task Management

The Multicore Association has released the latest of its multicore management APIs. The first such API they released was MCAPI, which allows data to be communicated throughout a potentially complex heterogeneous embedded multicore environment. The next was MRAPI, which deals with the management of resources, virtually extending their scope beyond what an OS would provide within a single process.

This time it’s MTAPI, for managing tasks. Now… you may ask, “Why do we need yet another task-management capability when we have pthreads and OpenMP and MPI?” There are a couple of reasons:

  • Pthreads and OpenMP only work within a given process and/or assume a level of homogeneity. Heterogeneous AMP systems can’t use them. In other words, you can’t invoke a task on some different-ISA core that’s managed by a completely different instance of an OS (or no OS at all).
  • MPI is far too heavyweight for embedded applications having thousands or more tasks to manage.

While you might think that extending something like pthreads across process boundaries, courtesy of a quiet little runtime, would be straightforward for the programmer, MTAPI has in fact introduced some abstract notions that I had to struggle with a bit to understand. There's no overall high-level description of the relationships with examples, so I sort of pieced it together by reading the various bits of the standard and deciding that I thought I knew what was going on. (Danger!)

So here’s my take on it. We’re used to simply invoking a task (typically a thread). But on complex systems, there may be any number of different “candidates” for implementing that task.

  • You may have multiple cores, each of which has a function that can implement the task.
  • These cores may or may not be the same – one may be a CPU, the other may be a DSP.
  • You may have dedicated hardware for implementing the function.
  • You may have a mix of cores and dedicated hardware accelerators, any of which could be chosen for a given execution.

So they’ve included an extra layer of abstraction, yielding three different notions:

  • An action is the “potential” for executing a task. Let’s say CRC generation is something you need to have done, and you have one CRC accelerator and four different cores, each of which has a “GenerateCRC” function. The accelerator and GenerateCRC functions are all actions. The software versions are registered with their local MTAPI runtimes; the hardware versions are built into the system. Each of these is a candidate with the potential for executing a specific run with a specific set of data.
  • A job is an abstraction of all of the different available actions for a given thing that needs to be done. So you might have one “CRC_job” representing the five different ways of generating a CRC. This supports the use of queues or load balancing. When you actually need to get a CRC, you don’t call one of the specific action functions/hardware; you call the job, and the system decides which action gets chosen to run the specific instance.
  • A task is a specific instance or execution of… something that needs to be done. (It’s really hard to describe this stuff casually without using words like “task” and “job,” which have specific meanings in this context… it’s why your head can end up spinning.) The task is the specific call you make when actually running; it makes reference to a job and, via the job, gets assigned to one of the actions tied to the job for execution (say, a software implementation on one of the cores).  It can be cancelled while running; it can also be run as blocking (non-blocking is assumed as the typical usage). It can also be “detached,” meaning that it “floats free” and is no longer accessible by the calling code, in which case it can’t be cancelled or configured as blocking – it strikes me as similar to a terminal thread. Tasks can also be grouped, with an entire group acting as a blocking mechanism.
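
To make that concrete, here's a minimal sketch of how the action/job/task flow might look in the C API, using the CRC scenario above. It's loosely based on my reading of the spec and reference-implementation examples, so treat the exact constants and argument order as my interpretation rather than gospel; the job ID, domain/node IDs, and the CRC payload are illustrative names of my own.

/* Minimal sketch of the MTAPI action/job/task flow (my reading of the C API;
 * job ID, domain/node IDs, and the CRC payload are illustrative assumptions). */
#include <mtapi.h>
#include <stdint.h>
#include <string.h>

#define CRC_DOMAIN  1    /* assumed domain ID */
#define CRC_NODE    1    /* assumed node ID */
#define CRC_JOB_ID  42   /* assumed job ID shared by every node offering a CRC action */

/* One software "action": a local candidate implementation of CRC generation,
 * registered with this node's MTAPI runtime. A hardware accelerator elsewhere
 * would be another action behind the same job ID. */
static void generate_crc_action(
    const void* args, mtapi_size_t args_size,
    void* result_buffer, mtapi_size_t result_buffer_size,
    const void* node_local_data, mtapi_size_t node_local_data_size,
    mtapi_task_context_t* context)
{
  uint32_t crc = 0; /* ... compute the CRC over args here ... */
  if (result_buffer_size >= sizeof(crc))
    memcpy(result_buffer, &crc, sizeof(crc));
}

int main(void)
{
  mtapi_status_t status;
  mtapi_info_t info;

  mtapi_initialize(CRC_DOMAIN, CRC_NODE,
                   MTAPI_DEFAULT_NODE_ATTRIBUTES, &info, &status);

  /* Register the local software action under the CRC job ID. */
  mtapi_action_hndl_t action = mtapi_action_create(
      CRC_JOB_ID, generate_crc_action, MTAPI_NULL, 0,
      MTAPI_DEFAULT_ACTION_ATTRIBUTES, &status);

  /* The caller never names an action; it asks for the job... */
  mtapi_job_hndl_t job = mtapi_job_get(CRC_JOB_ID, CRC_DOMAIN, &status);

  /* ...and a task is one concrete execution of that job, bound by the runtime
   * to whichever of the job's actions it chooses. */
  char data[64] = "data to protect";
  uint32_t crc_result = 0;
  mtapi_task_hndl_t task = mtapi_task_start(
      MTAPI_TASK_ID_NONE, job,
      data, sizeof(data),
      &crc_result, sizeof(crc_result),
      MTAPI_DEFAULT_TASK_ATTRIBUTES, MTAPI_GROUP_NONE, &status);

  /* Non-blocking by default, so wait explicitly for the result. */
  mtapi_task_wait(task, MTAPI_INFINITE, &status);

  mtapi_action_delete(action, MTAPI_INFINITE, &status);
  mtapi_finalize(&status);
  return 0;
}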

The other aspects of MTAPI struck me as more accessible. The spec covers how the details of all this are handled, as well as such aspects as whether or not memory is shared, parameter and result passing, status checking, and the like.
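
As one example of those mechanics, here's a rough sketch of firing off a batch of tasks against the same job, using a group as the blocking point and checking status along the way. It assumes a job handle and buffers set up as in the earlier sketch, and again the constant names reflect my reading of the API rather than a definitive usage.

/* Sketch of group-based blocking and status checking (my reading of the API;
 * crc_job and the buffers are assumed to come from a setup like the one above). */
#include <mtapi.h>
#include <stdint.h>
#include <stdio.h>

void run_crc_batch(mtapi_job_hndl_t crc_job,
                   void* blocks[], mtapi_size_t sizes[],
                   uint32_t results[], int count)
{
  mtapi_status_t status;

  /* One group serves as the blocking mechanism for the whole batch;
   * MTAPI_NULL requests default group attributes. */
  mtapi_group_hndl_t group = mtapi_group_create(
      MTAPI_GROUP_ID_NONE, MTAPI_NULL, &status);

  for (int i = 0; i < count; i++) {
    mtapi_task_start(MTAPI_TASK_ID_NONE, crc_job,
                     blocks[i], sizes[i],
                     &results[i], sizeof(results[i]),
                     MTAPI_DEFAULT_TASK_ATTRIBUTES, group, &status);
    if (status != MTAPI_SUCCESS)
      fprintf(stderr, "task %d failed to start: %d\n", i, (int)status);
  }

  /* Block until every task attached to the group has completed. */
  mtapi_group_wait_all(group, MTAPI_INFINITE, &status);
  if (status != MTAPI_SUCCESS)
    fprintf(stderr, "group wait reported status %d\n", (int)status);
}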

You can get more info from their release, and the full API (as well as a “nutshell” document) is now downloadable from the Multicore Association website.
