
Making the Line Move Faster

No one likes standing in line, but if you’re going to be doing any serious parallel processing, you’ll run into many queues as a way for threads or processes to send messages to each other. Usually they’re implemented in software, which adds a level of overhead to the programs, in particular when putting things on or taking them off the queue.

For instance, when putting a new item into the queue, you have to check whether there is room, store the item, and update the head index afterwards, and, by the time all is said and done, you’ve chewed up 10 instructions. In addition, the queue information may or may not be in the cache, and you have to ensure exclusive access where it matters, and, well, it can get slow enough that you want to limit how many things work in parallel and how much they communicate. If the program is split up too finely, then too much time is lost to the not-so-instant messaging between processes or threads.
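To make that overhead concrete, here’s a minimal sketch (in C) of a plain software enqueue for a single-producer/single-consumer ring buffer. The names, the int payload, and the fixed power-of-two capacity are my own illustration, not anything from the paper, and a production version would also need memory barriers.

    #include <stdbool.h>
    #include <stddef.h>

    #define QUEUE_CAPACITY 256            /* illustrative power-of-two capacity */

    typedef struct {
        volatile size_t read_idx;         /* consumer's next slot to read  */
        volatile size_t write_idx;        /* producer's next slot to write */
        int buf[QUEUE_CAPACITY];
    } spsc_queue;

    /* Returns false if the queue is full. */
    bool spsc_enqueue(spsc_queue *q, int item)
    {
        size_t write = q->write_idx;
        size_t next  = (write + 1) & (QUEUE_CAPACITY - 1);

        if (next == q->read_idx)          /* check for room (may miss in cache) */
            return false;
        q->buf[write] = item;             /* the actual store of the payload    */
        q->write_idx  = next;             /* update and publish the index       */
        return true;
    }

Between the full-queue check, the store, and the index update, each touching data that may or may not be in the cache, the instruction count adds up quickly.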

In order to alleviate this, hardware queues have been proposed. But these intrude on the micro-architecture and require custom interconnect. Worse yet, they’re not well handled by operating systems, especially when it comes to context switches. And architects have the unenviable job of deciding how big to make them and how many to provide: you know they’ll never get that right in everyone’s eyes.

In one of those random encounters with something interesting, I ran across a recent report by Lee et al. from North Carolina State University that takes a middle road for single-producer/single-consumer queues: hardware-accelerated software queues. They set out four primary criteria they wanted to meet:

  1. The frequent enqueue and dequeue tasks have to be as efficient as possible. Hardware queues meet this; software queues don’t.
  2. The time between one entity putting something on the queue and something else taking it off should be as short as possible. Again, hardware queues can do this; software queues, not so much.
  3. They should be as easy to program as software queues (in terms of quantity, synchronization, etc.). Software queues meet this by definition; hardware queues don’t, if only by virtue of their limited quantity and size.
  4. They have to work without changing the OS. Because software queues work in the application memory space, they can do this; hardware queues can’t.

The abridged version of what they do can be summarized by four primary points:

  • While still implementing the queue in memory, cache a local copy of the queue head, tail, and size in a separate dedicated hardware table. This table stores some number of active queues much the way a memory cache stores some number of active memory addresses. This means that the various bookkeeping steps can be done without going to memory and without contention from anyone else. (A rough sketch of such a queue record follows this list.)
  • Pipeline the queue operations into three steps: address generation, the actual store or load, and index updating.
  • Use dedicated hardware to calculate the address of the store or load. Again, this happens in private hardware without interference; multiple addresses can be generated in a single operation. The actual load or store can happen when the address is ready, meaning that the queue operations may happen out of order (if an early one takes longer to have its address resolved, for instance).
  • Accumulate index updates and store a bunch of them at the same time to reduce the amount of access to the cache.
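Loosely speaking, the per-queue record that stays in application memory might look like the sketch below; the field names are my own assumptions, not the paper’s. The hardware table caches the index and size fields for the handful of currently active queues and, per the last point above, writes accumulated index updates back only periodically instead of on every operation.

    #include <stddef.h>

    /* Illustrative per-queue record kept in ordinary application memory.
     * Field names are assumptions for this sketch, not taken from the paper. */
    typedef struct {
        size_t head;     /* consumer index: cached in the hardware queue table */
        size_t tail;     /* producer index: cached in the hardware queue table */
        size_t size;     /* capacity: also cached                              */
        int   *data;     /* the entries themselves stay in regular memory      */
    } queue_record;

    /* The cached tail, for example, might be written back to the in-memory
     * record only every few enqueues (or at a fence), cutting the number of
     * cache accesses spent on pure bookkeeping. */

Because the record itself is just ordinary memory, nothing about this layout limits how many queues you can have or how big they can be.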

Of course, as with everything, the details hide much devilry, so they have special considerations to handle misspeculation, precise interrupts, a full queue cache, fence (memory barrier) instructions, and avoiding livelock (since, in theory, the various queue indices could reside on three different memory pages that an OS may not be able to keep in memory at the same time).

With this solution, Criterion 1 is met because of the hardware acceleration, as is Criterion 2. Because the actual queues are still implemented in the application memory space, with no specific size or quantity limits, Criteria 3 and 4 are met.

For details on all of this as well as the results of their testing, you can check out the paper courtesy of James Tuck, one of the authors.
