The choice of hardware-interconnection mechanism among processor blocks in an SOC affects both communication performance and silicon cost. The default on-chip communications choice for most ASIC and SOC design teams is the global bus or a bus hierarchy; however, this choice automatically incurs many performance and design problems.
There are other choices that may be better suited to today's nanometer ASIC and SOC designs, and these choices map well onto communications concepts that software developers already use. For example, message-passing software communications correspond naturally to hardware data queues, yet message passing can also be implemented on other types of hardware, such as bus-based hardware with global memory. Similarly, the shared-memory software-communications model corresponds naturally to bus-based hardware, but shared-memory protocols can be implemented even when no globally accessible physical memory exists. This flexibility allows chip designers to implement a spectrum of different task-to-task connections in ways that optimize performance, power, and cost.
This white paper provides short descriptions of the most common hardware mechanisms—buses, direct connections, and data queues—used to interconnect processor cores on ASICs and SOCs. Except where explicitly noted, this paper assumes a one-to-one correspondence between tasks and processors; in practice, multiple tasks can be mapped onto one time-sliced processor, and some tasks can be implemented in non-programmable hardware-accelerator blocks.
Limiting on-chip communications to global buses and bus hierarchies needlessly restricts on-chip communications bandwidth and increases the design effort needed to achieve all of the project’s bandwidth and latency goals. A broader view of the available communications techniques that work well between processors and between processors and other RTL blocks will help ASIC and SOC design teams create cost-effective designs in less time, with less effort, and with a lower risk of system failure.