The number of options for getting from point A to point B keeps growing. It’s one of those areas where the concept of “standard” is somewhat loose, since there are so many of them you might wonder if the word even applies. Connectivity in larger embedded systems historically took advantage of backplane standards that allowed different cards to communicate with each other; smaller form factor devices often didn’t need the kind of data transfer rates that would warrant a complex protocol.
As miniaturization has shrunk erstwhile cabinets into our palms, the whole concept of plugging in boards or putting modules on baseboards starts to fall apart. But because many of these new small systems have ancestry rooted in the larger systems, they bring forward the legacy of those interconnect schemes. So while different data rates, stacks, physical media, and form factors add up to a smorgasbord that can be a challenge to navigate if you’re starting from scratch, in reality, legacy and affinities tend to dictate a smaller range of options for a given system.
These options vary depending on the level at which you design. If you’re designing an SoC, obviously you’re going to be looking for IP to help cut the effort required to provide interconnect on the chip. One of the first things you’ll see after (or even before) the ratification of a new standard is the availability of IP for chip designers to implement the standard. On the other hand, if you’re designing a system, you’re going to be looking either for SoCs (or other chips that have interconnect schemes built in) or for chips and chipsets to allow you to build the interconnect in more easily. And of course, these will typically not be available right after standard approval, because someone has to build them first. Some companies may make a risk start on what they think the standard will end up being, but no one can do that so far ahead that the chip is available near the ratification date.
The issue with many of these schemes is that they’re complex, with standards that can easily run to hundreds of pages. Wading through these is no mean feat, and, frankly, trying to reinvent something like that is just plain stubborn. The number of pre-designed options for interconnect is growing and increasingly includes wireless options. It’s been a busy time lately for companies trying to offer improved solutions for one or more of the protocols. So we look here at some of what’s going on with the better-known wired and wireless schemes, as well as at least one new proprietary contender.
Got it wired
On the wired side, the VME family, Ethernet, PCI Express (PCIe), and Serial RapidIO (SRIO, which seems to be becoming synonymous with RapidIO, with the “serial” character being assumed) have dominated. VME continues to creep forward in its domain at a somewhat stately pace. And it’s hard to find people saying anything new about standard wired Ethernet. But PCIe and SRIO keep moving forward – particularly PCIe. Gennum credits the general market momentum behind PCIe to the not-insignificant support of a rather large player in this space (taking a parental tone: why should I pick PCIe? Because I said so…). Tundra sees it a bit differently, with each format having its own ecosystem based largely on historical alliances. So the Intel side of things tended to go towards PCIe, while the Motorola side of things tended to go for SRIO.
FPGAs have been playing in this space for a long time, with Actel, Altera, Lattice, and Xilinx offering IP cores for one or more of these standards (and even their own proprietary lightweight standards) either directly or through IP partners. But implementing a complete solution in an FPGA requires the use of a higher-end device because the PHY uses high-speed serial signaling, something not available on the less expensive chips. Gennum has introduced an alternative solution by providing a bridge from PCIe to their own local bus that can be implemented in cheaper FPGAs. The Gennum GN4124 chip handles the PCIe stuff and then hands it to the FPGA for whatever else has to be done. The FPGA can also be loaded through the 4124, simplifying the overall design and reducing chip count. Meanwhile Cadence has just announced the Q1 2009 availability of verification IP for the upcoming PCIe 3.0, which promises 8 gigatransfers/sec data rates.
USB will become easier to access for a wide variety of systems as it gets integrated into more SoCs. MIPS has implemented cores on 40- and 45-nm nodes for advanced process SoCs. Meanwhile, as the new USB 3.0 (“SuperSpeed”) specification is set to be formally introduced, Denali and Synopsys have announced SuperSpeed IP cores, Cadence and VinChip have announced verification IP for SuperSpeed USB, and VinChip has also announced a USB 3.0 device controller. The latest addition to the USB family is reputed to be able to transfer data as much as ten times faster than its predecessor (challenging FireWire?), while remaining backward compatible with USB 2.0.
In addition, the High-Speed Inter-chip (HS-IC) addendum to USB 2.0, introduced a year or so ago, provides a higher-speed USB option intended for chip-to-chip communication rather than plug-and-play peripherals. Among other things, it offers a way to integrate formerly external peripherals into new systems: the USB mechanism stays in place, but the physical interconnect shrinks to a 2-wire connection, reflecting the fact that the peripheral now lives inside the system rather than being plugged in externally. Chipidea (now part of MIPS) and Synopsys have both announced IP implementing HS-IC.
For the poor schlubs trying to manage more than one protocol, Tundra has been making noise about their PCI bridge solutions, including the Tsi620 bridge chip that crosses the SRIO/PCIe chasm, allowing, for example, a DSP using SRIO to talk to a PowerQUICC III using PCIe. It can also connect to low-cost FPGAs that lack SERDES, letting FPGAs participate in the dialog without resorting to the most expensive families.
Cutting the cord
Wireless standards are increasingly being made available for (relatively) easy implementation. G2 Microsystems has just announced a WiFi SoC (and associated module and software) that allows a small-footprint, lower-cost implementation of WiFi. They have an eye towards much broader use of the technology than has happened to date, focusing in particular on asset tracking as an ultra-low-cost application, an area previously dominated by RFID tags. They have buried all of the WiFi details in the SoC and their ICON software stack, which can be tied in through an API consisting of straightforward high-level commands. Their particular focus is low power – they can enter and exit standby very quickly. During standby, they consume 15 µW, although their light-hearted statement that “We do nothing well” might need a bit of contemplation before going for general release…
Their solution provides what they refer to as “autonomous WiFi,” meaning that the chip can handle all of the requirements of the network without requiring a microcontroller. They have demonstrated an example of this in a remote controller, where response has to be quick. In order to save power, things shut down between events, but, ordinarily, waking up would mean having to refresh the DHCP address, something that could take too long. Instead, the WiFi chip itself executes an hourly DHCP refresh while the microcontroller sleeps so that when things wake up and have to respond, there’s always a fresh address in place.
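The division of labor in that remote-control example can be sketched roughly as follows. This is purely illustrative Python – the class and method names are invented for the sketch and are not G2’s actual API – but it shows why keeping the lease warm matters: a wakeup never has to block on a DHCP exchange.

```python
class AutonomousWifi:
    """Illustrative model of the "autonomous WiFi" scheme: the radio
    refreshes its DHCP lease on its own timer while the host MCU
    sleeps, so a wake-and-send never waits on DHCP. Names invented
    for this sketch; not G2's actual API."""

    REFRESH_INTERVAL_S = 3600  # hourly refresh, as described above

    def __init__(self):
        self.lease_valid_until = 0.0  # no address yet

    def tick(self, now):
        # Driven by the WiFi chip's own timer; the host MCU stays asleep.
        if now + self.REFRESH_INTERVAL_S >= self.lease_valid_until:
            self._refresh_dhcp(now)

    def _refresh_dhcp(self, now):
        # In hardware this is the radio re-running the DHCP exchange
        # itself; here we just assume a lease twice the refresh interval.
        self.lease_valid_until = now + 2 * self.REFRESH_INTERVAL_S

    def host_wake_and_send(self, now, payload):
        # Because the lease is kept warm, the host can transmit
        # immediately instead of waiting out a DHCP exchange.
        if now < self.lease_valid_until:
            return f"sent {payload!r} immediately"
        return "blocked: must re-run DHCP first"
```

With the radio’s `tick` running hourly, every host wakeup finds a fresh address in place; without it, the first send after a long sleep would stall on DHCP, which is exactly the latency a remote control can’t tolerate.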
Meanwhile, for even lower-cost designs, TI has announced a new SoC for customers using proprietary wireless mechanisms by combining their MSP430 MCU and, initially, their CC1101 RF transceiver, focusing for the time being on the sub-GHz range. This launches a so-called “platform” that, over time, will result in a family of chips combining 430 MCUs with different RF modules. There is a heavy focus on low power, with significant attention being dedicated to applications where energy harvesting is used. This appears to be a particular realm where numerous standards co-exist – and are in further development – and even so, designers still do proprietary implementations. The combination of frequency options (sub-GHz and 2.4 GHz), which can vary by geography, and stacks that can be layered over the PHYs makes for a matrix that in particular renders the concept of standard in this space somewhat silly.
The possibilities and challenges raised by wireless communication ensure that this will remain a vibrant area, meaning it’s not likely to settle down anytime soon. One can hope that, of the various standards in play, a very few settle into the role of something vaguely resembling a practical standard. It would be even nicer if any such winners were chosen on technical merit rather than solely by the expediency of who did the best sales or marketing job.