feature article

Moving Data with VME

There was a time when they could fill a huge stadium. They were the headliners. They were the go-to guys. And they had a good run. But, as is typical, upstarts made a grab for the spotlight, winning the attention of an audience eager for shiny new things. But this didn’t deter them, and they didn’t stop moving forward. They didn’t retreat to controversy-free PBS reunion specials. They made sure their loyal followers got what they wanted, and they kept new things coming to keep them from getting bored and looking elsewhere. It’s just that the spotlight is a fickle thing, and it has been flitting all around like a Blair Witch cameraman with Parkinson’s.

So VME has pretty much had to toil in what might feel like obscurity compared with the attention that the PCI derivatives and ATCA have garnered. And you might – just maybe – be forgiven for thinking that VME is an old standard that’s pretty much restricted to legacy applications. But you’d be wrong. Yes, VME’s application market has narrowed. But there is still demand, and that demand is sustained by developments to the VMEbus standard that sprang from the VME Renaissance of 2002.

The original basic VMEbus standard has – not surprisingly, given the name – a bus as its backplane. This allows only one master at a time, leaving a lot of potential masters pawing the ground with impatience. Bandwidth is limited to about 70-80 MB/s. In an era of gigabit communications, there is an obvious issue here. The issue was addressed starting with the introduction of VXS, more formally known as VITA 41 (numbered after the organization that formed the standard; VITA stands for VMEbus International Trade Association, a bit of information that is surprisingly unavailable on the VITA website).

The original VMEbus physical arrangement consisted of two 96-pin connectors separated by a couple of inches. VXS makes use of that couple of inches to add serial gigabit channels that boost the bandwidth. Four lanes can be bonded together for about 2.5 GB/s of bandwidth in addition to the original bandwidth provided by the bus. VITA then went one step further and added a version that essentially replaces the traditional bus with additional channels – all gigabit, all the time; this is the VITA 46 standard, known as VPX. It’s actually hard to find an official in-print statement of the bandwidth that VPX will support, but the connectors are intended to handle 6.25-Gbps signaling, twice as fast as the signaling used to calculate the 20 GB/s on VXS. Couple that with the additional channels and, well, it’s a lot of bandwidth.
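
To put rough numbers behind those figures, here is a quick back-of-envelope sketch. The 3.125-Gbps lane rate and the 8b/10b encoding overhead are assumptions for illustration – the article quotes only the aggregate numbers – and the ~2.5 GB/s lines up if you count both directions of the full-duplex four-lane port.

```python
# Back-of-envelope serial-fabric bandwidth; illustrative figures only.
# Assumed: 8b/10b encoding (80% payload efficiency) and the lane rates below.

def payload_gbytes_per_sec(lanes, line_rate_gbps, encoding_efficiency=0.8):
    """Payload bandwidth in GB/s, one direction, for a bonded group of lanes."""
    return lanes * line_rate_gbps * encoding_efficiency / 8  # bits -> bytes

# A 4-lane VXS port at an assumed 3.125 Gbps per lane:
vxs_port = payload_gbytes_per_sec(lanes=4, line_rate_gbps=3.125)
print(f"4 lanes @ 3.125 Gbps ~ {vxs_port:.2f} GB/s each way")  # ~1.25 GB/s;
# counting both directions of the full-duplex link gives the ~2.5 GB/s cited.

# Doubling the signaling rate to the 6.25 Gbps targeted by the VPX connectors:
vpx_port = payload_gbytes_per_sec(lanes=4, line_rate_gbps=6.25)
print(f"4 lanes @ 6.25 Gbps ~ {vpx_port:.2f} GB/s each way")   # ~2.5 GB/s
```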

The other trend that hasn’t escaped VME is the mezzanine card movement. The PMC is the official mezzanine card for VME, and a version with gigabit channels has been added as the XMC card, or VITA 42.

With VXS and VPX (as well as XMC), the focus of the standards has been mechanical and electrical; they are format-agnostic. Sub-standards specify the formats carried on the gigabit channels. So, for example, VITA 41.1 defines InfiniBand on VXS; 41.2 is Serial RapidIO; 41.4 is PCI Express. Even Xilinx’s Aurora link has its own version, VITA 41.5.

Meanwhile, the world has changed, as VME has seen other standards attract some of its erstwhile adherents. Probably the biggest move has been in telecommunications. There are still some capabilities that VME hasn’t taken on – particularly hot swapping and high availability – that have made CompactPCI and ATCA the primary focus of new telecom development. Given that it would be hard to dislodge cPCI and ATCA at this point, it’s not clear that VME will even try to address this (watch them make me a liar).

Another former stronghold of VME was industrial usage, and while it still has a play there, the availability of PCI slots in PCs that you can practically get for free with your purchase of a dozen Dunkin Donuts* has reduced the demand for VME racks, although not to the extent of erosion that telecom has seen.

So what does that leave? Well, what sector needs a robust, reliable form factor that is well established and is likely to be around for the next 20 years? Government, aerospace, and military, of course. This is the loyal fan base of both classic and modern VME. And driving the capabilities of VME has been the need to move more and more data around.

This is the basis for Pentek’s announcement of a new 4207 VME/VXS board incorporating a high-speed Freescale 8641 processor, 2 GB of memory, a Xilinx Virtex FPGA, and a fabric-agnostic 72×72 gigabit crossbar switch from Mindspeed. The intent of this board is to help manage the transfer of lots of data, doing as much processing as possible along the way (which could reduce the amount of data needing to be transferred). The processor contains, in addition to a PowerPC, two AltiVec engines to help accelerate the DSP calculations that are prevalent in the target application spaces. The crossbar switch allows gigabit channels to be connected in a variety of ways between the processor, FPGA, XMC, Fibre Channel interface, and VXS interface, and each of those channels can use a different data format – the crossbar doesn’t actually care what the 1s and 0s mean; it just routes them on. So you can mix InfiniBand with your PCI Express, Serial RapidIO with your Aurora.
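
To make the “doesn’t care what the 1s and 0s mean” point concrete, here is a toy software model of a protocol-agnostic crossbar: it keeps a connection map between ports and forwards whatever bytes arrive without ever decoding them. The port names and API are invented for illustration – the actual Mindspeed part is configured in silicon, not Python.

```python
# Toy model of a protocol-agnostic crossbar switch: it maintains a map of
# input port -> output port and forwards raw bytes without interpreting them.
# Port names and the API are invented for illustration only.

class Crossbar:
    def __init__(self, ports):
        self.ports = set(ports)
        self.routes = {}            # input port -> output port

    def connect(self, src, dst):
        """Route traffic arriving on src to dst: any port pair, any protocol."""
        assert src in self.ports and dst in self.ports
        self.routes[src] = dst

    def forward(self, src, payload: bytes):
        """Pass the payload through unmodified; the crossbar never decodes it."""
        return self.routes[src], payload

xbar = Crossbar(["cpu", "fpga", "xmc", "fc", "vxs_a", "vxs_b"])
xbar.connect("fpga", "vxs_a")       # e.g. Serial RapidIO from the FPGA to the backplane
xbar.connect("xmc", "cpu")          # e.g. PCI Express from the mezzanine to the processor

print(xbar.forward("fpga", b"\x5a\xa5"))   # bytes go out untouched
```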

Pentek’s Rodger Hosking credits FPGAs in particular with the rapid acceptance of many of the high-speed serial standards because of the availability of IP from the FPGA vendors that removes much of the tedium from implementing what are pretty complex interfaces. The higher-end Virtex devices have elements such as clock/data recovery built into the hardware; the IP makes use of that hardware, along with logic for data encoding, lane bonding, and the protocol elements of the links, allowing users to focus more of the design work on configuration decisions as well as the inevitable tuning and tweaking for best performance and link reliability.

Three application examples illustrate the kind of data handling that Pentek is targeting with the 4207. One is radar; long gone are the days of a single ping going out and coming back. Today’s radar consists of growing arrays of increasingly sensitive detectors picking up subtle nuances expressed in high-fidelity analog signals that are immediately transformed into tons of digital data. This data needs to be moved from the detectors into the processor that will transform all of that data into a unified picture. The 4207 can participate in the manipulation and transfer of the data.
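
For a sense of scale, a quick back-of-envelope calculation with made-up but plausible figures (none of these numbers come from the article or from Pentek) shows how fast the raw sample stream adds up:

```python
# Illustrative radar front-end data rate; all figures are hypothetical.
channels = 16          # receive channels in the array
sample_rate = 200e6    # samples per second per channel
bits_per_sample = 16

bytes_per_sec = channels * sample_rate * bits_per_sample / 8
print(f"{bytes_per_sec / 1e9:.1f} GB/s of raw samples")   # 6.4 GB/s
```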

Another area is high-speed recording. There are a number of test environments that are exceedingly expensive, and tests need to be performed as quickly and efficiently as possible; flight and wind-tunnel tests are examples. Rather than run the risk that some limited target set of data isn’t quite right, necessitating test changes and do-overs, they just instrument the hell out of the gizmo-under-test and grab every piece of possible data, stuffing it away as fast as possible for analysis later, when they’re in a not-so-expensive setting. The test environment therefore needs to move lots and lots of data onto storage very quickly. There’s actually a dedicated Fibre Channel controller on the 4207 specifically to make it easier and faster to stream data out to a hard drive.
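
Conceptually, the recording job boils down to keeping the pipe to storage full. The sketch below illustrates the double-buffering idea with a generic data source and an ordinary file standing in for the Fibre Channel target; it is not Pentek’s software, just a minimal illustration.

```python
# Minimal double-buffered recorder sketch: fill one buffer while another
# drains to storage. acquire() is a stand-in for a real data source, and an
# ordinary file stands in for a Fibre Channel storage target.
import threading, queue, os

BUF_SIZE = 4 * 1024 * 1024            # 4 MB capture buffers

def acquire():
    """Stand-in for the front-end: return one buffer of 'samples'."""
    return os.urandom(BUF_SIZE)

def recorder(path, num_buffers=8):
    q = queue.Queue(maxsize=2)        # two buffers in flight = double buffering

    def writer():
        with open(path, "wb") as f:
            while True:
                buf = q.get()
                if buf is None:
                    return
                f.write(buf)          # drain to storage while the next buffer fills

    t = threading.Thread(target=writer)
    t.start()
    for _ in range(num_buffers):
        q.put(acquire())              # capture blocks only if storage falls behind
    q.put(None)
    t.join()

recorder("capture.bin")
```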

One other area of use is in the chaotic world of military communications, where there are multiple channels of multiple formats coming from multiple kinds of equipment from multiple branches of the armed forces, all of which need to be channeled into a command center for unified communication and decision-making. Each of these channels needs to provide access to those who should have access, deny access to those who shouldn’t, and in general restrict the data overload so that each user is faced with data relevant to the task at hand. There is also a trend towards fusing audio and visual formats, with a radio message being reinforced by a heads-up display, for example. Needless to say, lots of data moving about and being munged.

It is in this environment that VME appears to have a pretty solid standing. The crowds are smaller; it may not be Madison Square Garden, but on the other hand, the more intimate venues can often be much more fulfilling.

*The management of Techfocus Media would like to stress that this statement was made for metaphoric effect only, and that Dunkin Donuts is in no way offering free PCs with a dozen donuts. If you ask for your free PC when ordering your morning sugar fix, be prepared to see the same blank stare normally reserved for customers who order an espresso.
