
Distributing Data, Machine to Machine

RTI Updates Their DDS System

The Internet of Things (IoT) is all about Things talking to people and to other Things. This relationship between Things and other Things and People is vague enough that pretty much any product, from transistors to toilet paper, can be marketed as somehow helping to enable the IoT.

That confusion suggests that some ordering of the IoT might be helpful to those trying to comprehend it (something I’ve attempted before and was originally planning to update). But that very scattered nature also makes intercommunication between Things a challenge.

Most of the way we’ve approached the IoT has been from a consumer-centric standpoint. Like the smart home concept. Such systems typically involve some kind of hierarchical arrangement: Things that talk to Hubs or the Cloud, on the one hand, and Computers and Phones that talk to the Cloud (and, by proxy, the Things) on the other hand. Perhaps the Phones talk to nearby Things directly, using WiFi.

In these cases, each Thing pretty much talks to one overlord or overlord proxy. The Phone Bearer is likely to be the Supreme Overlord.

The point being, most of the communication is point-to-point: specific higher-level components (like a Hub) talking with a lower-level element, likely a Thing. If two Things need to talk to each other, or if data from two Things needs to be merged, then the next-node-up would either handle that or pass the data yet further up the tree so that someThing with the right pay grade can handle it.

But let’s leave Consumerland for a moment and move into the less visible space that has been doing IoT-style stuff since way before the IoT got hip. (Hmmm… since it’s more obscure, perhaps it’s the IoT for hipsters.) That would be Machine-to-Machine (M2M) setups in factories and other industrial installations. There are those that argue that the IoT is really just the same as M2M, a theory to which I don’t really subscribe (but in which I also have no emotional investment), so I’m treating it as a subset of the IoT here.

And the fundamental nature of M2M is, at least to my eye, different from the consumer-centric IoT. Rather than focusing on Things, I’ll use the term “Node,” since these setups often involve networks of sensors and actuators spread over large machines, measuring various parameters in order to assess the health of the operation. The notion of a discrete Thing is less clear.

Figure 1 (courtesy Real Time Innovations, Inc.)

Of course, a factory isn’t the only environment this covers. Hospitals and automobiles are also repositories of data in a similar way. Medical equipment may indeed seem more like Things (specifically, Things that go “Ping”), but those machines really are nothing but sophisticated sensors used to measure the state of a patient.

The data sent by such machines may be used in a number of different ways. There may be local interactions as actuators respond to changing conditions and re-optimize settings. Some data may need to be delivered hierarchically to a control room, where Homer is busy dribbling donut glaze on the control console. Other data may get stored away for future analytics and review. Or for monthly reporting. Or in ways unanticipated: this is exactly what happened with the missing Malaysia Airlines plane, whose engines were periodically sending data to the engine maker. The intent was to monitor engine health and aging, but the data ended up playing a completely unexpected role in tracking the jet.

The point is that these data don’t have just one hierarchical owner in the way that most simple IoT configurations would suggest, but rather they are consumed by many different entities. Which complicates communication.

With 1-to-1 links over WiFi, things are pretty straightforward: provide an IP address and a MAC address and you now can send your messages. Yes, you need to configure this at setup, and if the destination changes, you have to reconfigure it. But many IoT networks are intended to be relatively static, and the software layered over the basic communication protocol can handle the registering of the addresses transparently to a non-tech-weenie user.
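To make that concrete, here’s a minimal sketch of the statically configured case: one Node, one pre-configured destination (the address and port below are made up for illustration), and a reconfiguration job any time that destination changes.

```python
import socket

# Hypothetical pre-configured destination: one hub or gateway, fixed at setup time.
DESTINATION = ("192.168.1.42", 5005)

def send_reading(value: float) -> None:
    # One UDP datagram to one known peer. If the hub's address ever changes,
    # this configuration has to be updated and redeployed.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(f"temperature={value}".encode(), DESTINATION)

if __name__ == "__main__":
    send_reading(21.5)
```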

But if you have a more sophisticated network with a web of interrelationships involving which Nodes need the data from which other Nodes, this could become pretty tedious. You don’t want to broadcast data, since that will waste lots of bandwidth. Keeping a list of everyone that wants the data from a particular Node and sending individual unicast messages is also wasteful of bandwidth, although in a different way, because you’re sending the same payload over and over.

Multicast allows a single message to be sent to a specific list of recipients, providing the efficiency of a single payload (which will admittedly be replicated where paths diverge toward different destinations, but only as and when necessary) without bombarding every Node in the network.
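In plain IP terms, the contrast looks roughly like the sketch below: the publisher sends one UDP datagram to a multicast group (the group address and port here are illustrative), and any receiver that joins the group gets a copy, without the sender ever holding that receiver’s unicast address.

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5006   # illustrative administratively scoped multicast group

def publish(payload: bytes) -> None:
    # One send; the network replicates the datagram only where paths to
    # interested receivers diverge.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(payload, (GROUP, PORT))
    sock.close()

def subscribe() -> None:
    # A receiver opts in by joining the group; the sender never needs to know
    # this Node's individual address.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    membership = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    print(sock.recvfrom(1024)[0])   # blocks until a datagram arrives

if __name__ == "__main__":
    publish(b"pressure=3.2")
```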

But this makes the management problem even harder. Instead of having to know the IP address of your destination, now you need to know a list of IP addresses of everyone that you think wants to hear from you. And hope you got them all. The last thing you want is to hear from a disgruntled Node asking, “How come I never hear from you?”

Which is where DDS (Data Distribution Service) comes in. This protocol changes the model from one where you send notes to everyone on your Christmas list to one where, conceptually, you post what data you have to offer, and other Nodes post what they want to listen to, and lo, many matches can be made. If this sounds familiar, it should: this is a publish/subscribe model. So when a Node comes online, it can announce, “Hello, here’s who I am, and here’s what I publish, and here’s what I’d like to subscribe to. And I am addicted to data.” And lower-level modules handle the rest.
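Conceptually, the matching works something like the toy in-process illustration below. This is not the DDS API itself, just the publish/subscribe idea: readers declare interest in a topic, writers publish to the topic, and the middleware makes the connections.

```python
from collections import defaultdict
from typing import Callable

class MiniBus:
    """Toy illustration of the publish/subscribe model behind DDS:
    Nodes declare what they offer or want, and the middleware matches them.
    (Conceptual sketch only; not the DDS API.)"""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[object], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[object], None]) -> None:
        # "Here's what I'd like to listen to."
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, sample: object) -> None:
        # "Here's what I have to offer" -- delivered to every matched reader.
        for deliver in self._subscribers[topic]:
            deliver(sample)

if __name__ == "__main__":
    bus = MiniBus()
    bus.subscribe("VibrationLevel", lambda s: print("control room got", s))
    bus.subscribe("VibrationLevel", lambda s: print("historian got", s))
    bus.publish("VibrationLevel", {"node": "pump-7", "rms": 0.12})
```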

Of course, at these lower levels, the effect may well be that some entity is keeping track of who wants what and is sending the data accordingly. The good news is that you don’t have to worry about those details if you’re configuring the higher-level system. Multicast is most commonly utilized, although the systems may have enough smarts to decide whether multicast or unicast will be more efficient in some given situation.
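The kind of decision being made down at that level might look something like this sketch, where the subscriber-count threshold is entirely an assumption for illustration:

```python
def choose_transport(subscriber_count: int, multicast_capable: bool,
                     threshold: int = 3) -> str:
    """Sketch of the sort of heuristic a lower layer might apply: below an
    (assumed) threshold, repeating unicast sends is cheaper than using a
    multicast group; above it, multicast wins."""
    if multicast_capable and subscriber_count >= threshold:
        return "multicast"
    return "unicast"

print(choose_transport(1, True))    # unicast
print(choose_transport(12, True))   # multicast
```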

I’ve used IP as an example here, but it appears that DDS isn’t married to a particular low-level communication protocol. So the interface modules that take the higher-level data and push it into the communication medium have to take care of such things as data marshalling so that the messages can arrive intact no matter what route or protocols were required to get them there. And again, they need to do this in a way that’s transparent at the top level.
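Marshalling in miniature might look like the following sketch: agree on a byte layout up front, pack on the way out, unpack on the way in, so the payload survives any hop regardless of host byte order. DDS defines its own standardized encoding for this; the sketch just shows the general idea using Python’s struct module, with a made-up record layout.

```python
import struct

# Assumed record layout, network byte order: sample id, value, fixed-width node name.
LAYOUT = "!Id16s"

def marshal(sample_id: int, value: float, node: str) -> bytes:
    return struct.pack(LAYOUT, sample_id, value, node.encode().ljust(16, b"\0"))

def unmarshal(payload: bytes) -> tuple[int, float, str]:
    sample_id, value, node = struct.unpack(LAYOUT, payload)
    return sample_id, value, node.rstrip(b"\0").decode()

wire = marshal(42, 98.6, "bedside-3")
print(unmarshal(wire))   # (42, 98.6, 'bedside-3')
```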

Real Time Innovations (RTI) was one of the original developers of DDS, submitting it to the Object Management Group (OMG – no, not that OMG, lol) for standardization. The standard itself has gone through a couple of revisions, but, independently of that, RTI has productized the technology under the Connext brand for a variety of scenarios (general, small-platform, and safety-critical) and for different transport schemes (UDP, WAN, military radio, etc.). And they recently announced the 5.1 version of Connext.

This latest release addresses, among other things, three real-world challenges faced by those building and managing M2M networks. First is dealing with scalability, and this is largely about bandwidth and the issue discussed above with respect to how messages are sent. In a less “intelligent” approach, you, in fact, use broadcast and never mind the waste. But doing that limits how far up your network can scale; at some point, it’s too clogged with what amounts to data spam to allow further growth.

So RTI has implemented “routing services” that can be more judicious about how messages are sent. This is where multicast (or selective unicast) can be decided and managed, reducing network traffic. So, for instance, if a particular bit of data is needed only locally, then the routing service will keep it from being transmitted to other networks, where it would simply waste bandwidth.
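As a sketch of the idea (not RTI’s implementation), the routing decision boils down to checking whether anyone on the far side of a network boundary has registered interest in a topic before letting a sample cross it. Topic names and network names below are hypothetical.

```python
# Which remote networks have declared interest in which topics (hypothetical data).
remote_interest = {
    "TankLevel": {"plant-wan"},   # the control room across the WAN wants this
    "MotorTemp": set(),           # only local consumers; never leaves the cell
}

def route(topic: str, sample: dict, local_deliver, remote_send) -> None:
    # Always deliver locally; forward across a boundary only when someone
    # over there has subscribed to this topic.
    local_deliver(topic, sample)
    for network in remote_interest.get(topic, ()):
        remote_send(network, topic, sample)

route("MotorTemp", {"c": 71.0},
      lambda t, s: print("local:", t, s),
      lambda n, t, s: print("forwarded to", n, ":", t, s))
```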

Figure 2 (courtesy Real Time Innovations, Inc.)

The second challenge they’ve addressed is the dynamic nature of many such networks: Nodes will come and go. One example they give is in a hospital: a patient may be moved to different rooms a number of times during a stay. The equipment associated with that patient may move with the patient, may be replaced by different equipment in the new location, or may not be needed at all. This further complicates management of the network; in reality, the network’s response to these events should happen automatically, without some IT person having to reprogram Nodes every time someone moves.

The routing service also keeps track of Nodes as they attach, detach, or move around, modifying its tables accordingly. The discovery feature is critical to making this possible.
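A toy model of that bookkeeping, with hypothetical node and topic names, might look like this: discovery announcements add a Node’s subscriptions to the interest table, and departures remove them, with no manual re-addressing.

```python
class DiscoveryTable:
    """Toy model of discovery-driven bookkeeping: as Nodes announce themselves
    or go silent, the interest table updates itself. (Conceptual sketch only.)"""

    def __init__(self) -> None:
        self.readers_by_topic: dict[str, set[str]] = {}

    def on_node_joined(self, node: str, subscriptions: list[str]) -> None:
        for topic in subscriptions:
            self.readers_by_topic.setdefault(topic, set()).add(node)

    def on_node_left(self, node: str) -> None:
        for readers in self.readers_by_topic.values():
            readers.discard(node)

table = DiscoveryTable()
table.on_node_joined("monitor-room-214", ["HeartRate", "SpO2"])
table.on_node_joined("nurse-station", ["HeartRate"])
table.on_node_left("monitor-room-214")   # patient moved rooms
print(table.readers_by_topic)            # {'HeartRate': {'nurse-station'}, 'SpO2': set()}
```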

Finally, they’ve addressed upgrades – a non-trivial consideration given a network with many different devices of different ages, made by different vendors, and, critically, running different DDS versions. If you want to upgrade, then having to move the whole network in lockstep becomes an enormous chore – and any such move is going to be resisted as long as possible. And with some networks, particularly mission-critical ones, you simply can’t shut things down and restart later.

So the 5.1 version allows incremental upgrades, supporting a network with mixed versions.
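The spirit of mixed-version coexistence can be sketched like this (a generic illustration, not the actual DDS type-evolution rules): a reader keeps the fields it understands, ignores ones a newer writer added, and supplies defaults for ones an older writer never sent, so old and new Nodes can coexist during a rolling upgrade.

```python
# Fields this (hypothetical) reader understands, with defaults for missing ones.
KNOWN_FIELDS = {"node": "unknown", "rpm": 0.0, "temp_c": 0.0}

def accept(sample: dict) -> dict:
    # Keep known fields, drop unknown ones, default the rest.
    return {k: sample.get(k, default) for k, default in KNOWN_FIELDS.items()}

print(accept({"node": "fan-2", "rpm": 1800.0}))                      # older writer
print(accept({"node": "fan-9", "rpm": 1750.0, "temp_c": 40.1,
              "vibration": 0.03}))                                   # newer writer
```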

Other enhancements include pre-configured Quality of Service (QoS) profiles for common configurations, a “Turbo Mode” for optimizing data throughput, and “Auto Throttle,” which would appear to allow a subscriber to holler, “Hey hey hey hey, slow down, I’m drowning in data here!”
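As a generic flow-control sketch (not RTI’s actual Auto Throttle mechanism), the idea is that the writer backs off when a reader reports it’s falling behind; the backlog threshold and intervals below are assumptions.

```python
import time

class ThrottledWriter:
    """Generic flow-control sketch: back off when the reader reports a
    growing backlog, speed back up when it catches up."""

    def __init__(self, base_interval_s: float = 0.01) -> None:
        self.interval = base_interval_s

    def on_backlog_report(self, queued_samples: int, high_water: int = 100) -> None:
        if queued_samples > high_water:
            self.interval *= 2          # "slow down, I'm drowning in data here!"
        else:
            self.interval = max(self.interval / 2, 0.01)

    def write(self, send) -> None:
        send()
        time.sleep(self.interval)       # pace sends according to reader feedback

writer = ThrottledWriter()
writer.on_backlog_report(queued_samples=500)
print("new send interval:", writer.interval)
```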

They claim that these enhancements are IoT enablers, but they seem largely to be M2M enablers. It’s just that M2M never gets the same headlines these days. It’s not obvious to me that this will find its way into smart homes (although that guy in the black suit and sunglasses in the van parked on your street would probably love to be able to hit a “subscribe to ALL THE DATA” button rather than having to do old-school snooping).*

It also suggests an architecture different from what I’ve proposed for home/cloud/phone-centric consumer IoT. But, to the extent that the figures above don’t capture it, that’s a project for another time.

 

*Kidding… yes, this stuff can be secured.

 

More info:

RTI Connext DDS

OMG DDS Specification (scroll down to find a table with the various DDS-related documents)

 
