This is a story that starts with the improbable topic of building controls – you know, those complex systems that ensure that no matter where you are in the building, it’s too damn cold. Way back in the last century, these controls were dominated by large companies with complete proprietary systems. OK, they sorta still are, but work with me here. The users of the systems were more or less captive to their controls company, and changes to the system needed by the users resulted in a nice high-profit source of consulting income to the controls company.
Building controls tend to consist of a wide variety of sensors and actuators widely dispersed throughout the building. Central control requires some way of connecting all of these elements so that controllers can measure whatever parameters they’re interested in and make changes if necessary. Thermostats are a great example, although, strictly speaking, for this situation you wouldn’t envision an all-in-one thermostat with sensor and actuator tied together like you have in your home, but rather separate sensor and actuator elements – even if they end up housed in a single box. In one scenario, the temperature sensor reports to the central controller, which then tells the actuator whether to turn on fans and either cool or heat the space; in another, the fan or temperature actuator queries the sensor for the temperature and then responds accordingly.
These devices are small and relatively inexpensive, so the technologies for interconnecting them have to be inexpensive, or else they’d overwhelm the cost of the device. IP was the dominant standard technology at the time, and it was just too expensive to use. In addition, the cheapest physical implementation was 10BASE-T, which uses a star configuration, requiring home runs for each node – that means lots of wire running through the walls and less flexibility for moving things around.
A similar situation existed for factory controls: numerous machines and controllers were located throughout the plant, and, unlike most building elements, the factory nodes had to be moved from time to time. Then there was the added concern that factories are electrically noisy environments, meaning that wiring has to be particularly robust – read expensive – for Ethernet.
To address these issues, a company called Echelon put together a peer-to-peer Local Operating Network (LON) platform they called LONworks, which included a complete seven-layer stack called LONtalk. In 1999 they turned it over to ANSI, which created the ANSI- (or CEA-) 709.1 standard, along with other related standards. Echelon licenses out any patents they have in the technology and apparently reserves the right to test implementations occasionally to ensure compliance. But to date, Echelon is the primary evangelist of this technology. The big controls companies have been somewhat reluctant to start using an open technology on their systems, since it loosens their grip, but gradually they are giving way since their customers like it.
Practically speaking, there are two main elements in the platform: the protocol and the processing chip that goes onto each of the nodes for connecting to the network. While the processing chips are really the embedded elements here, it makes sense to talk about the protocol first.
LONtalk is a complete stack with physical, link, network, transport, session, presentation, and application layers. In fact, in a world where most standards cover only a part of the OSI model and then are stacked over each other (TCP over IP over Ethernet, for example), it struck me as unusual to see a single bottom-to-top protocol.
There are two primary physical implementations: twisted pair and power line, although apparently there have been some fiber optic implementations. In particular, there is a Free Topology twisted pair implementation (ANSI-709.3) that allows bus, loop, star, and combinations of all of those. This provides critical flexibility for handling all kinds of topologies and rearranging things as needed. LONtalk can also tunnel over IP (ANSI-852), which could be useful for going from building to building, piggybacking on the data network infrastructure.
LONtalk does not provide a fast network. Response needs to be quick, but throughput ranges from 11-20 packets/second over power lines, to 200-700 packets/second over twisted pair, to around 10,000 packets/second over IP. Then again, this isn’t intended to be a high-speed data network; it’s essentially a control network, with occasional command-and-control messages flitting hither and yon.
Wireless has not been made an option. It was considered, but in order for that to be practical, the nodes would most likely be battery-powered (otherwise you could just use the power line), and that would mean they’d have to go to sleep between uses to save juice. Waking them up screws up latency. In a mesh environment, path discovery hurts latency. In addition, the electromagnetic environment in a building is very complex and is not constant – as bodies move around the building, they affect the signals bouncing around the room. Tuning for enough robustness to handle all the possible configurations ends up being too expensive, and, even so, you end up clustering small groups of wireless nodes into a hub that patches them into the wired network, so there’s only nominal benefit. Things are more predictable in homes, but they didn’t want to do a separate PHY format just for the home, so the wireless approach was discarded.
Interestingly, not only can you carry a signal on a power line for a node that’s plugged in, but you can also carry power on twisted pair, so a node needs only one set of wires – power line or twisted pair – and, assuming you have the right controllers and transformers on the other end, that single set can handle both data and power.
Moving up the stack brings us to the Link layer. This layer provides a CRC, channel access using CSMA (with channel access randomized over time slots), priority (with specific time slots allocated to priority packets and with the highest-priority packets having predictable response times), and collision avoidance. Each node is intended to be plug-and-play, allowing sensors and actuators to be added to the network without bringing everything down first.
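The media-access scheme can be sketched in a few lines. The slot counts and function names below are my own illustrative assumptions, not taken from the standard – the real LONtalk MAC is a predictive p-persistent CSMA that sizes its randomizing window dynamically with network load – but the sketch captures the key idea: priority traffic gets fixed early slots, everything else randomizes.

```python
import random

# Illustrative slot counts (assumed, not from the standard).
PRIORITY_SLOTS = 4    # dedicated early slots reserved for priority packets
RANDOM_SLOTS = 16     # randomizing window for ordinary packets

def pick_access_slot(is_priority: bool, priority_level: int = 0) -> int:
    """Choose the time slot in which a node may attempt to transmit.

    Priority packets get fixed, early slots, so their worst-case wait is
    bounded and predictable; ordinary packets randomize over later slots,
    which is what avoids most collisions.
    """
    if is_priority:
        return priority_level
    return PRIORITY_SLOTS + random.randrange(RANDOM_SLOTS)

# Ordinary traffic always lands somewhere in the randomized window:
slots = [pick_access_slot(False) for _ in range(50)]
assert all(PRIORITY_SLOTS <= s < PRIORITY_SLOTS + RANDOM_SLOTS for s in slots)
```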
At the Network level, each device has a unique 48-bit Neuron ID (more on Neuron in a moment). This acts as a physical address, and packets can be sent to a specific device. In addition, there is a logical hierarchy consisting of a domain, which can contain multiple subnets, each of which contains multiple nodes. The node is the logical avatar for the device; messages can be addressed to a node or a device. This structure also allows domain broadcast and subnet broadcast. Node groups can also be defined; these are independent of the hierarchy, and messages can be multicast to these groups.
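The addressing hierarchy is easy to model. Here's a minimal sketch of how domain/subnet/node addressing and broadcasts compose; the class and field names are my own, not the standard's.

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass(frozen=True)
class Address:
    domain: int
    subnet: Optional[int] = None   # None means domain-wide broadcast
    node: Optional[int] = None     # None means subnet-wide broadcast

@dataclass
class Device:
    neuron_id: int                 # unique 48-bit physical address
    addr: Address                  # logical node address in the hierarchy
    groups: Set[str] = field(default_factory=set)  # multicast groups

def matches(dev: Device, dest: Address) -> bool:
    """Does a packet addressed to `dest` get delivered to this device?"""
    if dev.addr.domain != dest.domain:
        return False
    if dest.subnet is None:                 # domain broadcast
        return True
    if dev.addr.subnet != dest.subnet:
        return False
    return dest.node is None or dev.addr.node == dest.node

dev = Device(neuron_id=0x0000AA112233, addr=Address(1, 4, 7), groups={"hvac"})
assert matches(dev, Address(1))         # domain broadcast reaches it
assert matches(dev, Address(1, 4))      # subnet broadcast reaches it
assert matches(dev, Address(1, 4, 7))   # directly addressed
assert not matches(dev, Address(1, 5, 7))
```

Group multicast then just means delivering to every device whose `groups` set contains the target group, regardless of subnet.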
The Transport layer provides three kinds of message delivery service. Most reliable is the Acknowledged message service; an end-to-end acknowledgment is required from the receiver, and retries are automatically sent until acknowledgment is received. Slightly less reliable, but faster, is the Repeated service: a message is automatically sent multiple times just to be sure it gets there. Since you don’t have to wait for acknowledgment (or a timeout), you can send messages more quickly and use less bandwidth. Apparently they’ve experienced 99.999% receipt reliability using three repeats. The fastest and least reliable service is the Unacknowledged delivery service; the message goes out and hopefully reaches its destination.
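The three services can be contrasted with a toy sketch. The function names and retry counts here are illustrative assumptions, with a generic `send()` callback standing in for the network (returning True when the ack comes back).

```python
def acknowledged(send, msg, max_retries=3):
    """Most reliable: retry until an end-to-end ack arrives or we give up."""
    for _ in range(max_retries + 1):
        if send(msg):              # True stands in for "ack received"
            return True
    return False

def repeated(send, msg, repeats=3):
    """Blindly send several copies; no waiting on acks, so it's faster
    and uses less bandwidth than the acknowledged service."""
    for _ in range(repeats):
        send(msg)

def unacknowledged(send, msg):
    """One best-effort shot."""
    send(msg)

# The quoted 99.999% with three repeats implies a single-try loss rate of
# roughly 2%: 0.0215 ** 3 is about 1e-5.
```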
The Session layer provides a request/response service as well as authentication. If authentication is used, two nodes can share a secret 48-bit key, and when the receiving node receives a message, it can challenge the sender by sending a random 64-bit challenge, which the sender has to transform using the key and then send back to the receiver.
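The challenge/response exchange looks like this in sketch form. The standard defines its own keyed transform; HMAC-SHA-256 stands in for it here purely for illustration.

```python
import hmac
import hashlib
import os

# 48-bit secret shared by sender and receiver (example value).
shared_key = bytes.fromhex("a1b2c3d4e5f6")

def transform(key: bytes, challenge: bytes) -> bytes:
    """Keyed transform of the challenge (HMAC used as a stand-in)."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def receiver_accepts(senders_reply: bytes, challenge: bytes) -> bool:
    """The receiver recomputes the transform with its own copy of the key
    and compares; a sender without the key can't produce a valid reply."""
    return hmac.compare_digest(senders_reply, transform(shared_key, challenge))

challenge = os.urandom(8)                    # 64-bit random challenge
assert receiver_accepts(transform(shared_key, challenge), challenge)
assert not receiver_accepts(transform(b"wrong!", challenge), challenge)
```

Because the challenge is random every time, a snooper who records one valid reply can't replay it later.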
There is a surprising amount of standardization at the upper layers. In the Presentation layer, sensors publish their data, and actuators subscribe to it, setting up a logical connection that allows nodes from different manufacturers to interconnect. There are over 170 pre-defined Standard Network Variable Types (SNVTs, apparently pronounced “snivets”). For example, several SNVTs define temperatures; there are others for speed, time stamps, and the like. Each SNVT also defines the format of the data to ensure coherent discussions between sensors and actuators. All of the SNVTs have XML definitions to help other systems interpret the data.
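The publish/subscribe binding can be sketched as follows. The type name `SNVT_temp_like` is my own stand-in (the real `SNVT_temp` fixes an exact encoding and resolution); the point is that binding only succeeds when both ends agree on the type, which is what lets gear from different vendors interoperate.

```python
class NetworkVariable:
    """Toy model of a published network variable with typed subscribers."""

    def __init__(self, snvt_type):
        self.snvt_type = snvt_type
        self.subscribers = []

    def bind(self, callback, snvt_type):
        # Binding is refused unless sensor and actuator agree on the type.
        if snvt_type != self.snvt_type:
            raise TypeError("SNVT mismatch: no binding possible")
        self.subscribers.append(callback)

    def publish(self, value):
        # Push the new value to every bound subscriber.
        for cb in self.subscribers:
            cb(value)

temp_out = NetworkVariable("SNVT_temp_like")   # the sensor's output variable
readings = []
temp_out.bind(readings.append, "SNVT_temp_like")  # an actuator subscribes
temp_out.publish(21.5)
assert readings == [21.5]
```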
At the Application layer, profiles are defined for more than 60 different services, such as data logging and alarming. Configuration properties are defined to specify data encoding, scaling, units, default values, range, and behavior, with XML definitions for porting the use of the data. This then allows the writing of a specific application, which takes data in, does something with it, and then sends new data out, using the prescribed formats.
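That flow – data in, a configured transformation, data out – can be sketched like this. The property names (`scale`, `offset`, and so on) are my own illustration, not the standard's actual configuration-property names.

```python
# Hypothetical configuration properties for a temperature-processing node:
# encoding/scaling, default value, and valid range, as described above.
config = {
    "scale": 0.1,        # raw counts -> engineering units
    "offset": -273.2,    # e.g. raw value carries a fixed offset
    "default": 20.0,     # used when no reading is available
    "range": (-40.0, 125.0),
}

def process(raw):
    """Take data in, apply the configured conversion, emit new data out."""
    if raw is None:
        return config["default"]
    value = raw * config["scale"] + config["offset"]
    lo, hi = config["range"]
    return min(max(value, lo), hi)   # clamp to the configured range

assert process(None) == 20.0                 # default applies
assert abs(process(3000) - 26.8) < 1e-9      # scaled and offset
assert process(10000) == 125.0               # clamped to range
```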
Putting It Into Silicon
All of this has to be handled at each node, and Echelon designed what it calls a Neuron chip to act as the network gateway for each node. The intent was a solution cheap enough to be integrated into every node. Echelon never actually sold the chips themselves; initially Motorola and Toshiba were the sources; since then Cypress has entered the picture, while Motorola has exited.
A Neuron chip actually contains three processors: two of them handle the network protocol, and one is available for use for the actual application. There is a ROM block that contains an implementation of the LONtalk protocol, simplifying the task of hooking up a node to the network. The chip includes processing memory and 11 general-purpose I/Os for use by the application. These are TTL levels (remember TTL?), some having as much as 20 mA pull-down capability, and some having programmable pull-ups.
It appears that the off-the-shelf chips are pretty much compatible with each other. But now FPGAs are getting into the mix. Altera recently announced joint work with Echelon, and some of Xilinx’s documentation indicates that their parts can be used in LONworks implementations. The Altera solution uses a NIOS processor in a Cyclone II or III device. Because FPGAs can’t drive the twisted pair directly, they have to go through Echelon’s FTXL Free Topology transceiver (that’s [Free Topology] transceiver, not Free [Topology transceiver]) to cover the “last inch.” Development is done through APIs and a protocol stack that Echelon provides; there are two versions, the full FTXL Developer’s Kit and the ShortStack Developer’s Kit, which supports lower speeds and fewer SNVTs – more appropriate for home implementations. The FTXL solution appears to be closely aligned with Altera’s technology.
While Ethernet clearly dominates a lot of networking, the ability to use cheaper wiring solutions – even the power lines themselves – as the network means there’s a chance that this technology can find its way beyond the factory and the corporate campus into other environments requiring simple control networking, and particularly, the home. As energy needs grow and costs drop, building intelligent homes becomes more attractive – energy monitoring is a rapidly growing application. And if that can really take off, both the traditional Neuron chip sellers and the FPGA guys will be salivating at the opportunity to come join us all in our living rooms.