posted by Bryon Moyer
Yesterday we looked at a number of different ways of inspecting wafers. Such inspections can be an important part of a process that turns out high yields of high-quality chips. They serve a couple of roles in this regard.
The most obvious is that you catch faulty material early. If rework is possible, you can then rework it; if not, well, you don’t throw good processing money after bad.
But the other reason is probably more important: by looking at wafers at various monitoring points, you get a sense of how the equipment is working. The wafer results act as a proxy for machine monitoring.
So… what if you could measure the machine directly?
That’s what CyberOptics is doing using an in-situ approach that they say is complementary to wafer inspection. They create “fake” wafers outfitted with sensors and feed them into the equipment. The equipment thinks they’re normal wafers and processes them; the sensors measure selected aspects of the setup and report back wirelessly in real time.
And they claim to be the only ones with this real-time capability. They say other approaches require manual “timestamping” of data that’s downloaded and analyzed after the processing is over. The Bluetooth connection to a nearby rolling host computer allows the data to be transmitted as it’s captured.
They have setups for measuring air particles, for leveling, for gap measurement (used with thin-film deposition, sputtering, etc.), and for vibration measurement, plus a “teaching” system that improves alignment.
Most recently they’ve announced new air particulate measurement platforms: a reticle version, which replaces not the wafer but the reticle in a lithography tool, and a smaller wafer version – 150 mm (6”, roughly). That last one might seem odd, since they say they’ve already got a 450-mm version, and bigger ones usually come later. But in this case, they had to reduce the size of the sensing and electronics to fit the smaller form factor.
Images courtesy CyberOptics
You can read more in their announcement.
posted by Bryon Moyer
We’ve got a number of ways of getting our devices to talk to each other. Some time back, I opined that Bluetooth Low Energy and WiFi seemed to have the edge, largely influenced by the burgeoning Internet of Things (IoT). Zigbee, meanwhile, seems to have more sway in the Smart Grid.
Well, some folks still aren’t happy with these options. There are three capabilities that are desirable, and yet none of the above standards can do all three:
- Low power (of course)
- Native IPv6 support
- The ability to mesh
WiFi is the only one that handles IP-based traffic, but it loses on the power front; Bluetooth can’t mesh natively (although a mesh product has been announced overlaying Bluetooth); and Zigbee doesn’t do IP natively.
Hence the Thread protocol. It’s built over 802.15.4, the low-cost, low-power physical layer and media access control layer that underlie Zigbee and some other protocols. It handles IPv6 via 6LoWPAN.
It appears to have originated out of Nest Labs (now Google), and they’ve assembled a group of other companies to promote the protocol. Most of the other names are familiar electronics guys – ARM, Freescale, Samsung, and Silicon Labs – but they also have a couple ThingMakers: Big Ass Fans (seriously) and Yale (think door locks).
Note that this isn’t about setting a standard: “promote” really is the right verb, since Thread is already shipping in Nest products. They’re going about this by putting together a certification program to ensure that all devices carrying the Thread designation pass muster. The certification program should be in place by the end of the year, with full availability early next year.
And what are the targets for Thread? Their site says, “… all sorts of products for the home.” They list specifically:
- Access control
- Climate control
- Energy management
Given that this is intended for non-technical consumers connecting Things in the home, they’ve also focused on ease-of-setup, via phone or computer or tablet.
You can find out more (and even participate) via their announcement.
posted by Bryon Moyer
If you’re building a Thing for the Things’ Internet (consumer edition – i.e., the CIoT), then, even though you may do your heavy computing work in the Cloud, you’ll still need something to make your Thing act more intelligent than the assembly of metal and plastic that it is.
Perhaps you’ll need it for management, perhaps for sensor fusion; it’s not likely to be a difficult computing challenge, but you’ll need something. To address this need, Microchip recently tossed a new PIC device into the fray: their PIC24F “GB2” family. Consistent with a growing IoT trend to integrate and make things simple, this one incorporates two critical elements for Thing computing: security and low power.
For security, they’ve built an encryption engine into the device, with one-time-programmable (OTP) key storage. Critically, the key is inaccessible to anything except the encryption engine itself, as shown in the drawing below. In fact, this version of the drawing is my Microchip-approved edit of their original drawing, which, for simplicity, showed both the key and the random number generator (RNG – note, this is true random, not pseudo-random) also hanging off the peripheral bus. That arrangement would be a big security hole.
(Click to enlarge)
Image courtesy Microchip
Programming the key is done… programmatically. (Duh!). That is, it’s not some separate port that you plug a programmer or something into; the CPU does it.
There is an example in their documentation showing code that would do this. I assume it’s for explanatory purposes only, since, in that code, the desired key is effectively defined as a constant. If you actually used that code, stored in the Flash, then the unsecured Flash could happily dish up the key to anyone wishing to explore the memory. But it got me thinking: exactly how do you program this OTP without exposing the key in the process?
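To make the concern concrete, here’s a toy sketch (mine, not Microchip’s code, and in Python rather than PIC firmware): if the key is compiled in as a constant, its bytes sit verbatim in the Flash image, and anyone who can dump that unsecured Flash gets the key along with everything else.

```python
# A hypothetical 128-bit key compiled in as a constant (illustrative value).
KEY = bytes.fromhex("00112233445566778899aabbccddeeff")

# Stand-in for a Flash image: code bytes with the key constant embedded,
# exactly as a compiler would place it.
firmware_image = b"CODECODE" + KEY + b"MORECODE"

# An attacker who dumps the unsecured Flash holds every byte of the key.
# The scan below just demonstrates that it's sitting there in the clear.
offset = firmware_image.find(KEY)
print(f"key bytes present in image at offset {offset}")
```

The point is simply that anything stored in readable Flash, key included, is part of what a memory dump delivers.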
I asked Microchip, and they provided a number of scenarios. Having digested that, it seems to me that there are three considerations:
- Does every unit end up with the same key, or does each unit have its own unique key?
- Where does the key come from?
- What happens to the code that does the programming?
Let’s take those in order.
Microchip recommends that, for best security, each unit have its own key. That way, any hacking (which is destructive) yields only the key to the now-destroyed part. A prize for your newbie hacker only.
One scenario where every unit ends up with the same key is securing boot code. The easiest way to do that is to encrypt the .hex file once with a single key and then use that image on all units. More complex approaches could allow unique keys, but then, for instance, the factory would need to keep a database of key/serial number pairs so that, if a customer requested an updated version, it could be sent encrypted with that unit’s individual key.
Of course, if you’re a customer and have to update several units, then you’d receive one update image per unit (vs. one total if they all had the same key). And you’d have to make dang sure that the right image went in the right unit!
That moves the point of failure, of course, to that database – how secure is it? An alternative is to generate the image encryption key by encrypting the serial number (which is unique per device) with a secret factory key that only developers know. That can be calculated on the fly in the factory, eliminating the database. But, of course, it also assumes no disgruntled employees will divulge the key. Not airtight.
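A sketch of that database-free scheme (all names are mine, and HMAC-SHA-256 stands in for “encrypting the serial number with the factory key” – any keyed one-way function gives the same property):

```python
import hmac
import hashlib

def unit_key(serial: bytes, factory_key: bytes) -> bytes:
    """Derive this unit's 128-bit image-encryption key from its serial number.

    HMAC here is a stand-in for the encryption step described above.
    """
    return hmac.new(factory_key, serial, hashlib.sha256).digest()[:16]

# The secret only developers know (hypothetical value).
factory_key = b"secret-known-only-to-developers"

# The factory (or an update server) recomputes any unit's key on the fly:
k1 = unit_key(b"SN-000123", factory_key)
k2 = unit_key(b"SN-000124", factory_key)

assert k1 != k2                                     # unique per unit
assert unit_key(b"SN-000123", factory_key) == k1    # reproducible on demand
```

Because the key is recomputable from the serial number alone, no key/serial database has to exist – but, as noted, everything now hinges on keeping that one factory key secret.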
Coming back to the three considerations, the source of the key is important. If you store it in the Flash code, as in the example, it can be read in the clear. If you deliver it via some communication from a host PC or something similar, it is traveling in the clear, and is therefore vulnerable.
You can certainly operate that way, but Microchip recommends something different: use the RNG. That way each unit generates its own unique key, and no one knows what that key is. It simply works. This is airtight (except for the newbie hacker and his pyrrhic victory).
Finally, the programming code. Here’s the scenario: when you program the OTP key, it also sets a bit saying that the OTP has been programmed. Once set, that bit can never be cleared. So the first time the unit is powered on, it may not have a key yet, and the first thing you want to do is quickly program the key. By checking for that flag, you know whether or not you need to program the key before moving on to the rest of the application.
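The flow above can be modeled like so (a toy sketch: the OTP cell and its flag are simulated with a dict, and Python’s `secrets` stands in for the chip’s true RNG – on real silicon these are hardware, and the names are illustrative):

```python
import secrets

# Simulated OTP storage: once key_programmed is set, it stays set.
otp = {"key_programmed": False, "key": None}

def program_otp_key(otp):
    """One-time key provisioning; does real work only on the very first boot."""
    if otp["key_programmed"]:
        return  # flag already set; the key can never be rewritten
    otp["key"] = secrets.token_bytes(16)  # unique key that no one ever sees
    otp["key_programmed"] = True          # on hardware, this bit never clears

def boot(otp):
    program_otp_key(otp)  # cheap flag check on every boot after the first
    # ...continue to the rest of the application...

boot(otp)                    # first boot: key gets programmed
first_key = otp["key"]
boot(otp)                    # later boots: flag is set, key untouched
assert otp["key"] == first_key
```

On every boot after the first, the call reduces to a single flag test before the application proceeds.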
Simple enough, but this is a one-time application. At the very least, you’ll be allocating scarce space, both in Flash and in RAM while executing, for a function (a few hundred bytes) that runs only once.
If that’s an issue, another option is to have two applications: one for programming the key and the other being the main application. Store them on separate Flash pages and run them separately (so they’re not in RAM at the same time). On first power-up, run the key programming code and then erase that Flash page, destroying that code and freeing the page up for other use. Then load the main application, and off you go.
So, as you can see, there are a number of ways of handling this, some more airtight than others, and, as with anything having to do with security, you can make it as complicated as you want.
As to that other critical IoT function, low power, the chip uses their “XLP” (Xtra Low Power) technology, with Idle, Doze, Sleep, and Deep Sleep modes that monkey with what’s on or off and the clock rate. In Deep Sleep mode, it can draw as little as 40 nA.
You can get more info in their announcement.
By the bye, at the same time, they also released a new Bluetooth module, the RN4020. While they already have modules for various other Bluetooth flavors, this one supports Bluetooth Low Energy (BLE). You can find more about it in their other announcement.