posted by Bryon Moyer
This is a special one that’s going out to my home boy Jim Turley, who has a special relationship with Cloud Computing. He has a way of poking holes in one of the current darlings of technology that is kind of undeniably persuasive. He makes you want to shout, as he shouts, “Testify!” – even me, and I’ve been somewhat more optimistic about the possibilities of the cloud – I’ve even worked for a company with a cloud-computing model (which has since pulled out of the cloud). (And I always like the emperor-has-no-clothes shouters – when they’re right, or partly right, anyway…)
Most of our cloud discussions have had to do with design tools. You know, using the cloud for peak usage and such. Which, as Jim has pointed out, feels very much like a trip back to the 70s and 80s. We have also talked about content in the cloud. But here’s a new one, as tossed out in a Wind River keynote at this week’s Multicore DevCon in Santa Clara: distributing your embedded code over the cloud. No, not like sending it to people: literally distributed computing – part of your software on your system, part in the cloud running an RTOS.
Yeah, you saw that right: real time.
Here’s the crux of what makes this remotely feasible: latency has dropped dramatically. Actually, there are two kinds of latency. The first I’ll call spin-up latency, and that’s the time it takes to get a system going. Back when I was involved in this, it took a good five minutes or so to get a machine ready to run. That meant that, from a farm standpoint, in order to give users reasonable response, you always had to have an idle machine warmed up ready to allocate. Once it got allocated, then you needed to spin another one up. Waiting five minutes would be totally unacceptable to a user.
This spin-up time is apparently much lower these days; no machines need to idle in the background like trucks at a truck stop while the driver grabs a sloppy joe.
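That old keep-one-warm-spare policy is simple enough to sketch. This is a toy model with made-up names, not anyone’s actual farm code – the point is just the bookkeeping: hand over the warm machine instantly, and start warming a replacement the moment you do.

```python
# Toy sketch of the old warm-pool policy: always keep one machine spun
# up, and start warming a replacement as soon as the warm one is
# allocated. All names are illustrative.

class Farm:
    def __init__(self):
        self.warm = ["machine-0"]  # one pre-warmed spare, ready to hand out
        self.next_id = 1

    def _spin_up(self):
        # Stands in for the (back then, five-minute) boot of a fresh machine.
        machine = f"machine-{self.next_id}"
        self.next_id += 1
        return machine

    def allocate(self):
        if self.warm:
            machine = self.warm.pop()   # instant handoff: already warm
        else:
            machine = self._spin_up()   # no spare: user eats the full wait
        # Keep a spare warming for the next user.
        self.warm.append(self._spin_up())
        return machine

farm = Farm()
print(farm.allocate())  # "machine-0": no wait, the spare was warm
```

With a five-minute spin-up, the empty-pool branch is exactly the “totally unacceptable” case – which is why a machine always had to idle.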
Then there’s simple communication latency during operation. And this has also gotten much better, apparently. This, aided by technologies like KVM (kernel-based virtual machine), is making it feasible, or potentially feasible in the not-too-distant future, to run real-time functions in the cloud. Seriously.
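To put rough numbers on why latency is the crux: the question for any remote real-time function is whether one network round trip plus the remote compute time fits inside the loop’s deadline. A back-of-the-envelope sketch – all figures below are assumptions for illustration, not anyone’s measured numbers:

```python
# Back-of-the-envelope check: can a periodic real-time task tolerate
# cloud round-trip latency? All numbers are illustrative assumptions.

def remote_loop_feasible(period_ms, rtt_ms, compute_ms, jitter_ms):
    """True if a network round trip plus remote compute time,
    worst case (including jitter), fits inside the loop period."""
    worst_case_ms = rtt_ms + jitter_ms + compute_ms
    return worst_case_ms <= period_ms

# A 100 Hz control loop has a 10 ms budget; a 2 ms round trip with
# 1 ms jitter and 3 ms of remote compute fits...
print(remote_loop_feasible(period_ms=10, rtt_ms=2, compute_ms=3, jitter_ms=1))   # True

# ...but a 40 ms wide-area round trip does not.
print(remote_loop_feasible(period_ms=10, rtt_ms=40, compute_ms=3, jitter_ms=1))  # False
```

The hard part for real time isn’t the average round trip, of course – it’s the jitter term, which is exactly what has to be bounded before anyone signs off on it.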
This seems, well, surprising, but, then again, there are lots of things I wouldn’t have believed possible that I now take for granted, so perhaps I’m just an old codger. The other thing, of course, is that you have to convince your customer that your system won’t have any issues with ¾ of its code running in the cloud. Would love to see the safety-critical folks approve that one!
I will be watching with riveted attention to see how this plays out.
Hey Jim, waddaya think? Have we finally found a use for the cloud that you like?
(Heck, not just Jim – what do the rest of you think?)
posted by Bryon Moyer
We’ve seen the move from APs managing sensors to MCUs acting as sensor hubs to integration of sensors with MCUs, as with ST. Well, Freescale has now jumped in as well, integrating an accelerometer with a 32-bit Coldfire V1 MCU into what they’re calling their Xtrinsic Intelligent Motion platform.
Given the number of IMUs out there integrating accelerometers, gyros, and magnetometers, I asked why they were going with an accelerometer only. They said that, frankly, in earlier days, customers didn’t seem interested in sensor hubs. They’re coming around, and sensor hubs have clearly made their way into phones. Now industrial and other customers are starting to take note as well – but they’re not so much interested in the gyros and magnetometers.
That’s not to say you can’t use them; you can add other sensors into the hub via their I2C/SPI connectivity.
One of the other challenges they see for users is the fact that most sensor hub environments are closed, limiting your choice of sensors and software. They’re trying to keep things as open as possible, allowing you to integrate whatever other sensors and software you want from any vendors you choose. Freescale will provide free software, but you’re not locked in.
You can find more information in their release.
posted by Bryon Moyer
As a newly-developed process is prepared for delivery into the production world, one of the last things that has to happen to effect a transition is the development of compact models for use in design simulation. And, of course, these days, such a model must account for process variations, which means covering the wide range of corners that a process can have to capture the statistics.
We’ve seen some of this before with Solido, but according to newcomer GSS, which my colleague Dick Selwood explored at some length back when GSS was getting started, most other tools model processes as Gaussian, and the world isn’t actually Gaussian. Also, the existing tools help with simulation, but they don’t create a model.
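A toy illustration of why the Gaussian assumption matters (this is not GSS’s method – just a sketch with made-up numbers): if a device parameter is actually skewed, a Gaussian fitted to the same mean and standard deviation can badly underestimate how often the parameter strays far from nominal – and those far excursions are exactly where rare yield failures live.

```python
import random
import statistics

random.seed(0)

# Pretend a device parameter is actually log-normal: skewed, with a
# heavy upper tail. Distribution and numbers are made up for illustration.
samples = [random.lognormvariate(0.0, 0.5) for _ in range(200_000)]

mu = statistics.fmean(samples)
sigma = statistics.stdev(samples)

# Fraction of real samples beyond mean + 3 sigma...
cutoff = mu + 3 * sigma
empirical_tail = sum(s > cutoff for s in samples) / len(samples)

# ...versus what a Gaussian with the same mean/stdev predicts (~0.135%).
gaussian_tail = 1 - statistics.NormalDist(mu, sigma).cdf(cutoff)

print(f"empirical tail: {empirical_tail:.4%}")
print(f"Gaussian tail:  {gaussian_tail:.4%}")
# The skewed distribution puts several times more mass out in the tail
# than the Gaussian model predicts.
```

Scale that mismatch up to the millions of bit cells in an SRAM and you can see how a Gaussian-only statistical model could miss a yield problem entirely.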
Well, GSS has now released their tool. It works as a “wrapper” to a simulator, as these tools are wont to do. So it doesn’t do the actual simulation; you pick whatever simulator you want to work with, along with the circuit and model strategy, and the GSS tool creates the corners to be run – on the order of a couple thousand runs. Because of the independence of the runs, this scales perfectly with additional processors; you specify the number of processors and it handles the load management.
It then builds a model with statistical features that no other models have; the model itself has time-dependent capabilities for characteristics that evolve over time. In this manner, they say that it bridges TCAD and production EDA in a way that no other tool does.
They announced their tool in a somewhat indirect fashion, focusing on the possibility of unforeseen SRAM yield issues at the 20-nm node. You can see more about the details of that discussion in their release.