posted by Bryon Moyer
Last year we took a look at a couple of proposals for universal processes from Teledyne DALSA and CEA-Leti that could be used to make many different MEMS elements, trying to move past the “one product, one process” trap. We’ve also reported on the AMFitzgerald/Silex modular approach and their first device.
Well, the first design using CEA-Leti’s M&NEMS process has rolled out: a single MEMS chip with three accelerometers and three gyroscopes designed and implemented by Tronics. They’re not quite the smallest of the 6-DOF sensors, but they claim that, with more optimization, they will be. Right now their die size is 4 mm². And they say that all main parameters are on track with their simulation models.
But this is just the first functional version; they’re going back to work some more while, at the same time, giving it a companion ASIC, releasing them at the end of this year.
They’re also using the same process to create a 9-DOF sensor set, with all of the sensors on a single MEMS chip, also for release at the end of the year. And the idea is that, if they wanted to, they could also include a pressure sensor and a microphone, since those can presumably be made on this same process as well. Yeah, you might wonder whether integrating a microphone with those other sensors has value; even if it doesn’t, being able to make it separately using the same process as the n-DOF chip still brings a huge level of manufacturing simplification.
These efforts, if successful, could represent a fresh breath of efficiency for some of the oldest sensors in the MEMS world. The industry also has new MEMS elements in the works, like gas sensors and such. If a standard process like this could be used for new sensors as well, then at some point new sensors could launch on standard processes rather than having to do the “one process” thing first like accelerometers and their ilk have done.
There are those who believe that these standard processes are too restrictive to allow the design of sensors with arbitrary characteristics. We’ll continue to keep an eye on this stuff to see whether these common-process skeptics can eventually be appeased or whether they’ll be proven correct.
Check out the details in Tronics’s release.
posted by Dick Selwood
OK – not the most attention-grabbing headline, but it is what Atego is calling the total revamp of its modelling tools. System modelling is one of those techniques that has been around for a long time but, outside a group of high-end companies, mainly in aerospace and automotive, has never really taken off. Atego is hoping to change this. Atego was formed by the merger of Artisan and Aonix, both system development tools suppliers. Since the merger the company has made a number of acquisitions of specialist tools. Today they are launching Vantage, which is not just a relabeling of their tools but a complete rework to provide a portfolio for the complete lifecycle of complex products, such as cars or trains.
There are three threads to the approach: Model-based Systems and Software Engineering (MBSE), Asset-based Modular Design (SoS/CBD/SOA), and variable Product Line Engineering (PLE). Atego claims that the combination of these three proven approaches into Model-based Product Line Engineering (MB-PLE) can reduce development costs by 62% and bring 23% more projects in on time.
This is an implementation of the ISO 26550:2013 ‘Software and systems engineering – Reference model for product line engineering and management’ and ISO 15288 ‘Systems and software engineering – System life cycle processes’ standards.
The detail of each of these three threads is a bit of an alphabet soup. But, simplifying to the point of caricature, what modelling now provides is better reuse: the model accommodates a range of modules, each of which may be one of a number of variants. One set of variants might be a car’s different engine options, another set the gearbox options. The power-train model is a combination of engine and gearbox, but certain combinations will be illegal. The modelling also provides a requirements capability and the ability to match the requirements of specific standards – in automotive, again, this is ISO 26262.
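The engine-and-gearbox example above can be sketched in a few lines. This is a toy illustration of the variant-constraint idea, not Atego’s actual tooling or data model; the engine and gearbox names and the illegal pairings are all hypothetical.

```python
from itertools import product

# Hypothetical variant sets for a car's power train (illustrative names only).
engines = ["1.2L petrol", "2.0L diesel", "electric"]
gearboxes = ["manual", "automatic", "single-speed"]

# Combinations the product line declares unbuildable (also hypothetical).
illegal = {
    ("electric", "manual"),
    ("electric", "automatic"),
    ("1.2L petrol", "single-speed"),
    ("2.0L diesel", "single-speed"),
}

# The legal power-train variants are the cross product minus the illegal pairs.
valid_power_trains = [
    (e, g) for e, g in product(engines, gearboxes) if (e, g) not in illegal
]

for engine, gearbox in valid_power_trains:
    print(f"{engine} + {gearbox}")
```

Real PLE tools express such constraints declaratively in a feature model rather than filtering a cross product, but the effect is the same: configurations that violate a constraint simply never appear in the variant space.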
This is not modelling for the masses, but people working on complex systems, with variants and a long product life, should certainly look closely at this approach. Everyone should take time to look at the new video on the Atego web site: http://www.atego.com/downloads/videos/introducing-ategos-mb-ple.mp4
posted by Bryon Moyer
In the first major emulator news since Synopsys gobbled up EVE, Synopsys announced the next generation of the EVE platform, ZeBu 3. And, as with pretty much any emulator story, the top line has to do with capacity and performance: how much design can I cram in there and how fast will it go?
They claim industry-leading 3 MHz (with one example going as high as 3.5 MHz), as compared to what they say is a competition range more around 1-1.5 MHz (I’ll let the comps comment on whether or not that’s a representative number). As to capacity, you can stitch up to 10 of their boxes together for a total of 3 billion gates.
They also mention a number of different use modes for emulation, which are morphing as capabilities both inside and outside the emulator evolve. One in particular caught my eye because of how it contrasts with past usage.
Once upon a time, a significant use model for an emulator was to accelerate simulation. If there was a piece of the hardware that was taking too long to simulate – and in particular if it didn’t need simulator-level observability (remember: in a simulator, you can theoretically access every node; in actual hardware, you can only access those nodes that have been provisioned for access) – then you could implement that function in hardware and have the simulator call it as needed.
That ended up shining the spotlight on a significant bottleneck: handing off the function to the emulator, which required specifying pin-level signals across the interface. This led to the development of the transaction-based SCE-MI 2 interface, which abstracted away the detailed pin-level interface, making it all go so much faster.
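The win from transaction-level handoff is easy to see with a toy model. Neither function below is the real SCE-MI 2 API (which is defined for C and SystemVerilog); both just count how many times the host and emulator would have to synchronize to move the same message.

```python
def send_pin_level(payload: bytes) -> int:
    """Pin-level model: drive one bit per emulated clock, one crossing per bit."""
    crossings = 0
    for byte in payload:
        for bit in range(8):
            _ = (byte >> bit) & 1  # each bit would be a separate pin wiggle
            crossings += 1
    return crossings

def send_transaction(payload: bytes) -> int:
    """Transaction-level model: hand the whole message across in one crossing."""
    return 1

message = b"read addr=0x1000 len=64"
print(send_pin_level(message), "crossings at pin level")
print(send_transaction(message), "crossing at transaction level")
```

Each host/emulator crossing carries fixed synchronization overhead, so collapsing hundreds of per-cycle pin events into one transaction is where the speedup comes from.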
That’s all old news. As emulator capacity and speed have improved, the focus has moved more to acceleration of software execution in SoCs. Not only does the emulator execute the software more quickly than a simulator can, features like save and restore can allow you to capture the state, say, after boot-up, and start there rather than having to go through the entire boot sequence every time. Yes, you could theoretically do this with simulation as well, but simulating software just takes too long.
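The save-and-restore idea can be sketched as follows. This is a hypothetical stand-in, not how ZeBu implements it: the `DesignState` class, its fields, and the boot stand-in are all invented for illustration.

```python
import copy

class DesignState:
    """Hypothetical container for the full state of the emulated design."""
    def __init__(self):
        self.registers = {}
        self.booted = False

def boot(state: DesignState) -> None:
    # Stand-in for the long boot sequence you only want to pay for once.
    state.registers["pc"] = 0x8000_0000
    state.booted = True

def save(state: DesignState) -> DesignState:
    return copy.deepcopy(state)       # checkpoint the full design state

def restore(checkpoint: DesignState) -> DesignState:
    return copy.deepcopy(checkpoint)  # start a fresh run from the checkpoint

cold = DesignState()
boot(cold)                            # run the boot sequence once
checkpoint = save(cold)

run1 = restore(checkpoint)            # each subsequent test starts post-boot
run2 = restore(checkpoint)
```

Every run after the first starts from the post-boot checkpoint instead of re-executing boot, which is exactly the time saved when the boot sequence dominates the test.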
So we’ve gone from mostly verifying by simulation (on a PC) to doing much more of the verification on an emulator, now that it’s big enough. But you know… we’re never satisfied, are we? Give us an inch, and we want another inch. Yes, we can run software fast, but we don’t care about all of the software, or perhaps we don’t care about all of it in as much debug detail. Believe it or not, this software is taking too long to run on the emulator.
So what to do? How about running it on a virtual platform? Virtual platforms abstract away the low-level execution details, and so they can run much faster. So now, in a complete role reversal, the emulator can offload software execution to a PC running a virtual platform, which acts as an accelerator for the emulator – the very same emulator (or a bigger, faster version) that used to be an accelerator for the PC doing simulation. Synopsys refers to this as “hybrid mode,” one of the various use modes that ZeBu 3 supports.
What goes around…
You can get more details on all of those modes as well as the other speeds and feeds in their release.