posted by Bryon Moyer
Consistent with a move towards standards in MEMS, NIST has released two reference chips that fabs can use to cross-check their measurement techniques. There are several critical parameters unique to MEMS, and five of them are captured by this 5-in-1 reference. NIST has done its own measurements, and the idea is to replicate the results they got. The chips are available for sale ($1735), along with the results for comparison.
The five tests covered are:
- Young’s modulus
- Residual strain
- Strain gradient
- Step height
- In-plane length
There are two versions of the chip: one (RM 8096) uses a CMOS process with bulk micromachining, meaning that the structures are etched out of the bulk silicon. The other (RM 8097) uses surface micromachining to build the structures out of deposited polysilicon.
Each chip has additional test structures for things like tensile strength and line widths.
posted by Bryon Moyer
Wizards are becoming more and more prevalent. Lest you’re concerned that Dumbledore’s relatives are coming to exact revenge, fear not: we speak here of wizards in the GUI (as opposed to gooey) sense. Yes, there are bastions of holdouts that cling to command line interfaces as a measure of their hacker bona fides, but there are solid reasons to like a well-designed wizard.
And “well-designed” is the operative phrase here. You may think of a wizard as no more than a way to simplify processes that could just as effectively be done using the command line if only you were boss enough to remember all the arcane, intentionally obscure commands required to get stuff done. And in some cases, such automation is the goal. But the potential goes beyond that: it’s an opportunity for a world-view transformation.
This is a favorite old topic of mine, but it was refreshed for me while watching a Movea SmartFusion Studio demo: when assembling a sensor fusion algorithm, a wide variety of filters are made available. And I thought, “How do you know which filter to pick??”
Now, you could easily argue that, if you don’t know your filters, then you have no business getting involved in sensor fusion. Perhaps. But really, a designer is interested in a behavior, not necessarily in knowing the details of how that behavior is implemented in some specific algorithm or circuit.
This became really clear to me several years ago on a consulting project where I was designing and prototyping a wizard for a piece of communication IP. The IP was very flexible, so there were lots of options that the user, who would be a system designer, could tweak. The obvious first approach to the wizard was simply to provide option fields for the user to fill in.
Being a communication protocol, it had FIFOs for elasticity; the user could dial up how big those FIFOs were to be. So I put a text field there for the size of the FIFO. But I asked the designers of the IP, “How should the user figure out how big the FIFO should be?” My first thought was that this information would be useful in the user manual. (Stop laughing… I’m sure someone reads those…)
They answered that the user would decide how many packets they wanted to buffer; that and the selectable packet size would determine the FIFO size. Simple enough.
But then I thought, “Wait, why are we making the user of this wizard do some paper-and-pencil calculations before going back to the computer? What if, instead of asking for the FIFO size, we asked for the number of packets?” The wizard already had the packet size somewhere else, so it then had all the information needed to calculate the FIFO size. No paper or pencil required.
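The arithmetic the wizard does behind the scenes is trivial; the point is who does it. A minimal sketch of the idea (the function and field names here are hypothetical, not from the actual wizard):

```python
def fifo_depth(num_packets: int, packet_size_bytes: int, word_bytes: int = 4) -> int:
    """Derive FIFO depth in words from user-level inputs.

    The user says how many packets they want to buffer; the wizard
    already knows the packet size from another field, so it can
    compute the FIFO size itself -- no paper and pencil needed.
    """
    total_bytes = num_packets * packet_size_bytes
    # Round up to a whole number of FIFO words.
    return -(-total_bytes // word_bytes)

# e.g., buffering 4 packets of 1500 bytes into a 4-byte-wide FIFO:
print(fifo_depth(4, 1500))  # 1500 words
```

The design choice is the same one the anecdote makes: the text field asks a question in the user’s vocabulary (packets), and the tool translates it into the implementation’s vocabulary (FIFO words).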
posted by Bryon Moyer
Once you’ve got a circuit that you think is what you want, you have to make sure it works across the entire range of conditions and scenarios to which it might get exposed in real life. So you need to set off a suite of SPICE simulations to get that confirmation. You’ll execute a matrix of combinations and, if everything looks copacetic, you’re good to go.
Doing this, often a series of command-line instructions, can be laborious, so Berkeley Design Automation (BDA) has released ACE, a tool for automating this process of setting up all the runs – and repeating them in the likely event that your first go at it turns up some issues.
But this has a very familiar sound: tools like Solido and GSS and such also wrap a simulator and run a wide range of simulations under different conditions. It feels even mushier when you see that they can both do Monte Carlo analysis. So I asked the BDA guys about this to help establish a more concrete distinction.
Here’s the deal: ACE, and characterization in general, subjects your circuit to a known set of corners and scenarios and answers the question, “Does it work?” The other tools are used both to determine what the corners of an analog circuit are and to answer the question, “How can I optimize my circuit?” Characterization is more of a yes/no thing; optimization, of course, allows you to re-center your design or make other changes to improve yield.
Or at least that’s how BDA defines the distinction. (If you think otherwise, please do comment…)
You can find out more about BDA’s ACE in their release.