posted by Bryon Moyer
Any new foundry would want to grow up to be a megalith like TSMC, right? Isn’t that how you prove you’ve “made it”? Well, not if you’re Novati. They’re a different sort of foundry, one you don’t hear about so often over the noise of the Big Guys.
Here’s the thing: when you’re in the foundry mainstream, you do one thing: you chase Moore’s Law and try to keep it going. You figure out what the masses want, and you trim everything extraneous away so that you can sate the masses in enormous volumes at competitive costs.
But what if you’re in the market for something that can’t be made using the techniques that suit the masses? That’s where smaller… ok, I’m going to use the dreaded word (investors: please cover your ears): niche players can find plenty of business, even if, by so doing, they can maybe achieve only kilolithic or decalithic status.
I met with them at Sensors Expo. Sensors are a typical opportunity for a more flexible fab, since they may use unusual techniques and materials, and each one may be slightly different, making it hard to put everyone onto one high-volume recipe.
Novati does CMOS and MEMS (particularly silicon microfluidics) – jointly and severally. When jointly, with both on the same wafer, they typically do MEMS-last, placing the MEMS elements above the CMOS circuitry. They can do this either by growing more silicon epitaxially over the CMOS or by stacking a separate wafer.
They also work on silicon photonics projects and 2.5D (silicon interposer) and 3D integration.
Most of what they do leverages a common set of equipment (largely for 200-mm wafers, with some 300-mm ones), but where the diversity really comes in is with materials. They can work with 60 different elements – far more than would be found in your average foundry.
Most foundries want to keep the number of elements they allow through the door to the absolute minimum. A new material, if not handled carefully, brings with it the risk of unexpected contamination with potentially calamitous results – something that’s just not worth messing with if you’re spinning oodles of wafers an hour.
But smaller guys need to be more flexible, and a willingness to work with more materials can be a boon to developers trying new ideas. Gold is the one element that Novati is particularly careful with: they segregate it in a separate room. For all the others, they study each candidate material and develop specific protocols to ensure that it goes only where they want it to – which may be a layer just a few atoms thick, laid down by atomic layer deposition (ALD).
Once a project gets to production volumes, they can handle it to an extent, but they may also hand off to a partner that can handle higher volumes. Of course, if the volume production involves odd materials, then they’ll need to work with someone willing to handle that material.
As with any business, there’s always opportunity on the fringes of the mainstream. In this case, they’re entertaining many of those opportunities; they’re just being careful not to step on Moore’s toes.
You can find out more on their site.
(Image courtesy Novati)
posted by Bryon Moyer
You may recall that PNI Sensor has a sensor hub called SENtral. It represents a unique partitioning between hardware and software intended to lower its power and size. Its focus was primarily motion-oriented sensors, which, at the time, were the bulk of what system designers were paying attention to.
Since then, Google has issued its sensor requirements for Android 4.4 (KitKat). The spec requires very specific sensors, some of which are actual physical sensors, and others of which are “virtual” sensors – fused out of data from the real sensors. A step counter is an example of a virtual sensor: there is no physical step counter in any device, but the information from the inertial sensors can be combined to create one.
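To make the step-counter example concrete, here’s a toy fusion sketch: a threshold-and-debounce pass over raw accelerometer magnitudes. The threshold and gap values below are illustrative guesses, not anything PNI or Google specifies, and a real pedometer would be considerably more sophisticated.

```python
import math

def count_steps(accel_samples, threshold=10.8, min_gap=10):
    """Toy "virtual" step counter fused from raw accelerometer data.

    accel_samples: list of (ax, ay, az) readings in m/s^2.
    threshold: magnitude a sample must exceed to register a step
               (illustrative value, not a tuned constant).
    min_gap: minimum number of samples between steps, to debounce.
    """
    steps = 0
    last_step = -min_gap  # allow a step on the very first sample
    for i, (ax, ay, az) in enumerate(accel_samples):
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold and i - last_step >= min_gap:
            steps += 1
            last_step = i
    return steps
```

The point isn’t the algorithm itself; it’s that the “sensor” the OS sees is purely a computation over real sensor streams.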
So PNI Sensor has updated their SENtral hub to meet the KitKat requirements; they call it SENtral-K. It supports more sensors than their original version did, meeting the list that Google has sent down. Some of what the -K version does could have been done in the older one by adding new functions in the RAM space; this new version implements the functions in the ROM space.
One of their focuses is on what they call “simultaneity.” The idea is that it takes time to do the calculations required for the virtual sensors, and yet Android makes no allowance for that latency. Heck, it thinks it knows which sensors are real and which are virtual, but in fact it doesn’t. (The gyroscope, for example, could be a “soft gyro” computed from other sensors.)
What that means is that, if you’re sampling your real sensors at 100 Hz, then KitKat expects all sensors – real or virtual – to be available at 100 Hz, so the calculations had better be fast enough to keep up. Yeah, they’re not rocket science, but we’re talking tiny platforms drawing as little power as possible, making the burden non-trivial.
That power is lowered by implementing many of the fusion algorithms in hardware. They claim to be the lowest power, at least against microcontroller-based sensor hubs, with under 200 µA at 1.8 V, which is 360 µW. That would appear to be higher than QuickLogic’s claimed 250 µW (yes, that’s for their wearable version, but it’s the same hardware as the KitKat version – just different libraries), but it’s an order of magnitude less than what they show for Cortex-based hubs.
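The power figure is just P = I × V arithmetic, easy to sanity-check:

```python
# Sanity check of the claimed figure: 200 uA at a 1.8 V supply.
current_a = 200e-6        # 200 microamps, in amps
voltage_v = 1.8           # supply voltage, in volts
power_uw = current_a * voltage_v * 1e6  # power in microwatts
# power_uw comes out at 360 uW, matching the article's figure
```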
The other KitKat requirement they meet is that of “batching.” In and of itself, that term isn’t particularly helpful, since I can imagine a number of ways of batching sensor data. A conversation with PNI’s George Hsu clarified Google’s intent, and it wasn’t one of the scenarios I had envisioned.
The idea is that the real sensors, from which all the virtual sensors are determined, should be buffered for some amount of time – like 10 s or so (there’s no hard spec on the time; it’s left to designers to do the right thing for their systems). If something goes wonky with the calculation and the application processor (AP) sees a sensor value that it finds suspect, it can actually go back to the original sensors, grab the historical raw data, and redo the calculations itself to confirm or correct the suspect values.
SENtral buffers five sensor streams: the accelerometer, the gyroscope (with and without bias correction), and the magnetometer (with and without offset correction). The buffer size is flexible; it uses RAM, and so the available RAM must be allocated between buffers and any other functions using the RAM.
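A minimal sketch of that batching idea – a fixed-depth ring buffer of timestamped raw samples that the AP can query when it wants to redo a calculation – might look like the following. This is an illustration of the concept only, not PNI’s implementation, and the class and method names are made up.

```python
from collections import deque

class RawSensorBuffer:
    """Illustrative ring buffer for batching raw sensor samples.

    Holds roughly `seconds` of history at `rate_hz`, discarding the
    oldest samples as new ones arrive -- so an application processor
    that distrusts a fused (virtual) sensor value can fetch recent
    raw data and redo the fusion calculation itself.
    """
    def __init__(self, rate_hz=100, seconds=10):
        self.buf = deque(maxlen=rate_hz * seconds)

    def push(self, timestamp, sample):
        # Append a new reading; the deque silently drops the oldest
        # entry once the buffer is full.
        self.buf.append((timestamp, sample))

    def history(self, since):
        # Return all buffered samples at or after time `since`.
        return [(t, s) for (t, s) in self.buf if t >= since]
```

Sizing the deque is exactly the RAM trade-off mentioned above: a deeper buffer means more history for the AP but less RAM left for everything else.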
Oh, and they go to pains to point out that this thing is small. (I’ve seen it; it’s small.)
Image courtesy PNI Sensor
You can find more in their announcement.
posted by Bryon Moyer
Silicon chips and the packages that house them have been steadily drawing closer to each other over the years. There are so many pins on individual dice now – and multiple dice are going into single packages. Optimizing which bumps from which dice go to which pins is a non-trivial project.
Part of the problem is that package design and die design have traditionally belonged to different departments using completely different tools that don’t talk to each other. That’s left engineers using Excel and such to try to visualize and plan pinouts.
The bulk of this isn’t changing – there is, as far as I know, no ubertool coming that includes both silicon and package design. But what can change is the means of planning the pinout – going to something more robust and efficient than Excel.
The same problem exists, by the way, with board design. Obviously, production board design is a process completely independent of die design; the die is designed once and then used on any number of boards. But optimizing board routing can also be challenging. Not to mention that doing some trial PCB layouts when planning the die isn’t a bad idea either.
Making this easier is the goal of Cadence’s new OrbitIO tool. It allows visualization and planning of signal pinouts. Because it couches its results as routing instructions and constraints, it’s a more dynamic way of planning; changes can be made with less work.
Once planned in OrbitIO, the results get pushed down into silicon design tools – Encounter or Virtuoso – in the form of a LEF/DEF die abstract file and to their multi-die package design tool, SiP-XL, via package data. They also get pushed to Allegro PCB on the board side, meaning that the die pinout’s effect can be evaluated all the way through to trial PCB layouts.
Image courtesy Cadence
Evaluating whether a layout is “good” is partly a visual judgment, but the tool also provides route lengths and the number of routing layers as figures of merit. Note, however, that this analysis goes only as far as the pad ring on the die. Once planned, the effects of the pinout can be analyzed in the silicon design tool based on the data OrbitIO pushes to it.
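As a rough illustration of what a length-based figure of merit might look like – not Cadence’s actual metric – one could score a candidate pinout by the total Manhattan distance between die bumps and their assigned package pins. All names and coordinates below are invented for the example.

```python
def pinout_length_merit(bumps, pins, assignment):
    """Toy figure of merit for a pinout plan: total Manhattan
    distance between each die bump and its assigned package pin.

    bumps, pins: dicts mapping names to (x, y) coordinates.
    assignment: dict mapping bump name -> pin name.
    Lower is better; a planner would compare this score across
    candidate assignments.
    """
    total = 0.0
    for bump, pin in assignment.items():
        bx, by = bumps[bump]
        px, py = pins[pin]
        total += abs(bx - px) + abs(by - py)
    return total

# Hypothetical two-signal example:
bumps = {"clk": (0, 0), "d0": (1, 0)}
pins = {"P1": (3, 4), "P2": (1, 1)}
score = pinout_length_merit(bumps, pins, {"clk": "P2", "d0": "P1"})
```

A real tool also weighs layer count, crossings, and escape-routing feasibility, which a pure length metric ignores.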
OrbitIO is used most effectively prior to die floorplanning – it becomes an input to that floorplanning process. By handing data back and forth, the tools eliminate some of the tedious and error-prone steps that Excel and other hacks require. The pinout then helps drive routing internally.
For multi-die packages, OrbitIO can work with dice in design, where pinouts can theoretically still be moved about, or with fixed dice – say, for a memory chip that’s being included in the package. The memory chip has no flexibility – it is what it is, so the tool needs to accommodate that.
If pinout is planned or changed after much of the die layout is in place, then the silicon tool can help evaluate the impact of the pinout on the die layout, and iteration is likely to find the best compromises.
You can find out more in Cadence’s announcement.