Jul 24, 2014

QuickLogic Goes Wearable

posted by Bryon Moyer

We’ve looked at QuickLogic’s sensor hub solution in quite some detail in the past. It’s programmable logic at its heart, but is sold as a function-specific part (as contrasted with Lattice, who sells a general-purpose low-power part into similar applications). QuickLogic recently announced a wearables offering, which got me wondering how different this was from their prior sensor hub offering.

After all, it’s really kind of the same thing, only for a very specific implementation: gadgets that are intended to be worn – gadgets that are battery-powered and require the utmost in power miserliness to be successful.

You may recall that QuickLogic’s approach is an engine implemented in their programmable fabric. They’ve then put together both a library of pre-written algorithms and a C-like language that allows implementation of custom algorithms; in both cases, the algorithms run on that engine. So the question here is, did the engine change for the wearable market, or is it just a change in the algorithms?
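For flavor, here’s a rough idea of what such an algorithm might look like if written in plain C rather than QuickLogic’s own algorithm language – the names and thresholds below are invented for illustration and are not taken from their library. It’s a trivial tap detector that watches accelerometer magnitude:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical sketch only: a trivial tap detector of the kind a custom
 * sensor-hub algorithm might implement. Names and thresholds are invented;
 * QuickLogic's actual algorithm language and library are not shown here. */

#define TAP_THRESHOLD_MG  1500   /* spike above ~1.5 g suggests a tap      */
#define TAP_QUIET_SAMPLES 10     /* quiet samples required before re-arming */

static int quiet_count = 0;

/* Called once per accelerometer sample; returns true when a tap is seen. */
bool tap_detect(int16_t ax_mg, int16_t ay_mg, int16_t az_mg)
{
    /* Use a squared-magnitude proxy (no sqrt) to stay power-friendly. */
    int32_t mag = (int32_t)ax_mg * ax_mg +
                  (int32_t)ay_mg * ay_mg +
                  (int32_t)az_mg * az_mg;
    int32_t thresh = (int32_t)TAP_THRESHOLD_MG * TAP_THRESHOLD_MG;

    if (mag > thresh && quiet_count >= TAP_QUIET_SAMPLES) {
        quiet_count = 0;          /* disarm until motion settles */
        return true;
    }
    if (mag < thresh)
        quiet_count++;
    return false;
}
```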

[Figure: QuickLogic sensor hub architecture. Image courtesy QuickLogic]

I checked in, and they confirmed that the engine has not changed – it’s the same as for the general sensor hub. What they have done is focus the libraries on context and gesture algorithms most applicable to the wearables market.

Some time back, we looked at how different sensor fusion guys approach the problem of figuring out where your phone is on you. A similar situation exists for wearables, both in classifying what the wearer is doing and in determining the gadget’s relationship to the wearer. QuickLogic’s approach supports six different states (or contexts): walking, running, cycling, in-vehicle, on-person, and not-on-person.

They’ve also added two wearable-specific gestures for waking the device up either by tapping it or by rotating the wrist.

Critically, they do this with under 250 µW when active.
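For a sense of what the host-side view of this might look like, here’s a hypothetical C sketch. The six contexts and two wake gestures mirror QuickLogic’s description above, but the type names, handlers, and callback scheme are invented for illustration:

```c
#include <stdio.h>

/* Hypothetical host-side view of the hub's outputs. The six contexts and
 * two wake gestures match the description above; everything else is invented. */

typedef enum {
    CTX_WALKING, CTX_RUNNING, CTX_CYCLING,
    CTX_IN_VEHICLE, CTX_ON_PERSON, CTX_NOT_ON_PERSON
} wearable_context_t;

typedef enum { GESTURE_TAP, GESTURE_WRIST_ROTATE } wake_gesture_t;

/* The host CPU stays asleep while the hub classifies continuously;
 * handlers like these would run only when the hub raises an interrupt. */
static void on_context_change(wearable_context_t ctx)
{
    if (ctx == CTX_NOT_ON_PERSON)
        printf("device removed: dim display, stop notifications\n");
}

static void on_wake_gesture(wake_gesture_t g)
{
    printf("wake: %s\n", g == GESTURE_TAP ? "tap" : "wrist rotate");
}

int main(void)
{
    /* Simulated events standing in for real hub interrupts. */
    on_context_change(CTX_WALKING);
    on_wake_gesture(GESTURE_WRIST_ROTATE);
    on_context_change(CTX_NOT_ON_PERSON);
    return 0;
}
```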

You can read more in their announcement.

Jul 23, 2014

Intelligent VIP

posted by Bryon Moyer

This year’s DAC included a discussion with Arrow Devices. They’re a company exclusively focused on protocol VIP. They’re not a tool company (other than, as we’ll see, their debug assistant); their VIP plugs into any of your standard tools.

There are three distinct angles they play: verification (making sure your design works in the abstract, before committing to silicon), validation (making sure the silicon works; they include emulation models in this category as well), and debug.

Their focus is on protocol abstraction: letting verification proceed at a high level so that designers can execute their tests and review the results at the level of the protocol rather than at the signal level. This added semantic intelligence is how they claim to distinguish themselves from their VIP competition, saying that verification can be completed two to three times faster than with competing VIP.

The verification suites consist of bus-functional models (BFMs) plus tests, coverage, and assertions; these work in virtual space. The validation suites, by contrast, have to be synthesizable – hence usable in emulators. They include software APIs and features like error injection. Their debugger is also protocol-aware, although it’s independent of the VIP: it works with anyone’s VIP, relying on modules that give the debugger the protocol semantics.

One of the effects of digging deep into a protocol is that you occasionally uncover ambiguities in the standards. When they find these, they take them in a couple of directions. On the one hand, they may need to build option selections into the VIP so that the customer can choose the intended interpretation. On the other hand, they can take the ambiguities to the standards bodies for clarification.

On the debug side of things, the protocol awareness ends up being more than just aggregating signals into higher-level entities. When testing a given protocol, the specific timing of signals may vary; a correct implementation might have some cycle-level variations as compared to a fixed golden version. So they had to build in higher-level metadata, assigning semantics to various events so that the events can be recognized and reported. This tool works at the transaction level, not at the waveform level; they’re looking at connecting it to a waveform viewer in the future.
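As a rough illustration of what assigning semantics to events can mean at the transaction level – this is a generic sketch in C, not Arrow’s actual data model – a protocol-aware debugger can match on tagged event types within a timing window rather than demanding a cycle-exact match against a golden trace:

```c
#include <stdint.h>
#include <stdbool.h>

/* Generic illustration (not Arrow Devices' actual data model): reasoning
 * about tagged transaction-level events rather than raw waveforms lets two
 * correct implementations that differ by a few cycles still match. */

typedef enum {
    EVT_LINK_TRAINING_START,
    EVT_LINK_TRAINING_DONE,
    EVT_SETUP_PACKET,
    EVT_DATA_PACKET,
    EVT_ACK,
    EVT_PROTOCOL_ERROR
} proto_event_t;

typedef struct {
    proto_event_t type;      /* semantic meaning of the event   */
    uint64_t      timestamp; /* when it happened (cycles or ns) */
    uint32_t      length;    /* payload size, if any            */
} proto_txn_t;

/* Check that an expected event type occurs within a timing window,
 * tolerating cycle-level variation between implementations. */
bool expect_event(const proto_txn_t *trace, int n,
                  proto_event_t type, uint64_t t_min, uint64_t t_max)
{
    for (int i = 0; i < n; i++) {
        if (trace[i].type == type &&
            trace[i].timestamp >= t_min &&
            trace[i].timestamp <= t_max)
            return true;
    }
    return false;
}
```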

Their protocol coverage varies.

  • For verification, they cover the JEDEC UFS (Universal Flash Storage) protocol, MIPI’s M-PHY, UniPro, and CSI-3 protocols, and USB power delivery and 2.0 host/device protocols.
  • For validation, they cover only USB 3.0 – but they also claim to be the only ones offering VIP for USB 3.0.
  • Finally, the debugger has modules supporting USB 3.0 and 2.0; JEDEC UFS; PCIe/M-PCIe, MIPI UniPro, CSI-3 and -2, and DSI; and AMBA ACE/AXI/AHB/APB.

You can find out more on their site.

Jul 22, 2014

SiTime Adds Temperature Compensation

posted by Bryon Moyer

SiTime came out with a 32-kHz temperature-compensated MEMS oscillator a few weeks back, targeting the wearables market. 32 kHz (32.768 kHz, to be precise) is popular because dividing by an easy 2^15 gives a 1-second period. Looking through the story, there were a couple of elements that bore clarification or investigation.
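First, a quick aside on why 2^15 is “easy”: 32.768 kHz is exactly 2^15 Hz, so a 15-bit binary counter rolls over once per second. A generic sketch (nothing vendor-specific here):

```c
#include <stdint.h>

/* 32.768 kHz = 2^15 Hz, so counting 32,768 ticks takes exactly one second.
 * Generic illustration; real RTCs do this in a small hardware counter. */
#define CLOCK_HZ 32768u   /* 2^15 */

static uint16_t tick_count = 0;
static uint32_t seconds = 0;

/* Called on every rising edge of the 32.768 kHz clock. */
void on_clock_tick(void)
{
    if (++tick_count == CLOCK_HZ) {  /* 15-bit rollover */
        tick_count = 0;
        seconds++;                   /* one-second heartbeat */
    }
}
```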

Let’s back up a year or so to when they announced their TempFlat technology. The basic concept is a MEMS oscillator that, somehow, is naturally compensated against temperature variation, without any circuitry required to do explicit compensation.

At the time, they said they could get to 100 ppb (that’s “billion”) uncompensated, and 5 ppb with compensation. (The “ppb” spec represents the complete deviation across the temperature range; a lower number means a flatter response.) This year, they announced their compensated version: They’re effectively taking a 50 ppm (million, not billion) uncompensated part and adding compensation to bring it down to 5 ppm. I was confused.

On its face, the compensation is a straightforward deal: take the temperature response of the bare oscillator and reverse it.

[Figure: compensating the oscillator’s temperature response. Image courtesy SiTime]
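For scale, and to show the general shape of such a correction (this is a generic TCXO-style sketch, not SiTime’s undisclosed method): 50 ppm uncompensated works out to roughly 4.3 seconds of drift per day, while 5 ppm compensated is about 0.4 seconds per day. The correction amounts to measuring temperature and applying the inverse of a characterized frequency-error curve. The table values and step sizes below are hypothetical:

```c
#include <stdint.h>

/* Generic TCXO-style correction sketch; NOT SiTime's (undisclosed) method.
 * Idea: characterize frequency error vs. temperature at calibration time,
 * then apply the inverse correction at run time.
 * For scale: 50 ppm ~ 4.3 s/day of drift; 5 ppm ~ 0.43 s/day. */

#define T_MIN_C   (-40)
#define T_STEP_C    5
#define N_POINTS   26   /* -40 C to +85 C in 5 C steps */

/* Hypothetical calibration table: frequency error in ppb at each point. */
static const int32_t freq_err_ppb[N_POINTS] = { 0 /* filled at calibration */ };

/* Return the correction (in ppb) for the measured temperature, i.e. the
 * negative of the interpolated error curve. */
int32_t tcxo_correction_ppb(int32_t temp_c)
{
    int idx = (temp_c - T_MIN_C) / T_STEP_C;
    if (idx < 0) idx = 0;
    if (idx >= N_POINTS - 1) idx = N_POINTS - 2;

    /* Linear interpolation between calibration points. */
    int32_t t0  = T_MIN_C + idx * T_STEP_C;
    int32_t e0  = freq_err_ppb[idx];
    int32_t e1  = freq_err_ppb[idx + 1];
    int32_t err = e0 + (e1 - e0) * (temp_c - t0) / T_STEP_C;

    return -err;  /* reversing the measured response cancels the drift */
}
```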

But what about the “millions” vs. “billions” thing? Why are we compensating within the “millions” regime if they could get to ppb uncompensated?

Turns out, in the original TempFlat release, they were talking about where they think the TempFlat technology can eventually take them – not where their products are now. For now, they need to compensate to get to 5 ppm. In the future, they see doing 100 ppb without compensation and 5 ppb with compensation. That’s a 1000x improvement over today’s specs. Critically, based on what their competition has published, they say they don’t see their competitors being able to do the same.

So, in short: ppmillions today, ppbillions later. These are the same guys, by the way, that have also implemented a lifetime warranty on their parts.

There was one other thing I was hoping I’d be able to write more about: how this whole TempFlat thing works. We looked at Sand 9’s and Silicon Labs’ approaches some time back; they both use layered materials with opposing temperature responses to flatten things out. So how does SiTime do it?

Alas, that will remain a mystery for the moment. They’re declining to detail the technology as a competitive defense thing. The less the competition knows…

You can read more about SiTime’s new TCXO in their announcement.
