Jul 28, 2015

CogniVue Drives at Mobileye

posted by Bryon Moyer

CogniVue recently made a roadmap announcement that puts Mobileye on notice: CogniVue is targeting Mobileye’s home turf.

We looked at Mobileye a couple years ago; their space is Advanced Driver Assistance Systems (ADAS). From an image/video processing standpoint, they apparently own 80% of this market. According to CogniVue, they’ve done that by getting in early with a proprietary architecture and refining and optimizing over time to improve their ability to classify and identify objects in view. And they’ve been able to charge a premium as a result.

What’s changing is the ability of convolutional neural networks (CNNs) to move this capability out of the realm of custom algorithms and code, opening it up to a host of newcomers. And, frankly, making it harder for players to differentiate themselves.

According to CogniVue, today’s CNNs are built on GPUs and are huge. And those GPUs don’t have the kind of low-power profile that would be needed for mainstream automotive adoption. CogniVue’s announcement debuts their new Opus APEX core, which they say supports CNNs in a manner that translates to practical commercial use in ADAS designs. The Opus power/performance ratio is 5-10 times better than that of their previous G2 APEX core.
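To make the compute burden concrete, here is a minimal sketch (emphatically not CogniVue’s or Mobileye’s code, with made-up frame and filter sizes) of the multiply-accumulate-heavy inner loop that a single convolution layer runs over every frame. Repeated across many layers and many filters, this is the load a GPU handles by brute force and an embedded core has to fit into an automotive power budget.

```python
# Illustrative only: one convolution layer in plain NumPy, showing the
# multiply-accumulate pattern a CNN repeats for every frame. The frame size,
# filter count, and filter size below are placeholders, not real parameters.
import numpy as np

def conv2d(frame, kernels):
    """frame: (H, W) grayscale image; kernels: (N, k, k) filter bank."""
    n, k, _ = kernels.shape
    h, w = frame.shape
    out = np.zeros((n, h - k + 1, w - k + 1))
    for f in range(n):                       # one output map per filter
        for y in range(h - k + 1):
            for x in range(w - k + 1):
                # k*k multiply-accumulates per output pixel, per filter
                out[f, y, x] = np.sum(frame[y:y + k, x:x + k] * kernels[f])
    return np.maximum(out, 0)                # ReLU non-linearity

frame = np.random.rand(64, 64)               # stand-in for one camera frame
kernels = np.random.rand(8, 5, 5)            # eight hypothetical 5x5 filters
print(conv2d(frame, kernels).shape)          # (8, 60, 60)
```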

You can find more commentary in their announcement.


Updates: Regarding the capacity for Opus to implement CNNs, the original version stated, based on CogniVue statements, that more work was needed to establish Opus supports CNNs well. CogniVue has since said that they've demonstrated this through "proprietary benchmarks at lead Tier 1s," so I removed the qualifier. Also, it turns out that the APEX core in a Freescale device (referenced in the original version) isn't Opus, but rather the earlier G2 version - the mention in the press release (which didn't specify G2 or Opus) was intended not as testament to Opus specifically, but to convey confidence in Opus based on experience with G2. The Freescale reference has therefore been removed, since it doesn't apply to the core being discussed.

Jul 23, 2015

Cadence Refreshes Synthesis and Formal

posted by Bryon Moyer

Cadence has announced a couple of major upgrades over the last month or two. They’re largely unrelated to each other – one is synthesis, the other formal verification – so we’ll take them one at a time.

A New Genus Species

First, synthesis: they announced a new synthesis engine called Genus. Another in a recent line of new products ending in “-us.” (Upcoming motto: “One of -us! One of -us!”)

There are a couple specific areas of focus for the new tool. But they tend to revolve around the notion that, during the day, designers work on their units; at night, the units are assembled for a block- or chip-level run.

At the chip level, physical synthesis has been available to reduce iterations between place & route and synthesis, but unit synthesis has remained more abstract, being just a bit of logic design without a physical anchor. For both chip and unit levels, accuracy and turn-around time are important. Without physical information, unit accuracy suffers, forcing more implementation iterations.

To address turn-around time, significant effort was placed on the ability to split up a full chip for processing on multiple computers. They use three levels of partitioning – into chunks of about 100,000 instances, then 10,000 instances, and then at the algorithm level.
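As a rough sketch of the idea (not Cadence’s actual partitioner), the split by target chunk size might look like this, with the third, algorithm-level cut happening inside shared memory on a single host:

```python
# Toy three-level partitioning by target chunk size. Chunk sizes mirror the
# figures quoted above; everything else is invented for illustration.
def partition(instances, chunk_size):
    return [instances[i:i + chunk_size]
            for i in range(0, len(instances), chunk_size)]

netlist = list(range(1_000_000))                   # hypothetical instance IDs

machine_chunks = partition(netlist, 100_000)       # level 1: across machines
for m_chunk in machine_chunks:
    process_chunks = partition(m_chunk, 10_000)    # level 2: within a machine
    # level 3 (algorithm level) runs on these smaller chunks in shared memory
print(len(machine_chunks), "machine-level chunks")
```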


(Image courtesy Cadence)

The challenge they face in generating a massively parallel partition is that multiple machines will be needed. Within a single machine, multiple cores can share memory, but that’s not possible between machines. That means that communication between machines to keep coherent views of shared information can completely bog the process down on a poorly conceived partition.

To help with this, they partition by timing, not by physical location. So the partitions may actually physically overlap. They also try to cut only non-critical lines when determining boundaries, making it less likely that multiple converging iterations will be needed.
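As a hedged toy of that cut selection, imagine using each net’s timing slack as the gating criterion and allowing cuts only where the slack is comfortable. The slack values and the threshold below are invented; the real cost function is surely richer.

```python
# Pick partition cut candidates only among non-critical nets (toy example).
nets = [
    {"name": "n1", "slack_ns": 0.02},   # near-critical: avoid cutting
    {"name": "n2", "slack_ns": 1.30},   # plenty of slack: safe to cut
    {"name": "n3", "slack_ns": 0.80},
]

CUT_THRESHOLD_NS = 0.5                  # made-up threshold
cuttable = [n["name"] for n in nets if n["slack_ns"] > CUT_THRESHOLD_NS]
print("candidate cut nets:", cuttable)  # ['n2', 'n3']
```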

That said, they still do several iterations of refinement, cutting and reassembling, then cutting again and reassembling. And then once more.

At the lowest level of partition, you have algorithms, and these are small enough to manage within a single machine. Which is good, because shared memory is critical here to optimize performance, power, and area (PPA).

Larger IP blocks can undergo microarchitecture optimization, where the impacts of several options are evaluated and analytically solved to pick the best PPA result.
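Conceptually, that’s a generate-candidates, score-PPA, keep-the-winner loop. The sketch below is only an illustration; the candidate microarchitectures and the weighted cost function are placeholders, not Cadence’s actual metric.

```python
# Evaluate several microarchitecture options and keep the best PPA (toy).
options = [
    {"name": "carry-lookahead", "power_mw": 2.1, "perf_ghz": 1.4, "area_um2": 900},
    {"name": "carry-save",      "power_mw": 1.7, "perf_ghz": 1.2, "area_um2": 750},
    {"name": "booth-radix4",    "power_mw": 2.4, "perf_ghz": 1.6, "area_um2": 1100},
]

def cost(o):
    # Lower is better: penalize power and area, reward performance.
    # The weights are arbitrary stand-ins for a real PPA objective.
    return o["power_mw"] + 0.001 * o["area_um2"] - o["perf_ghz"]

best = min(options, key=cost)
print("selected microarchitecture:", best["name"])   # carry-save, with these numbers
```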

Once an assembled chip has been synthesized with physical information, the contributing units are annotated with physical information so that subsequent unit synthesis can proceed more accurately, again, reducing overall iterations.

They’re claiming 5X better synthesis time, iterations between unit and block level cut in half, and 10-20% reduction in datapath area and power.

You can find other details about the Genus synthesis engine in their announcement.

Jasper Gets Incisive

Meanwhile, the integration of their acquired Jasper formal analysis technology is proceeding. Cadence had their own Incisive formal tools, but, with a few exceptions, those are giving way to the Jasper versions. But they’re trying to make it an easy transition.

One way they’ve done this is to maintain the Incisive front end with the new JasperGold flow. So there are two ways to get to the Jasper technology: via the Jasper front end (no change for existing Jasper users) and with the Incisive front end, making the transition faster.


(Image courtesy Cadence)

Under the hood, they’ve kept most of the Jasper engines based on how well they work, but they also brought over a few of the Incisive engines that performed well.

We talked about the whole engine-picking concept when we covered OneSpin’s LaunchPad platform recently. In Cadence’s case, they generally try all engines and pick the best results, but they also allow on-the-fly transitions from engine to engine. This is something they initiated a couple years ago with Incisive; it now accrues to the Jasper engines as well.
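As a toy of the run-them-all portfolio idea (threads standing in for formal engines; the engine names, timings, and verdicts are invented, and the on-the-fly hand-off between engines isn’t modeled here):

```python
# Launch several "engines" on the same property and take the first answer.
import concurrent.futures
import time

def engine(name, seconds, verdict):
    time.sleep(seconds)                  # pretend to work on the property
    return name, verdict

engines = [("bdd", 0.3, "proven"), ("sat", 0.1, "proven"), ("bmc", 0.2, "cex")]

with concurrent.futures.ThreadPoolExecutor() as pool:
    futures = [pool.submit(engine, *e) for e in engines]
    first = next(concurrent.futures.as_completed(futures)).result()
print("first result:", first)            # ('sat', 'proven') in this toy
```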

They’ve also got formal-assisted simulation for deeper bug detection; they compile IP constraints first for faster bug detection (at the expense of some up front time); and they’ve provided assertion-based VIP for emulation to replace non-synthesizable checkers.

And the result of all of this is a claimed 15X improvement in performance.

You can get more detail on these and other aspects in their announcement.

Jul 09, 2015

Motion for User Interfaces

posted by Bryon Moyer

We’ve looked before at ways of controlling machines with just your hands in the air, like you just don’t care. No touchy-feely, no mouse. Just jazz hands.

So at first, when I saw a demo of what we’re going to talk about today, I thought, “OK… this looks kinda like what I was seeing demonstrated a couple years ago by companies like eyesight and PointGrab.” And yet it also had a flavor of what I’d seen with Movea and Hillcrest, except that their technologies involved remote controls doing what just hands were doing in this case.

But what I was seeing wasn’t either of those technologies at work. Making it more confusing yet, this isn’t about a particular sensing technique – optical, touch, whatever. And yet it is about motion and location. While the announced technology may be brand new, you would probably have to use it to sense the difference. I was watching over a screen, so I frankly had to ask a lot of questions to figure out why this wasn’t just another gesture recognition announcement a few years after all the other ones.

I’m talking about Quantum Interface’s new interface called “Qi*.” It’s a way of taking location information and using changes in that location to model motion – and, in particular, to predict where that motion is going and then turn that into information that a user interface can use. The result is, they say, smoother and faster navigation through user interfaces of any kind. Because of the prediction, you don’t have to “complete” motions as much; a little move in a direction will get you where you want to go faster than if you had to, say, track your hand all the way in front of you.

This notion of location as the only input doesn’t involve any gestures. This is not about specifically identifying a gesture – whether a static hand shape or a motion pattern that a user has to learn. It’s simply about, say, moving your hand or putting a finger on a surface and letting a well-constructed interface make the next movement obvious. Under the hood, the motion is turned into commands: that part is specifically what Qi does.
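A toy of the general idea (emphatically not Quantum Interface’s algorithm): take successive location samples from whatever sensor is available, estimate a velocity, and extrapolate a short way ahead so the interface can respond before the motion finishes.

```python
# Predict where a motion is headed from two location samples (illustrative).
def predict(samples, lookahead_s=0.25):
    """samples: list of (t, x, y) tuples from any location source."""
    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return x1 + vx * lookahead_s, y1 + vy * lookahead_s

# A small move to the right is enough to project the pointer further right.
samples = [(0.00, 100, 200), (0.05, 104, 201)]
print(predict(samples))                  # roughly (124.0, 206.0)
```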

It’s often about navigating menus; you move toward a menu that pops open, and then you settle on (or towards) an item and push your finger towards the screen and it takes you to a next-level menu, and so forth. All more quickly and smoothly than older approaches.

But here’s another subtle part: this is a mid-layer piece of technology. It lives above hardware – it will take location information from any system that can provide it, whether touch or optical (gesture or eye tracking or…) or whatever. It improves with multiple location sensors providing inputs.
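In sketch form, that mid-layer position might look something like the interface below. TouchSource and CameraSource are hypothetical stand-ins for real drivers, and the simple averaging is a placeholder for whatever fusion Qi actually performs.

```python
# A hypothetical "any location source will do" abstraction (not Qi's API).
class LocationSource:
    def read(self):                      # returns (x, y) or None
        raise NotImplementedError

class TouchSource(LocationSource):
    def read(self):
        return (120.0, 80.0)             # pretend touch coordinate

class CameraSource(LocationSource):
    def read(self):
        return (122.0, 78.0)             # pretend hand-tracking coordinate

def fused_location(sources):
    points = [p for p in (s.read() for s in sources) if p]
    xs, ys = zip(*points)
    return sum(xs) / len(points), sum(ys) / len(points)

print(fused_location([TouchSource(), CameraSource()]))   # (121.0, 79.0)
```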

It’s also not built into any specific user interface (UI): designers of interfaces can tap the information that Qi provides to drive the interface. Quantum Interface has a fair bit of experience using Qi to build UIs, so they do work with their partners in that realm, but that’s about using Qi; it isn’t Qi itself.

This middleness also makes it system-agnostic: you can create a consistent interface for different app platforms – say, phone, watch, and tablet – and tweak only for the details and resources available on that platform. Somewhat like skinning.

Not sure if I’ve said more about what Qi isn’t than what it is, but both are important since the nuances of what’s new are, well, nuanced. You can find more in their announcement.



*Regrettably, even given China’s large electronics footprint, where they would pronounce that “chee,” and given the wireless power technology Qi, pronounced “chee,” this is not pronounced “chee”: according to the press release, it’s pronounced like its initials, QI (“cue eye”), even though they opted to make the I lower case…


Image courtesy Quantum Interface
