posted by Bryon Moyer
I’m getting a sense that we’re back into the small-company-friendly phase of the EDA company cycle. A number of newcomers (which means they’ve been around working quietly for several years and are now launching) are knocking on doors.
Invarian is one such company, and they’ve launched two analysis platforms: their “InVar Pioneer Power Platform”, with power, IR-drop/EM, and thermal analysis, and their “InVar 3D Frontier Platform” for thermal analysis of 3D ICs.
Their claim to fame is that theirs is the only tool that can handle true full-chip sign-off analysis at 28 nm and below, with SPICE accuracy and fast run times (“fast” being a relative term). In particular, for digital designs, they do concurrent analysis of timing, thermal, EM/IR, and power. Yeah, they have a timing engine – and they say it’s really good, too. But displacing PrimeTime as the gold standard is a tall order, and that’s not their goal. So the timing engine serves the other pieces.
The whole concurrent thing means that, instead of running one analysis to completion and then handing those results to the next engine for different analysis, they run the engines together. As they iterate towards convergence, they update a common database on each cycle, so each engine is using a slightly-more-converged value from the other engines on every new cycle. They say that this speeds overall convergence, taking analysis that used to require several days to run and managing it in a few hours instead, with no loss of accuracy.
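The flavor of that iterate-to-convergence loop can be sketched in a few lines. To be clear, everything below is invented for illustration – the engine models, the coupling coefficients, and the convergence test are toy placeholders, since Invarian hasn’t published its actual algorithms – but it shows the basic idea of multiple engines repeatedly reading and updating a shared database until the values stop moving:

```python
# Toy sketch of concurrent engine iteration over a shared database.
# The physical models here are invented, drastically simplified
# stand-ins: power depends on temperature (leakage), temperature
# depends on power, and IR drop follows power.

def power_engine(state):
    # Leakage power rises with temperature (simplified linear model).
    return 1.0 + 0.002 * (state["temp"] - 25.0)

def thermal_engine(state):
    # Die temperature rises with dissipated power (simplified).
    return 25.0 + 20.0 * state["power"]

def ir_drop_engine(state):
    # IR drop scales with current draw, i.e. with power at fixed voltage.
    return 0.05 * state["power"]

def co_converge(tol=1e-6, max_iters=100):
    # Shared "database": each engine reads the others' latest estimates.
    state = {"power": 1.0, "temp": 25.0, "ir_drop": 0.0}
    for i in range(max_iters):
        new = {
            "power": power_engine(state),
            "temp": thermal_engine(state),
            "ir_drop": ir_drop_engine(state),
        }
        delta = max(abs(new[k] - state[k]) for k in state)
        state = new  # next pass, every engine sees slightly-more-converged values
        if delta < tol:
            return state, i + 1
    raise RuntimeError("did not converge")
```

With these toy coefficients the coupling is weakly contractive, so the loop settles in a handful of iterations – which is the whole point of running the engines together rather than serially to completion.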
Of course, having a new tool also means that you can build in parallelism from the get-go, leveraging multicore and multi-machine computing resources.
For analog sign-off, they can do co-simulation with the usual SPICE suspects. And for 3D analysis for packages with multiple dice, they boast models that are more accurate and realistic than the standard JEDEC models. And they claim greater ease of use, making rules (which are constantly evolving) more manageable in particular.
You can find more in their release.
posted by Dick Selwood
The announcement that Cadence is planning to buy Evatronix marks the company's fourth acquisition in a matter of months. One slipped under the radar, what Martin Lund of Cadence referred to as "a small team in Canada working on high-speed SerDes." The purchase of Cosmic Software is waiting for Indian regulatory approval, and the Tensilica acquisition was completed a few days ago.
A few years ago, Cadence buying companies was not news – it was business as normal. Today, is it a return to the old company working practices? Well – no. The companies that Cadence was buying then were normally small(ish) suppliers of point tools. Today’s targets are IP companies, and join an IP pool established when Cadence bought Denali three years ago.
Synopsys already declares that about a third of its income comes from supplying IP. (Cadence wouldn’t be drawn on its IP sales, either current or projected.) And this ties in with the changing role of the EDA company and its position in the electronics food chain. It is no longer sufficient for a chip company to provide silicon: with large and complex SoCs the customers want the software stacks for the interfaces, drivers for the OS (and even the OS). This means that the chip companies need access to good quality IP to create the peripherals and additional material, and the EDA companies intend, as far as possible, to be the place that the designers turn to for this. Which explains Cadence’s acquisition.
What is also interesting is that these companies, which presumably have different development processes and design flows, are expected to be integrated into an approach that Cadence is calling the “IP factory”, which will supply straight off-the-shelf IP and also create, within certain limits, customised IP for a specific chip builder’s application.
In the past, an EDA start-up would see acquisition as the exit route that allowed investors to get their returns. Today, perhaps, the road is to build IP?
posted by Bryon Moyer
We’ve all seen some of the crappy pictures that cell phones have allowed us to take and distribute around the world at lightning speed. (Is there such a concept as “photo spam” – legions of crappy pictures that crowd out the few actual good ones?)
Now… let’s be clear: much of the crappiness comes courtesy of the camera operator (or the state of inebriation of the operator). But even attempts at good composition and topics of true interest can yield a photo that still feels crappy.
Part of the remaining crappiness is a function of resolution: phone cameras traditionally have had less resolution than digital SLRs. So we up the resolution. And, frankly, phone resolution is now up where the early digital SLRs were, so the numbers game is constantly shifting as we pack more pixels into less space on our imaging chips.
But that comes with a cost: smaller pixels capture less light, because fewer photons impinge on them. So higher-res chips don’t perform as well in low-light situations. (Plus, they traditionally cost more – not a good thing in a phone.)
There is an alternative called Super Resolution (SR), however, and to me it’s reminiscent of the concept of dithering. I also find the name somewhat misleading: it isn’t a super-high-res camera, but rather takes several low-res images and does some mathematical magic on them to combine them into a single image that has higher resolution than the originals. Like four times the resolution. It’s part of the wave of computational photography that seems to be sweeping through these days.
The way it works is that the camera takes several pictures in a row. Each needs to be slightly shifted from the others. In other words, if you take a static subject (a bowl of fruits and flowers) and put the camera on a tripod, this isn’t really going to help. One challenge is that, with too much movement, you can get “ghosting” – if a hand moves between shots, for example, you might see a ghostly-looking hand smeared in the combined version.
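The core shift-and-combine idea can be sketched with a toy 1-D example. This is not CEVA’s algorithm – the signal, the 2x upscale factor, and the known subpixel offsets are all invented for illustration, and a real SR pipeline must also estimate the shifts (registration) and deconvolve the sensor blur – but it shows how several shifted low-res frames carry more information than any one of them:

```python
# Toy 1-D "shift and add" super-resolution sketch (illustrative only).

def downsample(signal, factor, offset):
    # Simulate a low-res camera: average `factor` neighbouring high-res
    # samples, starting at a subpixel `offset` into the signal.
    n = (len(signal) - offset) // factor
    return [
        sum(signal[offset + i * factor : offset + (i + 1) * factor]) / factor
        for i in range(n)
    ]

def shift_and_add(frames, offsets, factor, hi_len):
    # Spread each low-res sample back onto the high-res grid at the
    # position implied by its frame's offset, then average the overlaps.
    acc = [0.0] * hi_len
    cnt = [0] * hi_len
    for frame, off in zip(frames, offsets):
        for i, v in enumerate(frame):
            for j in range(factor):
                pos = off + i * factor + j
                if pos < hi_len:
                    acc[pos] += v
                    cnt[pos] += 1
    return [a / c if c else 0.0 for a, c in zip(acc, cnt)]
```

Feed it two frames of the same scene taken a half-pixel (one high-res sample) apart and the averaged overlaps recover detail that neither frame contains on its own – that’s the “mathematical magic.”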
It’s been available as a post-processing thing on computers for a while, but the idea now is to make it a native part of cameras – and cameraphones in particular. Which is good, since I can’t remember the last time I saw someone taking a still life shot with a phone on a tripod. (Besides… fruits don’t do duckface well.)
In this case, the slight shaking of the holding hand may provide just the movement needed to make this work. But, of course, you need the algorithms resident in the phone. Which is why CEVA has announced that it has written SR code for its MM3101 vision-oriented DSP platform. They claim that this is the world’s first implementation of SR technology for low-power mobile devices.
Their implementation allows this to work in “a fraction of a second.” Meaning that it could become the default mode for a camera – this could happen completely transparently to the user. They also claim that they’ve implemented “ghost removal” to avoid ghosting problems (making it less likely that the user would want to shut the feature off… although for action shots? Hmmm…).
You can get more detail in their release.