posted by Bryon Moyer
Inverters are getting smaller.
We’re talking here about the inverters used in solar installations to convert the DC that the panels generate into AC for the grid. But there seem to be a couple of different motivations for this reduction in inverter size; I was made aware of them by two different product releases.
First came an SoC from Semitech. Semitech has primarily been focused on power-line communications (PLC) on the so-called Smart Grid. Their focus hasn’t so much been on residential settings, where broadband connections dominate, but rather longer-distance machine-to-machine narrowband connections. We’re talking hundreds of (electric) meters communicating over a few kilometers.
That said, they noticed an opportunity. Traditionally, a single inverter will serve multiple panels; this helps keep cost down (always an issue as solar struggles to compete with other forms of energy). But Semitech notes that there are some weaknesses with this arrangement. In particular, the one inverter becomes a single point of failure that can take all of its panels out of action. Efficiency also gets tuned to the needs of the worst (e.g., most shaded) panel – meaning that energy is wasted from the other panels.
The ideal would be a micro-inverter for each panel – something that’s generally been a cost challenge. So Semitech is trying to reduce that added cost by integrating the inverter electronics (not the transformers) into the PLC chip. So any inverter that was intended to communicate could get the inverter control circuitry almost for free (it’s a small add-on to the PLC circuitry, which dominates the chip).
Image courtesy Semitech.
By the way, apparently the same chip can be used for LED control if loaded with different software.
Meanwhile, ST Microelectronics announced a rather simpler product: an SiC diode. It replaces larger devices that have been needed in order to provide sufficient overcurrent margin. The new SiC diode can handle higher current spikes, contributing to a smaller inverter.
In this case, the small-inverter drive comes from a project driven by Google and IEEE called The Little Box Challenge. Here the idea is that smaller inverters will reduce the size of the cooler-sized box that’s currently needed for a residential solar installation. That makes it less of an eyesore, reduces the footprint, and – critically – reduces cost.
If you’re not part of the Challenge yet, it’s too late; registration is closed. The final prize will be announced next January.
That said, ST also seems heavily focused on the automotive market, saying that the new diode meets the requirements for such applications as on-board battery chargers for plug-in hybrids. It has a reverse breakdown voltage of 650 V, and they boast zero reverse-recovery time.
Image courtesy ST Microelectronics.
posted by Bryon Moyer
We’ve been talking about through-silicon vias (TSVs) for years now, but 2.5D and 3D ICs are still trickling out at the high end.
Processing costs aside, one contributor to higher cost is the impact of TSVs on die size. While we debate the best ways to save a nanometer or two here and there, TSVs operate on a scale three orders of magnitude bigger: microns. A good part of the reason is aspect ratio: at the current limit of around 10:1, a 150-µm-deep hole has to be 15 µm wide. If we could improve the aspect ratio, we could narrow those TSVs and release some silicon area.
One of the main limiters on aspect ratio is the ability to fill the vias cleanly with metal. To ensure that there aren’t voids along any of the surfaces, a seed layer is needed, and that seed layer has to be deposited in a well-controlled, uniform manner.
For the metals used as the seed, physical vapor deposition (PVD) – where vaporized material condenses on surfaces in a vacuum – tends to work best. But PVD is most effective when coating a horizontal surface, and the inside of a TSV is most decidedly not horizontal: you need to cover the sidewalls and the bottom at equal rates.
That challenge notwithstanding, Tango Systems announced a couple of months ago that they have moved the aspect-ratio bar to 15:1 using PVD. They did this through a combination of control over plasma density and vacuum, along with magnetrons that oscillate under the target. So that 15-µm-wide hole we needed to get 150 µm deep? Now it needs to be only 10 µm wide. (Why bother saving 10 nm when you can save 5000?)
Having bumped the limit by 50%, Tango thinks that this 15:1 bar will last for a while. Yes, going deeper might have some benefit, but wafers are also being thinned further, which reduces the depth needed in the first place.
TSVs are but the first application they envision for this new technology. They say that it can also have benefit for MEMS (there’s some long-term news pending there), improving the deposition of backside metals, and – their next target – providing EMI shielding.
You can find more in their announcement.
posted by Bryon Moyer
High-level synthesis (HLS) recently got a round of improvement: Calypto’s Catapult 8 is another fundamental rework of an EDA tool, aimed at ease of use and quality of results.
Let’s review some basics. HLS generally refers to the use of C or C++ for specifying untimed design behavior. That’s actually caused some confusion, since SystemC is based on C++. So, loosely speaking, folks may use the term HLS to refer to designs built with either language.
To my mind, the aspect of HLS that really changes the game is moving from a hardware description language, where the focus is on hardware structure, to a software language, where the focus is on behavior. In other words, writing an untimed (that is, no notion of a clock) algorithm in C or C++ and then letting a tool create the hardware structure (in RTL) and the timing that implements the behavior. This is an enormous transformation.
SystemC, by contrast, is a hardware language; it just happens to rely on C++ syntax and semantics. While the “pure” C/C++ algorithm specifies behavior, not structure, SystemC specifies structure. It has its benefits, but the transformation to RTL is much less dramatic.
Calypto makes reference to “language wars”; people arguing over which is better. My sense has been that tool makers lobby for the language they support; that language may or may not be the best tool for a given job. Well, in Catapult 8, Calypto neutralizes this debate by supporting both untimed C/C++ and SystemC. Discussion over.
Catapult’s biggest contribution, historically, has been the creation of hardware from an untimed C/C++ algorithm. Originally created by Mentor Graphics, it was largely used by big companies (historically, it’s not been viewed as inexpensive) and wasn’t a push-button process. But over the years, many of the difficult issues have been handled, removing barriers to use.
So… why isn’t everyone using it then? I mean, what could be more attractive than being able to take a high-level snippet of code and have hardware created for it? (Besides price?)
That’s a question Calypto asked, and they identified three things to improve in Catapult 8:
- Hierarchy management for incremental design and minimizing the impact of ECOs
- Better tie-in with verification schemes
- Power optimization
The ECO problem is the same one that has dogged RTL flows in the past: at the end of the design cycle, when you want to make a little change, the whole design gets resynthesized, undoing months’ worth of work. The design process has been top-down only, so each change forces a complete recompile.
Catapult 8 allows better management of the hierarchy, automatically maintaining metadata for each block (and allowing engineers to annotate metadata below the block level). If something in one block is changed, it’s possible to lock the other blocks down so that they’re not affected. This puts some bottom-up back into the process.
This also means that a design can proceed incrementally, getting portions working, locking them down, and then working on new portions. You can even plug together pre-synthesized blocks. This change has opened up design capacity as well, allowing 10x bigger designs.
As to verification, apparently the old code generation didn’t play so well with UVM. One practical impact was that verification tools might raise a number of issues with the Catapult-generated RTL. But that RTL isn’t meant to be read by engineers – the whole point is to keep designers focused on the higher level – and yet the tools didn’t translate issues and error messages back up to that level.
This is partly a consequence of the shift from structure to behavior. With C/C++, you can improve and speed up your functional coverage at the C/C++ level, but because there’s no structure yet at that level, you’re not getting structural coverage – that’s what you’re nailed with when doing verification after you’ve created RTL.
So Calypto worked with a number of their customers to make changes that allowed HLS to drop into common verification flows. In particular, it:
- automatically synthesizes assertions and cover points;
- identifies points that you can test for equivalence;
- allows cross-probing between RTL and C/C++; and
- integrates with formal tools to identify unreachable states.
As to power reduction, design choices in prior versions had to be implemented in the design source. That process now involves constraints expressed separately. The design source code, which specifies the behavior, can remain unchanged while, off on the side, you play with constraints to achieve your power goals. The types of strategies involved include deciding which frequency to run at, which resources can be shared, automatic implementation of clock gating, and minimizing access to memory.
Finally, they’ve assembled a library of functional components optimized for use with Catapult 8. They call it Catware, and it’s basically an IP collection of functions that they find their customers frequently need. The intent is to get designers spun up faster.
You can get more details in their announcement.