
Expanding EDA

Newer Tools Let You Do More than Just Electronics

Welcome to autumn. It’s usually a busy season – although the activity typically starts more with the onset of September and the resumption of school than with the equinox. But it also comes on the heels of a quiet season, even in the overworked US.

And EDA has seemed moderately quiet. So I started looking around to see what I might have been missing, and I’m not sure there’s a lot. But it did get me musing on why things might be quiet for the moment as well as what fills the gap – which gets to the topic of what qualifies as EDA. It’s more than you might think.

At the risk of being obviously over-simple, the legions of coders in EDA-land are doing one of two things: building new technologies or improving on old ones. The new technology category might include support for FinFETs or multi-patterning or the design kits for the latest silicon node. The improvement side of the tree is where performance and capacity and usability are juiced up – all in the name of productivity, of course.

Looked at another way, one side lets you do new things; the other side lets you do the old things better. (We won’t get into the fixing of bugs… Bugs? What bugs?)

Which of these is in play at any moment depends on where we are in the node cycle. We’ve recently seen a huge push to spin up the new technologies required to take us all the way down to 14 nm, but it’s very early days for that node. Each new technology is a bit of a gamble for EDA companies: design tools are needed to prove out the technology, but customers won’t be using it until years later.

The good news there is that the tools can start out rough-and-ready; while the fab guys are finalizing design rules and reference processes and design kits, the new algorithms and user interfaces can be smoothed out. By the time designers are using them in force, they’ve had lots of time to mature. (Although there’s nothing like having lots of designers pounding on your tool to find the weak spots…)

It feels a little like we’re at that point now where some of the heavy lifting for the newest nodes has been done, and now we wait for uptake. And it’s going to be a while before folks flock to the 14-nm node…

So what happens in the meantime? That would bring us to the other side of things, where existing stuff gets better. This can involve the re-engineering of some pesky code to blow through a bottleneck, partitioning algorithms for multi-threaded and distributed computing, or complete data model re-inventions and start-from-scratch rebirths. Or new interfaces, or integrations between tools that make a designer’s life easier or more intuitive – and, critically, let him or her get things done more quickly.
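As a minimal sketch of that partitioning idea (all names and the "rule" here are invented for illustration, not any vendor's actual code): fan independent regions of a design out to parallel workers, then merge the results. Real tools partition across processes and whole machine farms; a thread pool keeps the sketch small.

```python
from concurrent.futures import ThreadPoolExecutor

def check_partition(edges):
    """Hypothetical rule check: count adjacent features spaced closer than 2."""
    return sum(1 for a, b in zip(edges, edges[1:]) if b - a < 2)

def check_design(partitions, workers=4):
    # Each partition is independent, so the checks can run concurrently;
    # the per-partition violation counts are then merged by summing.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(check_partition, partitions))

regions = [[0, 1, 5, 9], [0, 4, 8], [0, 3, 4]]
print(check_design(regions))  # total violations across all partitions
```

The hard part in practice is exactly what the paragraph above hints at: carving the design into pieces that really are independent (or stitching the boundaries afterward), which is where the re-engineering effort goes.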

Because, even if we have no new transistor types or if cheap EUV shows up tomorrow and obviates multi-patterning (relax – no, it’s not going to happen), designs are still getting bigger and chip houses are still competing.

That means they need to do more, faster, which pushes performance, and they need to manage larger designs, which pushes the memory footprint. The cloud? Seems to have been a big “meh.” Or worse. For the most part, companies don’t appear to be forging ahead in the skies, largely out of concern for exposing the family jewels.

So… if there’s not a lot of new-capability news going around, then… do EDA news outlets go dark for a while? Of course not. But a look around shows some of what’s happening: IP and embedded systems and even – gasp! – software are filling the void.

Now, IP and EDA have always gone together (at least for silicon IP). But embedded system design, well, that’s generally been considered a completely different beast. Especially the software part. And, in a way, software is like FPGAs: folks don’t expect to pay a lot for tools. Which is not exactly the culture of EDA. But that has everything to do with The Stakes. And The Stakes have been raised.

The thing about software in particular is this: you write it, you test it, and it works after some fixes. Easy peasy. OK, maybe it’s not quite that simple, what with the immense complexity of much software. And coders hate testing stuff. So… you write it, it seems to work, you check the obvious bits, and then you hand it to some integrator or testing group to handle the serious testing. Which they – hopefully – do.

But here’s the critical point: if they miss something – well, then you put a patch up on the web and, just like that, the problem is gone.

Entirely unlike an integrated circuit.

Embedded system hardware is somewhere between software and ICs. Board design has the potential to be rather sophisticated, depending on how much stuff you’re trying to cram where. But the costs of masks and rework for a board are nothing like those needed for an IC.

And that’s the thing that keeps people writing checks for EDA tools: the mask set is simply too damn expensive to replace (not to mention any work-in-progress, or WIP, that might have to be tossed). So you have to get it right the first time, and it takes expensive tools to do that.

But, memories and FPGAs aside, the biggest chips we’re making these days are systems-on-chip (SoCs). Or, perhaps better stated, embedded-systems-on-chip. So we have the complete merging of IC design and embedded system design. Including software. Yes, the software part may be updatable, but, for users, it’s much easier to update an application on a computer than to re-ROM low-level firmware in a device that users don’t even know has a computer in it. So getting the software right is now more urgent. Which is a big reason why emulation is in vogue.

So the role of EDA is growing to include embedded technology. Is there anything else that’s joining the party? Aside from board-layout software?

Well, there are two newish technologies increasingly being built on silicon wafers. One is something we’ve spent a lot of energy on: MEMS, served by tools like Coventor’s MEMS+ (whose 5.0 edition was recently released) and SoftMEMS. The thing is, “EDA” stands for “electronic design automation.” And, just to push the associative property here, I’m pretty sure that means “automation of electronic design,” not “design automation by electronic means.” And MEMS isn’t electronic; it’s mechanical. Which is usually handled by CAD tools (“computer-aided design” – and, in this case, that does mean “design done on computers”).

The other non-electronic technology peeking its nose under the tent is optical, which we’ll discuss at greater length soon. But, here again, this is a technology that’s poised to invade the silicon landscape.

Yet another is fluidics, along with other health-related technologies combining electronics with disciplines like fluid dynamics and chemistry (or biochemistry). The fluids gotta get where they gotta get, and something in that tiny space has to be able to identify the one or more substances being assayed within those fluids. Or within the atmosphere.

Finally, packaging design is merging more closely with chip design. This isn’t so much a matter of the packaging creeping onto the chip in some Escheresque involution, but rather reflects the need to include the effects of package and chip together on the overall design. The package and chip design teams are still different, but simulation now needs to cross the boundary to get from physical pin in to driving transistor. 2.5D and 3D chip stacks blur these lines even more.
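A toy calculation (with invented parasitic values) shows why that boundary-crossing matters: an Elmore-style RC delay estimate from the package pin through to the on-die wire changes noticeably once the package’s share is included, so simulating the die alone undercounts the delay.

```python
def elmore_delay(stages):
    """Elmore delay of an RC ladder; stages = (R_ohms, C_farads) pairs, pin first."""
    delay = 0.0
    upstream_r = 0.0
    for r, c in stages:
        upstream_r += r
        delay += upstream_r * c  # each cap charges through all upstream resistance
    return delay

# Hypothetical parasitics, purely for illustration:
package = [(0.5, 1e-12), (0.3, 0.5e-12)]     # pin plus bond/bump
die     = [(2.0, 0.2e-12), (5.0, 0.05e-12)]  # on-die interconnect to the gate

# The die-only number misses the package's contribution entirely.
print(elmore_delay(die), elmore_delay(package + die))
```

The point isn’t the particular numbers; it’s that the delay seen at the transistor is a property of the combined ladder, which is why the simulation has to span both design databases.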

So wafers, which were once the safe refuge of electronics mavens, are now hosting lots of outsiders – from embedded to mechanical to optical to plumbing to chemistry. It’s bringing together tools from different domains that used to reside in their own silos. Now they need to work together, with coordination at the boundaries where the various “physicses” are transduced into electronic form.

So, even if no new transistors are born (not gonna happen) and no radical technologies raise their heads (also not gonna happen – see DSA for an example), there is still lots of work to do to transform the multi-faceted elements that constitute a complete system into a working silicon unit.

There will be no lack of new stuff to talk about.

3 thoughts on “Expanding EDA”

  1. Given that the EDA guys are a rather non-“Agile” bunch, I’m not sure they’re going to get much done.

    Personally I think you can re-apply a lot of EDA tools aimed at hardware design at parallel software design. Parallel stuff is hard to debug, so formal methods are good, and timing analysis is important for managing communication.

    Are there really “legions of coders in EDA-land”?

