
Parallel Accurate SPICE

SPICE has got to be one of the oldest tools still being used by designers. So you might expect it to be a mature market, with a few well-established tools battling for the best performance/capacity and/or accuracy (and occasionally even collaborating).

In fact, it’s typically been more about “or” than “and,” as there are generally two SPICE camps: the fast, high-capacity versions that are “good enough” for everyday repeated use as you explore design options, and sign-off-quality versions that are more accurate, but take longer to complete and can’t handle as large a design.

The tradeoffs between the fast/big and the accurate versions usually come down to simplifying assumptions, simplified models, and the like. Parallel execution has also helped, although it's entirely possible that long-in-the-tooth engines were never designed for effective parallelization.

So ProPlus has announced a new SPICE tool, NanoSpice, that leverages its BSIMProPlus high-accuracy engine for analysis of large designs with quick turnaround. They claim they can handle designs of 50-100 million elements 10-100 times faster than competing "traditional" approaches (many of which can't even complete the larger designs in ProPlus's benchmark suite).

While they have made some improvements to the performance of the underlying engine itself, they give most of the credit to parallelization, which scales relatively well depending on the design: 24 cores give an 8-12x speed-up on most of their examples, with a multiplier design achieving around 20x. The point they underscore, though, is that NanoSpice uses the same models that BSIMProPlus uses, suggesting equivalent accuracy.
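
As a back-of-the-envelope sanity check (my own arithmetic, not anything from ProPlus), Amdahl's law relates those figures to how much of a simulation must actually run in parallel: the speed-up on n cores is S(n) = 1/((1 - p) + p/n), where p is the parallelizable fraction of the work. Solving for p from the reported numbers:

```python
# Infer the parallel fraction p implied by a reported speed-up,
# by inverting Amdahl's law: S(n) = 1 / ((1 - p) + p/n).
def parallel_fraction(speedup: float, cores: int) -> float:
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / cores)

for speedup in (8.0, 12.0, 20.0):
    p = parallel_fraction(speedup, cores=24)
    print(f"{speedup:>4.0f}x on 24 cores implies ~{p:.1%} parallel work")

# Output:
#    8x on 24 cores implies ~91.3% parallel work
#   12x on 24 cores implies ~95.7% parallel work
#   20x on 24 cores implies ~99.1% parallel work
```

In other words, even the "typical" 8-12x results imply that well over 90% of the runtime is parallelized, and the 20x multiplier result leaves only about 1% of the work serial.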

They also say that they've got a better licensing model for parallelism. Traditional schemes simply consume more licenses as you use more machines; that works for occasional bursty usage, which is what such schemes were largely configured for. But if everyone is running parallel jobs all the time, you quickly exhaust the license pool.
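
To make that concrete (with entirely made-up numbers; ProPlus hasn't published any), here's what a one-license-per-core scheme does to a pool that comfortably served the same team running serial jobs:

```python
# Hypothetical illustration only; pool size and team size are invented.
pool = 50          # licenses the team owns
team = 10          # engineers each wanting one simulation in flight
cores_per_job = 24 # one license per core under the "traditional" scheme

print(f"serial jobs:   {team * 1:>3} licenses needed of {pool} -> fine")
print(f"parallel jobs: {team * cores_per_job:>3} licenses needed of {pool} "
      f"-> only {pool // cores_per_job} engineers can run at once")

# Output:
# serial jobs:    10 licenses needed of 50 -> fine
# parallel jobs: 240 licenses needed of 50 -> only 2 engineers can run at once
```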

Their solution? Well, I actually don’t know. They are keeping mum about that. So they say it’s different and better; you’ll have to be the judges of that.

You can find out more in their release.
