
Timing Closure in 2011

What Is the Key?

[Editor’s note: Atrenta’s Ron Craig and Magma’s Bob Smith got together to provide two viewpoints on what is going to be most important for timing closure in 2011. What follows are their thoughts.]

What is timing closure in the 2011 SoC design environment?
Ron Craig, Senior Marketing Manager, Atrenta Inc.

Backend timing closure is an incredibly inefficient process, fraught with uncertainty. In an ideal world, an engineer would run a single pass through the synthesis-to-GDSII flow and be done. But the reality isn’t that simple. I’ve heard stories from well-known companies of up to 20 backend iterations to close timing, with each iteration taking up to a week. Evidence like this suggests that timing closure in the backend is no longer working – this “tortoise” will indeed eventually get there, but at what cost? Is there a way to add certainty to the process?

In my own experience, there are two basic approaches to reach timing closure. In the first instance, if you find that you are pretty close to meeting timing after a given implementation step, you simply throw another tool switch, cross your fingers and repeat that step. If, however, you are way off, you look at how the design can be re-architected (pipelining, path restructuring, etc.). In both cases timing “closure” is effectively replaced by timing “experimentation,” at a stage of the design process where iterations are potentially deadly. One engineering director friend of mine talks of the “holiday tapeout plan”: an initial target of Labor Day slips to Thanksgiving, then Christmas, and closure is eventually reached by Valentine’s Day.

All of this is further complicated by the fact that timing-driven implementation steps are extremely dependent on the instructions you give them – principally via timing constraints. In the rush to start those timing closure experiments, the block owner in many cases defines a minimal set of constraints for synthesis. The backend engineer either takes them at face value or drops them entirely. What then follows is an extended game of ping-pong, in which the timing closure expert (who knows the tools but not the design) must repeatedly ask for guidance from the block owner (who knows the design but not how to drive the tools).

In an environment like this, where no single team member is able to do his or her job effectively, there has to be another way.

Many of the factors that delay or prevent timing closure can be addressed up-front, at the earliest stages of the design flow. A design owner who develops a solid floorplan during initial RTL development can significantly reduce timing closure pain later: issues such as excessively long paths and heavily loaded, high-fanout cells can be avoided through better up-front planning. And, perhaps most importantly, the design owner can identify and fix such issues before they start to have an impact on the implementation flow.
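
As a toy illustration of what such an up-front check might look like, the Python sketch below flags high-fanout nets in a netlist before implementation begins. The netlist representation and the fanout threshold are illustrative assumptions, not any particular tool’s format:

```python
# Toy sketch of an up-front high-fanout check. The dict-based netlist
# and the threshold are illustrative assumptions, not a real tool flow.

FANOUT_LIMIT = 32  # hypothetical threshold; real limits are library-dependent

def high_fanout_nets(netlist, limit=FANOUT_LIMIT):
    """Return (net, fanout) pairs whose fanout exceeds the limit."""
    return [(net, len(sinks)) for net, sinks in netlist.items()
            if len(sinks) > limit]

# Example: a reset net driving 40 loads gets reported for buffering
# or restructuring before it ever reaches the backend.
netlist = {"rst_n": [f"u_ff{i}/RN" for i in range(40)],
           "data_bus[0]": ["u_mux/A", "u_reg/D"]}
for net, fanout in high_fanout_nets(netlist):
    print(f"{net}: fanout {fanout} exceeds limit {FANOUT_LIMIT}")
```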

And it’s not just about the design itself. Timing constraints are an essential part of driving backend implementation tools in the right direction. The consequences of bad timing constraints fall on a kind of sliding scale. At the lower-impact end of the scale, constraint values (boundary constraints, for example) may not be quite what they should be. This can result in either over- or under-optimized parts of the design.

At the more dangerous end of the scale are inaccuracies such as incorrect timing exceptions or bad mode setup. These errors will not only result in a bad netlist – they will not be flagged by any of your optimization or timing tools. The consequence can be as catastrophic as a field failure. A variety of solutions are available today from a range of EDA vendors to ensure you have clean and complete timing constraints up-front, so there’s no need to wait until you are in the midst of timing analysis to catch such issues.
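
To make the danger concrete, here is a minimal Python sketch of the sanity-checking idea, assuming constraints have already been parsed into simple records (the clock names and exception records are hypothetical). The point is that an optimizer silently ignores an exception referencing a clock that was never defined; only an explicit check catches it:

```python
# Minimal sketch of constraint sanity checking on pre-parsed records.
# Real constraint-verification tools work on full SDC semantics; this
# only illustrates why a bad exception goes unnoticed without a check.

defined_clocks = {"clk_core", "clk_io"}

# Hypothetical parsed exceptions: (type, from_clock, to_clock)
exceptions = [
    ("false_path", "clk_core", "clk_io"),
    ("false_path", "clk_cpu", "clk_io"),    # 'clk_cpu' was never defined
    ("multicycle_2", "clk_core", "clk_core"),
]

for kind, src, dst in exceptions:
    for clk in (src, dst):
        if clk not in defined_clocks:
            # An optimizer would simply skip this exception -- the path
            # is treated as real (or ignored) with no error message.
            print(f"WARNING: {kind} references undefined clock '{clk}'")
```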

Achieving timing closure in 2011 will be all about certainty – certainty from the earliest stage that your design can indeed meet timing, and that your timing constraints are as good as possible.


SoC Timing Closure in 2011 – What is the Recipe for Success?
Bob Smith, VP Marketing, Magma Design Automation

SoC complexity continues to grow rapidly. Leading-edge designs are being produced at 40 nm; a handful of companies are already taping out designs at 28 nm; and the groundwork is being laid today for 20 nm. The 1-billion-gate SoC is just around the corner. This raises the question: what will it take to reach timing closure on these ever more complex designs? And how will this impact project schedules, resources, and spending?

The global marketplace is extremely competitive. Semiconductor companies competing in the world market must be able to deliver new and bigger designs in even shorter timeframes and without massive investment in new resources. Achieving timing closure is a pivotal step in completing a new design and getting it ready for hand-off to manufacturing. The recipe for timing closure success is based on three key ingredients.

The first ingredient is to start with a solid foundation. Accurate characterization of libraries for standard cells, memory, I/O, and IP (both digital and analog) is critical to achieving successful timing closure. Memory, in particular, needs to be well characterized, as it typically has a dominant effect on the performance of the design as a whole. Without an accurate modeling foundation, timing closure will be either inaccurate or elusive.

Accuracy and speed are critical for characterization, especially at smaller process geometries such as 40 nm and 28 nm. A typical standard cell library might have 5,000 cells or more and require characterization across 20 or more operating corners. In addition, modeling accuracy typically requires support for advanced composite current source (CCS) noise and power models. Legacy characterization tools have a tough time keeping up with these requirements and often suffer from very long run times. A new generation of characterization tools is required – tools that can deliver accurate models for a complete library within a day, not days or weeks.
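
A back-of-the-envelope calculation shows why. The cell and corner counts below come from the text; the arc counts, table sizes, and per-simulation cost are illustrative assumptions:

```python
# Back-of-the-envelope sketch of the characterization workload implied
# above. Per-arc table sizes and simulation cost are assumptions.

cells = 5_000          # cells in the library (from the text)
corners = 20           # operating corners (from the text)
arcs_per_cell = 8      # hypothetical timing arcs per cell
points_per_arc = 49    # hypothetical 7x7 slew/load table per arc
seconds_per_sim = 2.0  # hypothetical cost of one SPICE point

sims = cells * corners * arcs_per_cell * points_per_arc
cpu_hours = sims * seconds_per_sim / 3600
print(f"{sims:,} simulations ~= {cpu_hours:,.0f} CPU-hours")
# -> 39,200,000 simulations ~= 21,778 CPU-hours: finishing inside a
# day means either massive parallelism or a much faster engine.
```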

The second ingredient in the timing closure recipe is a combination of consistency and accuracy in timing analysis across the design flow. What is required is a vertically integrated set of consistent tools that can be applied at the appropriate points in the design cycle. A simple analogy can be taken from construction or carpentry. Rough measurements of length over long distances are taken with a tape measure. Finer measurements are made with a steel ruler or T-square. Precise measurements of very small details or parts are made with a caliper or other precision instrument. In each case, the chosen tool delivers the right level of accuracy for the task.

For SoC implementation, designers need a similar toolbox: a set of vertically integrated analysis tools that can be employed as needed throughout the flow. With this integrated set of consistent tools, designers can trade off speed against accuracy as needed. For example, at the beginning of the design flow, some timing accuracy can be sacrificed in return for faster throughput. As the design progresses toward final layout, more accurate timing is required. For tapeout, sign-off-accurate extraction and timing analysis are critical requirements. Even more accurate SPICE analysis may be required to verify critical nets or to resolve subtle timing issues that demand beyond-sign-off accuracy.

The third and final ingredient in the recipe is speed. Design teams have an insatiable need for speed in timing analysis. Competitive market pressures are forcing design teams to hold or shrink schedules even while tackling more complex designs; having to wait days for timing analysis to complete just does not fit this equation. The problem is compounded because the latest 40 nm and, especially, 28 nm processes require the analysis of more design corners and operating modes, which in turn requires even more computation. In some cases, design teams will forego a thorough timing analysis to save schedule time, analyzing only those corners and combinations of operating modes deemed most critical – a risky gamble. Another, quite costly, approach is brute force: using multiple licenses and servers in parallel to try to reduce the time needed to analyze all corners and modes.
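
The sketch below illustrates that brute-force approach: fanning the full corner-by-mode cross-product out across parallel workers. The corner and mode names and the per-scenario stub are illustrative; in a real flow each scenario is hours of runtime and a tool license:

```python
# Sketch of brute-force multi-corner, multi-mode analysis. Corner and
# mode names are illustrative; analyze() stands in for a full timing run.

from concurrent.futures import ProcessPoolExecutor
from itertools import product

corners = ["ss_0.81V_125C", "ff_0.99V_-40C", "tt_0.90V_25C"]
modes = ["functional", "scan_shift", "scan_capture", "sleep"]

def analyze(scenario):
    corner, mode = scenario
    # Stand-in for a full extraction + timing run per scenario.
    return corner, mode, "timing clean"

if __name__ == "__main__":
    # 3 corners x 4 modes = 12 runs here; at 20+ corners and several
    # modes the cross-product (and the server/license bill) climbs fast.
    scenarios = list(product(corners, modes))
    with ProcessPoolExecutor() as pool:
        for corner, mode, result in pool.map(analyze, scenarios):
            print(f"{corner} / {mode}: {result}")
```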

What is needed is a set of extraction and timing analysis tools architected for both speed and efficient handling of multiple corners and modes. In addition to fast runtimes, these tools must perform well on a single server (even when running multiple corners and modes) and maintain a modest memory footprint.

A new generation of library characterization, timing signoff and extraction tools will replace today’s legacy tools and deliver the efficiency and accuracy needed for SoC timing closure in 2011 and beyond. These tools need to be architected for the tough challenges posed by processes at 40 nm and below, including the need to account for increasing variability and to accurately model and optimize the design across many different modes and operating corners. The combination of a strong foundation built on accurate models, coupled with a vertically integrated set of fast, accurate timing analysis tools architected for the challenges of 40 nm, 28 nm and below, will deliver the fastest path to SoC timing closure.
