
20-nm Test Enhancements

ITC is usually the time when the EDA companies announce their coolest test-related advances. While Mentor announced its IJTAG support, Synopsys focused its agenda largely on the issues surrounding the 20-nm node. Each node has its particular failure modes, and tests need to be added or refocused to catch those failures.

Two of the advances they announced involved memory and multicore; we’ll take them in order.

They first announced a change to their STAR Memory System, both adding and removing hierarchy. The architecture of their memory test has been made hierarchical, with an SMS Server at the top connected to one or more chains of SMS Processors. Each processor handles several individual memory blocks. Cache and other high-speed memory associated with higher-end cores can also be mapped to a test bus managed by an SMS Processor.
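If you like seeing structure as code, here's a toy model of that topology: one server, chains of processors, several memories per processor. The class names echo the article's terms; everything else is my own sketch, not Synopsys's actual implementation.

```python
# Hypothetical sketch of the hierarchical SMS topology described above:
# one SMS Server at the top, chains of SMS Processors below it, each
# processor managing several memory blocks. Illustrative only.

from dataclasses import dataclass, field

@dataclass
class MemoryBlock:
    name: str                     # e.g. "sram0" or "l1_cache"

@dataclass
class SMSProcessor:
    blocks: list = field(default_factory=list)

    def run_bist(self):
        # Placeholder: pretend every block passes its memory BIST.
        return {b.name: True for b in self.blocks}

@dataclass
class SMSServer:
    chains: list = field(default_factory=list)

    def test_all(self):
        results = {}
        for chain in self.chains:   # each chain of processors hangs off the server
            for proc in chain:      # processors within a chain are daisy-chained
                results.update(proc.run_bist())
        return results

server = SMSServer(chains=[
    [SMSProcessor([MemoryBlock("sram0"), MemoryBlock("sram1")]),
     SMSProcessor([MemoryBlock("l1_cache")])],
])
print(server.test_all())
```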

Where hierarchy was taken away was in the wrapping of the memory blocks. Regardless of the type of memory, there's a wrapper to interface it to the SMS Processor. But a true wrapper adds a level of hierarchy, and that can wreak havoc with constraints and such. So what they've done is keep the wrapper at the same hierarchical level as the memory, which makes it more of a shim than a wrapper.
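To see why the extra level matters, consider that timing constraints and exceptions are usually written against hierarchical instance paths. A true wrapper pushes the memory one level down and breaks those paths; a same-level shim leaves them alone. A made-up illustration (all paths hypothetical):

```python
# Hypothetical illustration of why an extra wrapper level disturbs
# constraints: they are written against hierarchical instance paths,
# so wrapping a memory changes every path beneath the wrapper.

# True wrapper: the memory moves one level down; any constraint that
# referenced "top/u_mem/..." no longer matches anything.
wrapped = ["top/u_mem_wrapper/u_mem/CLK", "top/u_mem_wrapper/test_mux/SEL"]

# Same-level shim: the memory's path is untouched; only new sibling
# instances appear next to it.
shimmed = ["top/u_mem/CLK", "top/u_mem_shim/test_mux/SEL"]

constraint_path = "top/u_mem/CLK"   # e.g. a clock-definition target
print(constraint_path in wrapped)   # False -- constraint silently stops applying
print(constraint_path in shimmed)   # True  -- constraint still matches
```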

On the multicore side of things, they have shared pins to allow concurrent testing of multiple cores. Each core has its own internal test compression, and if all of the cores are identical, then ATPG can create a single set of patterns that tests all cores concurrently. If the cores aren't identical (but similar), then ATPG handles one of the cores first and then checks the other cores to see what was fortuitously covered by the vectors already created; it can then create supplementary vectors to patch any remaining coverage holes. Those extra vectors have no impact on the cores already fully covered.
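In pseudocode terms, the flow amounts to: full ATPG on one core, fault-grade those patterns against the others, then top up only the holes. Here's a deliberately simplified sketch, with faults as strings and ATPG faked, just to show the control flow:

```python
# Simplified sketch of the top-up ATPG flow described above. Faults are
# just strings and pattern generation is faked; the point is the control
# flow: generate for one core, reuse on the rest, patch only the holes.

def atpg(faults):
    """Pretend ATPG: one pattern per fault (hypothetical stand-in)."""
    return {f"pat_for_{f}" for f in faults}

def covered_by(patterns, faults):
    """Pretend fault simulation: which faults do existing patterns catch?"""
    return {f for f in faults if f"pat_for_{f}" in patterns}

core_faults = {
    "core0": {"sa0_a", "sa1_b", "sa0_c"},
    "core1": {"sa0_a", "sa1_b", "sa1_d"},   # similar, but not identical
}

patterns = atpg(core_faults["core0"])              # full ATPG on one core
for name, faults in core_faults.items():
    holes = faults - covered_by(patterns, faults)  # fortuitous coverage first
    patterns |= atpg(holes)                        # top up only the holes

print(len(patterns), "patterns total")             # 4, not 3 + 3
```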

Of course, this raises a question: if you're testing all of these in parallel and one fails, how will you know which one? There is more than one output, and by looking at the outputs along with the patterns, they can positively identify which core had the issue.
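A toy version of that diagnosis: with per-core output streams, comparing each observed response against the expected one pinpoints the culprit. (Hypothetical data; real compressed-response diagnosis is considerably more involved.)

```python
# Toy diagnosis with per-core outputs: compare each observed response
# stream to the expected one. All data here is made up.

expected = {"core0": "1011", "core1": "1011", "core2": "1011"}
observed = {"core0": "1011", "core1": "1001", "core2": "1011"}

failing = [core for core in expected if observed[core] != expected[core]]
print("Failing core(s):", failing)   # ['core1']
```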

This sharing of the test pins (note that it's not muxing the pins; it's literally sharing them) reduces both the test time and the number of pins required.
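The savings are easy to put rough numbers on. With made-up figures (four identical cores, each needing 8 scan pins and 10 ms if tested alone): dedicated pins per core would cost 32 pins; muxing the cores onto 8 pins would quadruple the test time; sharing the pins gets both down at once.

```python
# Back-of-the-envelope savings from shared-pin concurrent test.
# All numbers here are made up for illustration.

cores = 4
pins_per_core = 8        # scan pins one core would need on its own
time_per_core_ms = 10.0  # pattern-application time for one core alone

dedicated_pins = cores * pins_per_core    # 32 pins, concurrent but pin-hungry
muxed_time_ms = cores * time_per_core_ms  # 8 pins, but 40 ms sequentially

shared_pins = pins_per_core               # one pin set broadcast to all cores
shared_time_ms = time_per_core_ms         # ~10 ms, plus any top-up vectors

print(f"dedicated: {dedicated_pins} pins / {time_per_core_ms} ms")
print(f"muxed:     {pins_per_core} pins / {muxed_time_ms} ms")
print(f"shared:    {shared_pins} pins / {shared_time_ms} ms")
```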

These are some of the highlights of what they announced; you can find more in their release.

