
20-nm Test Enhancements

ITC is usually the time when the EDA companies announce their coolest test-related advances. While Mentor announced their IJTAG support, Synopsys focused its agenda largely on the issues surrounding the 20-nm node. Each node has its particular failure modes, and tests need to be added or refocused to catch those failures.

Two of the advances they announced involved memory and multicore; we’ll take them in order.

They first announced a change to their STAR Memory System (SMS), both adding and removing hierarchy. The architecture of their memory test has been made hierarchical, with an SMS Server at the top connected to one or more chains of SMS Processors. Each processor handles several individual memory blocks. Cache and other high-speed memory associated with higher-end cores can also be mapped to a test bus managed by an SMS Processor.

Where hierarchy was taken away was in the wrapping of the memory blocks. Regardless of the type of memory, there's a wrapper to interface it to the SMS Processor. But a true wrapper adds a level of hierarchy, and that can wreak havoc with constraints and the like. So what they've done is keep the wrapper at the same hierarchical level as the memory, which makes it more of a shim than a wrapper.
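To make the structure concrete, here's a minimal Python sketch of that hierarchy, with the wrapper modeled as a shim sitting at the same level as the memory rather than as a parent. All class and method names here are mine for illustration; Synopsys hasn't published a programmatic API like this.

```python
# Illustrative model of the SMS hierarchy described above.
# All names are hypothetical, not an actual Synopsys interface.

class MemoryBlock:
    """One memory instance. Its test interface lives at the same
    hierarchical level as the memory itself: a shim, not a wrapper."""
    def __init__(self, name):
        self.name = name
        self.shim = f"{name}_test_shim"  # sibling, not a new parent level

class SMSProcessor:
    """Manages several individual memory blocks over a test bus."""
    def __init__(self, memories):
        self.memories = memories

    def run_test(self):
        # Placeholder: report a pass for each memory on this processor.
        return {m.name: "pass" for m in self.memories}

class SMSServer:
    """Top of the hierarchy, connected to chains of SMS Processors."""
    def __init__(self, chains):
        self.chains = chains  # one or more chains (lists) of processors

    def run_all(self):
        results = {}
        for chain in self.chains:
            for proc in chain:
                results.update(proc.run_test())
        return results

# One chain of two processors; a cache maps to a processor-managed bus too.
server = SMSServer([[SMSProcessor([MemoryBlock("l2_cache"),
                                   MemoryBlock("sram0")]),
                     SMSProcessor([MemoryBlock("rom0")])]])
print(server.run_all())
```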

On the multicore side of things, they have shared pins to allow concurrent testing of multiple cores. Each core has its own internal test compression, and if all of the cores are identical, then ATPG can create a single set of patterns that tests all of the cores concurrently. If the cores aren't identical (but similar), then the ATPG first handles one of the cores and then checks the other cores to see what was fortuitously covered by the vectors already created; it can then create supplementary vectors to patch any remaining coverage holes. Those extra vectors have no impact on the cores that are already fully covered.
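Here's a rough toy model of that top-up flow, with each core reduced to the set of faults it contains and each pattern to the set of faults it detects. The function names and data model are illustrative assumptions on my part, not the actual tool flow.

```python
# Toy model of the "top-up" ATPG flow: run full ATPG on one core, then
# fault-simulate the existing patterns on each similar core and generate
# supplementary vectors only for the coverage holes.

def top_up_atpg(cores, atpg, fault_sim):
    """Build one shared pattern set covering a list of similar cores."""
    patterns = list(atpg(cores[0]))          # full ATPG on the first core
    for core in cores[1:]:
        covered = fault_sim(core, patterns)  # fortuitous coverage check
        holes = core - covered               # remaining coverage holes
        patterns += atpg(holes)              # supplementary top-up vectors
    return patterns

# Toy stand-ins: one pattern per fault; a pattern detects a fault on a
# core only if the core actually contains that fault.
atpg = lambda faults: [{f} for f in sorted(faults)]
fault_sim = lambda core, pats: set().union(*pats) & core

core_a = {"f1", "f2", "f3"}   # two similar cores with overlapping faults
core_b = {"f2", "f3", "f4"}
print(top_up_atpg([core_a, core_b], atpg, fault_sim))
# -> patterns for f1-f3, plus one supplementary vector for f4
```

Note how the property the announcement calls out falls out naturally: the supplementary vectors are purely additive, so they can't disturb cores that are already fully covered.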

Of course, this raises the question: if you're testing these all in parallel and one fails, how do you know which one? There's more than one output, and by looking at the outputs along with the patterns, they can positively identify where the issue was.
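As a toy illustration of that diagnosis, assuming one observable output stream per core (my assumption for illustration; the release doesn't spell out the output arrangement), a mismatch on a particular output pins the failure to that core:

```python
# Diagnosis under concurrent test: compare each core's observed output
# against the expected response for the shared pattern set.

expected = {"core0": "1011", "core1": "1011", "core2": "1011"}
observed = {"core0": "1011", "core1": "1001", "core2": "1011"}

failing = [c for c in expected if observed[c] != expected[c]]
print(failing)  # ['core1'] -- the mismatching output identifies the core
```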

This sharing of the test pins (note that the pins aren't muxed; they're literally shared) reduces both the test time and the number of pins required.

These are some of the highlights of what they announced; you can find more in their release.
