
Time for a Change

Mentor Modernizes the ECO

Everyone knows the ECO. It is a classic case of an acronym acting as a euphemism. Reducing a problematic situation to an established process represented by a simple trio of letters diverts attention from the underlying blunder. “Have Susan process Charlie’s ECO before we bring up that final prototype” sounds much, much more palatable than “Thanks to Charlie’s monumental screwup, Susan will have to attach a big red jumper wire that will now gleam like a beacon of stupidity from the back of the board on every one of our first million units.”

As long as we have humans designing electronics, however, engineering change orders (ECOs) will be a reality of electrical engineering life. Few among us can say that we’ve participated in a project that has nailed everything right the first time. There is almost always a jumper wire, a software patch, or a new bitstream for that “glue logic” FPGA that we’re now ever so happy we had the foresight to include on our board. FPGAs have often played the role of the modern-day jumper wire. If the bits came out in the wrong order, if the pinout was messed up on the ASIC, or if there was a timing problem on that input data stream that needed to be fixed, a strategically placed FPGA could save the day. A few tweaks to the bitstream at the last minute, and a host of horrors could be hidden within the tiny walls of the FPGA’s BGA package.

Today, however, engineering changes are a-changing. FPGAs have transcended their usual role as “get out of jail free” cards glued in between incompatible components on our board. With ultra-high-capacity, ultra-capable FPGAs becoming more popular, the FPGA is becoming the center of our system rather than a sidecar. This means that we have a greater degree of flexibility than ever in “tuning and tweaking” (that’s code for “quietly and discreetly fixing our bone-headed boo-boos”) our system designs at the last minute.

This flexibility brings with it a next-generation difficulty. With automated (that’s code for “completely unpredictable”) tools in critical, often iterative parts of the FPGA design flow, such as synthesis and place-and-route, how do we avoid having small engineering changes cause big ripples in our design, sending us back to iteration-hell to re-converge our timing? After spending days to weeks juggling constraints and compilation options to get our design to behave within the boundaries of positive slack, the last thing we want to hear is that the entire design must now be re-synthesized, re-placed, and re-routed from scratch because Charlie (there he goes again!) forgot to invert a signal in one line of his tiny, non-critical (you wouldn’t want that guy working on anything important, now, would you?) bit of HDL code.

Getting around that problem requires more than just a subtle change to our typical design flow, however. Remember that worn-out flowchart that you’ve seen at the beginning of 500,000 EDA-vendor presentations? You know, the one that’s just about to disappear from the PowerPoint screen when you sit down with your coffee and croissant just in time for the “meat” of the meeting where they tell you that their new-and-improved, super-expensive, state-of-the-art, bug-laden software tool is absolutely the last possible chance you’ll ever have to defend yourself against Moore’s Law? THAT flowchart slide – the one with a goofy PowerPoint icon representing “you” at the top and a sequence of boxes and arrows leading down the screen through “HDL design,” “Simulation,” “Synthesis,” “Place-and-Route,” all the way to “Big Promotion for little Mr. Icon Man?”

Well, if you’ll notice, there is no information stored in that flowchart. Each step takes some combination of inputs and produces an output. No tool has a little information cache off to the side where it remembers what it did last time so it can avoid algorithmically re-inventing the wheel or, worse yet, replacing it with a completely different wheel every time it runs with subtly different input. Charlie’s one little line of HDL may cause our entire design to be thrown up in the air and rebuilt from scratch, ditching all of our hard-earned timing tuning from previous runs.

Mentor Graphics thought it would be nice if we didn’t have to fight that battle, so they added a robust ECO capability to the latest version of their FPGA design tools. With team design of FPGAs becoming more commonplace, timing convergence becoming more difficult, and runtimes growing ever longer in the face of larger designs, this capability is likely to be welcomed by high-end FPGA design teams.

Mentor calls this new capability a “Placement Re-use Flow,” and it is designed to preserve (as much as practical) the placement from a previous run when a small change is made to the original HDL design. How small a change? Mentor says the flow works best when at least 85% of the design remains the same. If your changes alter more than 15% of the original placement, they advise that you start afresh.

The key to Mentor’s re-use flow is in the naming of instances. The tools need to be able to identify and recognize an object in your design from a previous run in order to preserve the placement information for that object. Some of the instances in your design were named by you. Those are the easy ones. Other objects (the majority of them) were automatically created and named by the synthesis tool. It is critical to the success of the ECO flow that the same object get the same name on a subsequent run, that no other object accidentally gets that name, and that small changes in design topology do not ripple through the design, changing the name of everything.

Mentor addressed this challenge in the instance-naming algorithms of its Precision Synthesis tool, using a scheme where names are auto-generated based on circuit topology. Each instance is named based on the logic functions of the other instances to which it is connected. If the local topology does not change, the instance name will be the same for subsequent runs, and the tools will be able to identify and preserve the original placement.
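To make the idea concrete, here is a minimal sketch of topology-based naming. This is our own illustration in Python, not Mentor’s actual algorithm, and the data model is hypothetical: each auto-generated instance is named from a hash of its own logic function plus the functions of its immediate neighbors, so an unchanged local neighborhood produces an identical name on every run.

import hashlib

def topo_name(inst, netlist):
    # Hypothetical data model for illustration only: netlist maps
    # each instance id to (logic_function, [neighbor instance ids]).
    # This is not Mentor's actual naming scheme.
    func, neighbors = netlist[inst]
    # Sort neighbor functions so the name is independent of port order.
    neighbor_funcs = sorted(netlist[n][0] for n in neighbors)
    signature = func + "|" + ",".join(neighbor_funcs)
    return func + "_" + hashlib.sha1(signature.encode()).hexdigest()[:8]

# Two runs over the same local topology yield the same name, so
# placement data keyed by that name can safely be re-applied.
netlist = {
    "u1": ("LUT4", ["u2", "u3"]),
    "u2": ("FF", ["u1"]),
    "u3": ("INV", ["u1"]),
}
print(topo_name("u1", netlist))

The key property is determinism: because the name depends only on local structure, Charlie’s change renames only the instances in its immediate neighborhood instead of rippling through the whole netlist.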

To use the placement re-use mechanism, we start in Mentor’s PreciseView design viewer, where we select our top-level block and save it as a macro. This creates a file that contains our current (original) netlist along with the placement and floorplanning information associated with each object. We then modify our design with the engineering change:

-- Add kluge to invert Charlie's control signal
U7: INV port map (I => READY, O => READY_NOT);

With that, we re-run logic synthesis, which creates a new gate-level netlist. Next comes the tricky part: we fire up PreciseView again, this time with the new netlist, and tell it we want to “apply” the macro we previously created from our original design (excluding that macro’s netlist information, which represents the old version’s connectivity). We then run “ECO re-placement” and voilà! Our existing placements are re-applied to the design intelligently, including possibly “nudging” previous locations in order to get better timing results. The design can now be re-timed using physical synthesis or sent on to final place-and-route, where the new and changed bits are merged in.
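Conceptually, the ECO re-placement step is a name-keyed merge: placements saved in the macro are re-applied wherever an instance name survives re-synthesis, and anything new or renamed is left for the placer (and for the timing-driven “nudging”). A rough sketch of that idea, again our own Python illustration rather than Mentor’s implementation:

# Conceptual sketch of ECO re-placement: re-apply saved placements
# by instance name, leaving new or renamed logic to be placed fresh.
# Illustrative only; not Mentor's data structures or API.

def eco_replace(saved_placement, new_netlist_names):
    # saved_placement: {instance_name: (x, y)} from the saved macro.
    # new_netlist_names: set of instance names after re-synthesis.
    preserved = {}
    to_place = set()
    for name in new_netlist_names:
        if name in saved_placement:
            # Same topology-derived name: keep the old location
            # (the tool may still nudge it for timing).
            preserved[name] = saved_placement[name]
        else:
            # New or renamed instance: placed from scratch,
            # ideally near its preserved neighbors.
            to_place.add(name)
    return preserved, to_place

saved = {"LUT4_a1b2c3d4": (10, 4), "FF_99887766": (10, 5)}
new_names = {"LUT4_a1b2c3d4", "FF_99887766", "INV_0badf00d"}
kept, fresh = eco_replace(saved, new_names)
print(len(kept), "placements preserved;", fresh, "left to place")

In this toy example, only the newly added inverter needs placement; the other 85-plus percent of the design keeps its hard-earned locations, which is exactly the regime Mentor says the flow is built for.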

In general, our experience with these types of flows is that they have heretofore yielded sub-optimal results because of the inflexibility of the original placement in the face of the changed design. Mentor’s approach of building timing-aware optimization into the ECO flow and allowing previous placements to be “nudged” may overcome those typical limitations. The more flexibility that can be built in for timing correction while maintaining the integrity of previous, hard-earned results, the better we can expect the ECO approach to work.

As FPGA designs grow more complex and the size of typical design teams increases, the number and frequency of design changes should be on the rise. This trend will make incremental capabilities like Mentor’s placement re-use/ECO flow more important throughout the FPGA design process. When the FPGA goes from being a small piece of your design to a programmable system-on-chip, the ability to manage and control change without starting over will be key. Particularly with Charlie on your team.
