
Time for a Change

Mentor Modernizes the ECO

Everyone knows the ECO. It is a classic case of an acronym acting as a euphemism. Reducing a problematic situation to an established process represented by a simple trio of letters diverts attention from the underlying blunder. “Have Susan process Charlie’s ECO before we bring up that final prototype” sounds much, much more palatable than “Thanks to Charlie’s monumental screwup, Susan will have to attach a big red jumper wire that will now gleam like a beacon of stupidity from the back of the board on every one of our first million units.”

As long as we have humans designing electronics, however, engineering change orders (ECOs) will be a reality of electrical engineering life. Few among us can say that we’ve participated in a project that has nailed everything right the first time. There is almost always a jumper wire, a software patch, or a new bitstream for that “glue logic” FPGA that we’re now ever so happy we had the foresight to include on our board. FPGAs have often played the role of the modern-day jumper wire. If the bits came out in the wrong order, if the pinout was messed up on the ASIC, or if there was a timing problem on that input data stream that needed to be fixed, a strategically placed FPGA could save the day. A few tweaks to the bitstream at the last minute, and a host of horrors could be hidden within the tiny walls of the FPGA’s BGA package.

Today, however, engineering changes are a-changing. FPGAs have transcended their usual role as “get out of jail free” cards glued in between incompatible components on our board. With ultra-high-capacity, ultra-high-capability FPGAs becoming more popular, the FPGA is becoming the center of our system rather than a sidecar. This means that we have a greater degree of flexibility than ever in “tuning and tweaking” (that’s code for “quietly and discreetly fixing our bone-headed boo-boos”) our system designs at the last minute.

This flexibility brings with it a next-generation difficulty. With automated (that’s code for “completely unpredictable”) tools in critical, often iterative parts of the FPGA design flow, such as synthesis and place-and-route, how do we avoid having small engineering changes cause big ripples in our design, sending us back to iteration-hell to re-achieve timing convergence? After spending days to weeks juggling constraints and compilation options to get our design to behave within the boundaries of positive slack, the last thing we want to hear is that the entire design must now be re-synthesized, re-placed, and re-routed from scratch because Charlie (there he goes again!) forgot to invert a signal in one line of his tiny, non-critical (you wouldn’t want that guy working on anything important, now, would you?) bit of HDL code.
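For the record, this class of bug really is about that small. A hypothetical example (the signal names here are invented for illustration, not taken from any real design):

-- What Charlie wrote:
DONE <= READY;
-- What Charlie meant:
DONE <= not READY;

One VHDL operator, and without an incremental flow, the entire device gets re-synthesized, re-placed, and re-routed on its account.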

Getting around that problem requires more than just a subtle change to our typical design flow, however. Remember that worn-out flowchart that you’ve seen at the beginning of 500,000 EDA-vendor presentations? You know, the one that’s just about to disappear from the PowerPoint screen when you sit down with your coffee and croissant just in time for the “meat” of the meeting where they tell you that their new-and-improved, super-expensive, state-of-the-art, bug-laden software tool is absolutely the last possible chance you’ll ever have to defend yourself against Moore’s Law? THAT flowchart slide – the one with a goofy PowerPoint icon representing “you” at the top and a sequence of boxes and arrows leading down the screen through “HDL design,” “Simulation,” “Synthesis,” “Place-and-Route,” all the way to “Big Promotion for little Mr. Icon Man?”

Well, if you’ll notice, there is no information stored in that flowchart. Each step takes some combination of inputs and produces an output. No tool has a little information cache off to the side where it remembers what it did last time so it can avoid algorithmically re-inventing the wheel, or worse yet, replacing it with a completely different wheel when given subtly different input. Charlie’s one little line of HDL may cause our entire design to be thrown up in the air and rebuilt from scratch, ditching all of our hard-earned timing tuning from previous runs.

Mentor Graphics thought it would be nice if we didn’t have to fight that battle, so they added a robust ECO capability to the latest version of their FPGA design tools. With team design of FPGAs becoming more commonplace, timing convergence becoming more difficult, and runtimes growing ever longer in the face of larger designs, this capability is likely to be welcomed by high-end FPGA design teams.

Mentor calls this new capability a “Placement Re-use Flow,” and it is designed to preserve (as much as practical) the placement from a previous run when a small change is made to the original HDL design. How small a change? Mentor says the flow works best when at least 85% of the design remains the same. If your changes alter more than 15% of the original placement, they advise that you start afresh.

The key to Mentor’s re-use flow is in the naming of instances. The tools need to be able to identify and recognize an object in your design from a previous run in order to preserve the placement information for that object. Some of the instances in your design were named by you. Those are the easy ones. Other objects (the majority of them) were automatically created and named by the synthesis tool. It is critical to the success of the ECO flow that the same object get the same name on a subsequent run, that no other object accidentally gets that name, and that small changes in design topology do not ripple through the design, changing the name of everything.
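To make the distinction concrete, here is a minimal VHDL sketch (the entity, signal, and instance names, including the sync_ff component, are all invented for illustration). The instance we labeled ourselves is trivial to match between runs; the gate inferred from the concurrent assignment is not, because the synthesis tool has to fabricate a name for it:

library ieee;
use ieee.std_logic_1164.all;

entity ready_check is
  port (
    clk, a, b : in  std_logic;
    match_q   : out std_logic
  );
end entity ready_check;

architecture rtl of ready_check is
  signal match : std_logic;
begin
  -- Named by us: the label "U_SYNC" is stable across re-synthesis
  -- by construction.
  U_SYNC : entity work.sync_ff
    port map (clk => clk, d => match, q => match_q);

  -- Named by the tool: the XNOR gate inferred here gets an
  -- auto-generated name, which must come out identical on the next
  -- run for its placement to be re-used.
  match <= a xnor b;
end architecture rtl;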

Mentor addressed this challenge in the instance-naming algorithms of its Precision Synthesis tool, using a scheme where names are auto-generated based on circuit topology. Each instance is named based on the logic functions of the other instances to which it is connected. If the local topology does not change, the instance name will be the same for subsequent runs, and the tools will be able to identify and preserve the original placement.
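Here is a sketch of what that means in practice. The comments below describe purely hypothetical auto-generated names; Precision’s actual naming format is its own business, but the principle is that an unchanged neighborhood yields an unchanged name:

-- Unchanged from the previous run: the AND gate inferred here still
-- sees the same neighbors (READY, ENABLE, and the GO net), so
-- topology-based naming should hand it the same auto-generated name,
-- and its old placement can be preserved.
GO <= READY and ENABLE;

-- Charlie's one-line ECO: a brand-new inverter with a brand-new name.
-- It gets placed from scratch while everything around it stays put.
GO_N <= not GO;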

To use the placement re-use mechanism, we start in Mentor’s PreciseView design viewer, where we select our top-level block and save it as a macro. This creates a file that contains our current (original) netlist along with the placement and floorplanning information associated with each object. We then modify our design with the engineering change:

-- Kluge to invert Charlie’s control signal
U7: INV port map (I => READY, O => READY_NOT);

With that, we re-run logic synthesis, which creates a new gate-level netlist. Next comes the tricky part – we fire up PreciseView again, this time with the new netlist, and tell it we want to “apply” the macro that we previously created from our original design (excluding the netlist information in that macro, which represents the old version’s connectivity). We then run “ECO re-placement” and voilà! Our existing placements are intelligently re-applied to the design, possibly “nudging” previous locations in order to get better timing results. The design can now be re-timed using physical synthesis or sent on to final place-and-route, where the new and changed bits are merged in.

In general, our experience with these types of flows has heretofore yielded sub-optimal results because of the inflexibility of the original placement in the face of the changed design. Mentor’s approach of building timing-aware optimization into the ECO flow and allowing previous placements to be “nudged” may overcome those typical limitations. The more flexibility that can be built in for timing correction while maintaining the integrity of previous, hard-earned results, the better we can expect the ECO approach to work.

As FPGA designs grow more complex and the size of typical design teams increases, the number and frequency of design changes should be on the rise. This trend will make incremental capabilities like Mentor’s placement re-use/ECO flow more important throughout the FPGA design process. When the FPGA goes from being a small piece of your design to a programmable system-on-chip, the ability to manage and control change without starting over will be key. Particularly with Charlie on your team.
