
Pumping Up Precision

Mentor Upgrades Synthesis

Perhaps the most under-appreciated technologies in the FPGA design flow are logic synthesis and physical layout.  Most of us download the vendors’ tools, grab some IP, whip up a little RTL, push the button, and wait around for our timing report to say “Pass”.  After all, that’s the easiest thing to do, and following the path of least resistance is highly attractive when so many tasks on the critical path of our project are prohibitively difficult.

We have basically no choice on layout tools.  Unlike in the ASIC world, we can use only the layout tools provided by the FPGA vendor we select.  Synthesis, however, is another story.  There is a wide variety of synthesis tools available, and your choice of synthesis tool (along with your level of expertise using it) is probably one of the most important factors in the ultimate performance of your design.  Synthesis technology varies widely in terms of quality of results (QoR), and those results affect everything from your design performance to power consumption to the size and cost of the FPGA device required to implement your design. 

While FPGA vendors’ tools have improved significantly in recent years, one has to ask: “Does my FPGA vendor care if my design gets squeezed into the smallest, cheapest, lowest-speed-grade part?”  Probably not.  It might actually be to their advantage if you ended up with a larger or faster (read: more expensive) part than you might have otherwise required, as long as you never knew the difference.  The people making third-party synthesis tools, however, have extreme motivation to give you top-notch quality of results.

Any third-party supplier of FPGA synthesis tools faces a tough challenge.  FPGA vendors give away competing products essentially for free if you’re buying their silicon.  Selling third-party FPGA synthesis tools is like selling aftermarket wheels for new cars that already come with good ones.  For EDA companies marketing competing third-party tools to make any money, they have to demonstrate compelling value to win you over from the free tools already included in your design kit. 

Luckily, they typically do.

The value proposition always boils down to two basic elements – quality of results and vendor independence.  In reality, either of these arguments should be sufficient.  If your project is doing any volume at all or has any sensitivity to time-to-market, BOM cost, or performance on the FPGA component, you’re silly not to give careful attention to third-party synthesis tools. 

Consider this – if a synthesis tool could gain you 10-30% in Fmax performance (or positive timing slack), you could go down a full speed grade in the FPGA required for your design.  That, in turn, could save you around 10-30% in the cost of FPGAs for production.  The 10-30% range, it turns out, is entirely feasible with the right synthesis technology, a little experience using the tools, and a little luck in your design.
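The speed-grade arithmetic above is worth making concrete. Here is a minimal sketch, with entirely made-up unit prices and volumes (the article gives no actual pricing), of how an Fmax gain that lets you drop a speed grade translates into production savings:

```python
# Hypothetical illustration: an Fmax gain from better synthesis lets a
# design close timing in a slower, cheaper speed grade. All numbers are
# invented for illustration; real FPGA pricing varies widely.

def bom_savings(units, unit_price, cheaper_grade_price):
    """Production savings if a cheaper speed grade still meets timing."""
    return units * (unit_price - cheaper_grade_price)

# Suppose a 15% Fmax improvement lets a design that needed a fast speed
# grade close timing in the next grade down:
savings = bom_savings(units=10_000, unit_price=120.0, cheaper_grade_price=95.0)
print(f"Savings over the production run: ${savings:,.0f}")
```

Even at modest volumes, that kind of delta dwarfs the cost of a synthesis tool seat.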

Also, consider that if you could optimize area and slide into a smaller FPGA, that would take another big cut out of your BOM cost.  Other follow-on effects of better QoR can also save you money.  If, for example, you could save a week of iterating through synthesis and place-and-route by hitting your timing the first or second time, the company saves a big chunk on your salary (even if that’s not the kind of economy you generally try to promote). 

The combination of these factors can easily justify the purchase price of third-party synthesis tools many times over.  In fact, EDA companies could probably successfully sell synthesis based on QoR alone, without even bringing up the vendor-independence issue…

But they don’t.

So, what’s up with the vendor-independence argument?  Well, the biggest factor in my book is training and support of multiple tools for different projects or different parts of the same project.  It’s quite normal these days to see FPGAs from two or more different vendors on the same board.  Do you want to have two completely different tool chains for your FPGA team to do these designs?  While we agree that the layout portion must be vendor-specific, it’s much simpler to manage if the most commonly iterated portions of the design flow (simulation and synthesis) can be on the same software – therefore third-party synthesis.  QED.

The quest for true vendor independence for your design goes beyond the synthesis tool, however.  If you’re planning to realize the utopian vision of moving your design seamlessly back and forth between, say, Xilinx and Altera (I’m pretty sure virtual lightning bolts will strike this paragraph as you read, alternately blocking out the two names – just refresh your browser if you have a problem…), you need more than just a vendor-independent synthesis tool.  You’ll need to be sure that you avoid the lure of “free” (what we typically call “sticky”) IP.  This is difficult, of course.  It’s tempting to download the FPGA vendor’s free super-whizzo interface block and drop the HDL into your design, ignoring the fact that you’ve just permanently locked your design into a single FPGA vendor.  Later, when another vendor releases their new platform with 30% better performance or 50% lower cost, you can’t migrate because your design is laden with “sticky” IP, and it would be prohibitively difficult (and expensive) to convert it over.

There are two EDA companies that are seriously competing in the FPGA synthesis game – Synplicity and Mentor Graphics.  While the competition between these two is definitely not up to the Xilinx versus Altera frenzy, they do spend a good deal of effort lobbing leadership claims back and forth against each other.  In truth, they are somewhat united in their quest to compete against the very popular tools released by the FPGA vendors themselves – forcing a kind of “co-opetition” between the EDA and FPGA companies as they simultaneously cooperate on technical development and compete for design tool seats.

This week, Mentor Graphics announced some major upgrades to their popular “Precision” synthesis family, showing that they are still in the FPGA fight for real.  Precision has been around for a number of years and has thousands of installed seats, but this latest announcement represents probably the largest single upgrade to the synthesis tool suite since it was introduced about five years ago.  The announcement is for a new version of the product, dubbed “Precision Plus”, that includes new “physical synthesis” capabilities.  Why is “physical synthesis” in quotes?  Because there are about fifty different definitions for physical synthesis floating around, and one needs to always understand what technology is behind the curtain when the term is used.

In Mentor’s case, the term is about as close as you can get to full-boat ASIC-grade physical synthesis without doing the one thing you can’t do in FPGA – integrating logic synthesis with place-and-route.  The need for physical synthesis comes from the fact that creating logic that meets tight timing constraints is almost impossible working exclusively in the logical domain.  With modern silicon technologies, the majority of the delay in any path is not from the logic gates themselves, but from the interconnect (wiring) between the gates (or LUTs, or cells, or…).  Since layout has not yet been completed at logic synthesis time, the best one can normally do is throw out a statistical estimate of the wiring delay based on something like the fanout of the net.  These estimates tend to be in the plus-or-minus 30% range – not so good if you’re trying to design for tolerances of 5-10% in order to meet timing. 
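To see why those pre-layout estimates are so loose, here is a toy fanout-based delay estimator of the sort logic synthesis must fall back on before placement exists. The coefficients are invented; real tools use technology-characterized tables for each device family:

```python
# Toy pre-layout wire-delay estimator: with no placement information,
# synthesis can only guess delay from the number of loads on the net.
# The base and per-load coefficients here are invented for illustration.

def estimated_net_delay(fanout, base_ps=150.0, per_load_ps=60.0):
    """Crude statistical estimate: delay grows with fanout."""
    return base_ps + per_load_ps * fanout

# After placement, actual wirelength drives the delay, so any individual
# net can easily land +/-30% away from the estimate:
est = estimated_net_delay(fanout=4)
actual_range = (est * 0.7, est * 1.3)
print(est, actual_range)
```

A tolerance band that wide is exactly why purely logical timing closure breaks down on tight constraints.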

These inaccurate logic-domain estimates are normally used to construct the initial netlist, and then the netlist is thrown over the wall to layout, which tries to create a placement of the given netlist that will meet timing.  Layout certainly can do a lot to correct timing problems that started in logic synthesis, but wouldn’t it be nice if logic synthesis were in on the game and could help out as well?  For example, logic synthesis could sometimes replicate blocks of critical-path logic so that a copy of each output could sit near the multiple destinations that require the data.  This replication also reduces fanout, further improving performance.  Logic synthesis could also move some portions of a combinational logic cloud to the other side of a register, retiming the path so that the combinational delays on each side of the register are more balanced.  Logic synthesis could also re-structure combinational logic paths so that less logic lies in the critical path and more in the non-critical paths.  These techniques – “replication, re-timing, and re-structuring” – are among the high-value optimizations that can be performed when logic synthesis cooperates with layout, resulting in “physical synthesis.”
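The payoff from re-timing in particular is easy to illustrate with arithmetic. The clock period is set by the worst register-to-register path, so balancing combinational delay across a register raises achievable Fmax. The stage delays below are invented for illustration:

```python
# Toy sketch of why retiming helps: Fmax is limited by the slowest
# register-to-register stage, so moving logic across a register to
# balance the stages improves the achievable clock rate.
# Delays (in ns) are invented for illustration.

def fmax_mhz(stage_delays_ns):
    """Achievable Fmax is set by the slowest pipeline stage."""
    return 1000.0 / max(stage_delays_ns)

before = [8.0, 2.0]   # unbalanced logic on either side of a register
after = [5.0, 5.0]    # same total logic, retimed across the register

print(fmax_mhz(before))  # 125.0 MHz
print(fmax_mhz(after))   # 200.0 MHz
```

Same logic, same total delay – but the retimed version clocks 60% faster.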

So – how does “physical synthesis” occur when placement and routing are delivered by one company and logic synthesis by another?  Verrrry carefully!  Mentor’s approach is to have accurate models of the physical resources available in the target FPGA and to structure the logic of the design with significant knowledge of what’s expected from the place-and-route tools.  Mentor calls this process “Physically Aware Synthesis”, and they differentiate it from the closed-loop design flow used in their Precision Physical tools (although Precision Physical has also been updated with the new Physically Aware capabilities). 

Mentor claims that the new technology gains an average of 10% on Fmax with a “typical” range of 5-40% improvement over their previous tool.  They’ve gone a different route from competitor Synplicity by going for breadth of support – Mentor says that the new tool supports 19 FPGA device families out of the gate, including offerings from Actel, Altera, Lattice, and Xilinx.  Mentor has also taken the “pushbutton” approach with the new physical features, working to make it so you don’t have to be a samurai synthesis guru to get benefits from the new capabilities.

A second major area of improvement in Precision Plus is pushbutton incremental synthesis.  When you’re iterating your design and re-running synthesis and place-and-route with each iteration, long runtimes can significantly impact your productivity.  Mentor’s new tool analyzes the changes in your design and re-synthesizes only the changed portions, giving significant runtime savings and also helping to prevent the QoR oscillations that come from re-synthesizing the whole netlist just to accommodate a small incremental change.  The incremental synthesis works either in fully-automatic mode or in “partition-based” mode, where the tool takes advantage of partitions created by your design team – often to carve off sections of the design for different team members.  Mentor’s tool also works in conjunction with incremental place-and-route from both Xilinx and Altera so that both synthesis and place-and-route are performed incrementally.  This gives even more runtime savings during iterative design work. 
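The core idea behind incremental synthesis – detect what changed, re-run only that – can be sketched in a few lines. This is a conceptual illustration only (Mentor does not document its change-detection mechanism in the announcement); the partition names and hashing scheme are assumptions:

```python
# Conceptual sketch of partition-based incremental synthesis: fingerprint
# each partition's RTL source and re-synthesize only partitions whose
# fingerprint changed since the last run. Names are purely illustrative.

import hashlib

def changed_partitions(old_hashes, rtl_by_partition):
    """Return (dirty partition names, new fingerprint table)."""
    new_hashes = {name: hashlib.sha256(src.encode()).hexdigest()
                  for name, src in rtl_by_partition.items()}
    dirty = [n for n, h in new_hashes.items() if old_hashes.get(n) != h]
    return dirty, new_hashes

rtl = {"cpu_core": "module cpu ...", "uart": "module uart ..."}
_, baseline = changed_partitions({}, rtl)      # first run synthesizes all
rtl["uart"] = "module uart ...  // small edit"
dirty, _ = changed_partitions(baseline, rtl)
print(dirty)  # only 'uart' needs re-synthesis; 'cpu_core' is untouched
```

Leaving the untouched partitions alone is also what gives the 100% predictability Mentor claims for “completed” blocks.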

Mentor says that incremental synthesis can give up to a 6X runtime advantage on synthesis, and more when place-and-route are also included.  In addition, they point out that in partition-based mode, the unchanged blocks have 100% predictability, so your critical timing paths aren’t likely to move around in “completed” sections of your design as you’re working in new territory.

The final new enhancement to Precision Plus is what Mentor calls a Resource Manager.  This handy tool identifies the hard-IP blocks available in your FPGA (such as block RAM, DSP/multiplier blocks, etc.) and allows you to manually allocate and map those blocks to various partitions – to allocate resources for different team members, for example.  This provides a level of control that is much more efficient than attempting to carve off contiguous sections of FPGA real estate into a block that includes just the right amount of hard-IP resources for a particular design partition.
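A minimal model makes the Resource Manager idea concrete: budget the device’s hard-IP blocks across partitions and flag oversubscription before anyone’s block runs out of RAM or multipliers. The device counts, resource names, and partition names below are all invented for illustration:

```python
# Illustrative model of a resource-manager style budget check: hard-IP
# blocks in the device are allocated to design partitions, and a negative
# remainder flags oversubscription. All counts here are invented.

def check_allocation(device, requests):
    """Return remaining hard-IP counts after per-partition requests."""
    totals = {}
    for partition, needs in requests.items():
        for resource, count in needs.items():
            totals[resource] = totals.get(resource, 0) + count
    return {r: device[r] - used for r, used in totals.items()}

device = {"block_ram": 100, "dsp": 48}
requests = {
    "video_pipe": {"block_ram": 60, "dsp": 32},
    "control":    {"block_ram": 20, "dsp": 8},
}
print(check_allocation(device, requests))  # {'block_ram': 20, 'dsp': 8} left
```

Allocating by resource count, as here, is what makes this more efficient than carving off contiguous regions of the die that happen to contain the right mix of hard IP.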

Mentor has already completed beta testing, and the new Precision Plus is available immediately.  Pricing is, well, hard to nail down – as with any FPGA-related product.  It appears, however, that the new Precision Plus is priced close enough to the old Precision that most buyers will opt for the Plus version.  Precision Physical, however, maintains a pricing delta that will probably keep it reserved for those that want to master an additional level of control over the physical design process and reap the additional QoR rewards offered by that path.
