We’ve talked a lot in these pages over the last decade about FPGA design tools. If you’ve been following along, you know that a big question mark has hung over the commercial EDA industry when it comes to the FPGA design market. The question mark is there for good reason. The two dominant FPGA companies, Xilinx and Altera, have each made huge investments in proprietary FPGA design tools. And, while they have always made a bit of a show of partnering with commercial EDA vendors, they have competed so vigorously (particularly on price) with any third-party attempt to crack into the core of their design flows that most EDA companies have kept a cautious distance from the FPGA market.
It’s easy to understand a lack of enthusiasm for FPGA from the point of view of the big EDA companies. If FPGA vendors are going to provide tools that compete with yours for nearly free, why would you invest heavily in building top-notch FPGA tools? It could be very difficult to recoup your investment. And the company you’re relying on as a partner to build and test your tools is also your biggest competitor.
That question mark, however, has always had a big asterisk – the company formerly known as Synplicity.
Synplicity was acquired by Synopsys back in 2008, so now they’ve got five years as part of the world’s largest EDA company – and they have most definitely not given up on the FPGA market. Synopsys now markets three key solutions for FPGA design: Synplify (FPGA Synthesis, the solution that started it all), Synphony (high-level synthesis, primarily aimed at DSP and datapath design), and Identify (RTL debugger). For the past decade plus, while the EDA industry as a whole was whining loudly about being shut out of the FPGA design tool market, this group, with this lineup of products, has been steadily earning a nice living providing high-value solutions to FPGA design teams.
The landscape has changed dramatically over the years, however. In the old days, Synplify made its case by providing the best possible synthesis results – at the touch of a button. Before the original Synplify hit the market, FPGA synthesis tools (derived from ASIC synthesis tools) were complex beasts with dozens of tuning options and considerable black art required to get satisfactory results. Synplify offered outstanding results with practically no user intervention required, which quickly won over the large and inexperienced FPGA design community. Over time, Synplify won over even the most advanced FPGA designers, with quality-of-results (QoR) that consistently beat the FPGA companies’ own tools. If you wanted the best QoR, plain and simple, you bought Synplify.
As time has passed, however, the two big FPGA companies have poured significant engineering into their own synthesis offerings, and they have narrowed the gap considerably between their results and those of commercial tools like Synplify. This has made the decision to buy third-party tools more difficult for a lot of design teams. Synplify had to move faster to keep a margin in QoR that justified the price tag, but Synopsys had to do more than just offer “better QoR” to keep their solution viable. The days when FPGA vendors’ tools were weak and only suitable for the “easy” designs are over. Now, you can do a decent job of just about any design using the vendor tools alone. In order to convince us to open up our wallets and spring for commercial tools, Synopsys needed more in their corner.
We talked recently with Jeff Garrison of Synopsys about what motivates FPGA design teams to use Synopsys tools. “Of course, QoR is still King,” Garrison explained, “but we see a lot of other factors motivating designers to choose our solutions.”
Key among those factors is design turnaround time. With today’s enormous FPGA designs, a single run of synthesis and place-and-route can take hours. Because of increased complexity and performance requirements, more design iterations are typically required, and racing more laps around a longer course stretches design schedules. Synopsys has attacked this problem at several points. First, they have re-architected their underlying code to be friendlier to multi-core and multi-processor computing environments, allowing the synthesis system to take better advantage of the current generation of high-performance computing platforms. Second, they have significantly improved the incremental and partial design capabilities. When you need to re-synthesize only a small portion of your design to accommodate a change, you shouldn’t have to wait for your synthesis tool to re-do the entire design each time. Incremental capability can dramatically shorten the iteration loop.
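The schedule math behind the incremental argument is easy to sketch. Here is a minimal back-of-the-envelope model in Python; every number in it is invented for illustration and is not measured Synplify data.

```python
def total_iteration_hours(iterations, full_run_hours, incremental_run_hours,
                          incremental_fraction):
    """Total tool runtime across a project, assuming some fraction of
    iterations touch only a small partition and can run incrementally."""
    incremental_runs = round(iterations * incremental_fraction)
    full_runs = iterations - incremental_runs
    return full_runs * full_run_hours + incremental_runs * incremental_run_hours

# Hypothetical project: 40 iterations, a full synthesis + place-and-route
# pass takes 6 hours, an incremental pass of one changed block takes 1 hour,
# and 75% of late-stage changes are local enough to run incrementally.
baseline = total_iteration_hours(40, 6, 6, 0.0)        # no incremental flow
with_incremental = total_iteration_hours(40, 6, 1, 0.75)

print(baseline, with_incremental)  # 240 vs. 90 tool-hours
```

Under those (made-up) assumptions, the incremental flow cuts total tool time by more than half – which is the kind of arithmetic that makes the feature matter more than any single run’s speed.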
Along with support for incremental flows comes significantly improved functionality for teams of designers working on a design. It’s not just the computers that need to be parallelized in order to reduce design cycles. The days when most FPGA designs were completed by a single engineer are over, and design tools had to be re-imagined in order to accommodate the reality of team productivity. Synplify now includes significant hierarchical design capability – allowing teams to do bottom-up, top-down, or middle-out design that matches their work style.
Another handy trick that Synopsys pulled was “Fast Mode.” For most of your early design work, you aren’t hung up on getting the absolute best-possible QoR. You are much more interested in getting your design completed, working, and verified. QoR optimization and fine-tuning can certainly be postponed until you have a reasonably stable, functionally-correct design. For those early synthesis runs, Synopsys has added “Fast Mode” – which requires significantly less runtime at a small QoR penalty. For all those early iterations, you can save yourself a lot of unplanned coffee breaks, and you can wait to unleash the full optimization power of the tool on your later, final synthesis runs.
Speaking of iterations, we’ve all probably run into the scenario where we have a new design, we run it through the tool, and the tool flags some problem with our HDL and stops. We fix the problem, fire up the tool again, and it stops again with the next problem. We continue this loop over and over, with each attempt taking a little longer than the last as the tool gets a little farther on each successive try. To speed up this process, Synopsys has added “Continue on Error,” which allows the synthesis tool to keep running – reporting all the errors it finds – rather than shutting down on the first big problem. This can eliminate the need for a lot of those early iterations and get us to the more productive part of our design process sooner.
Better QoR and faster turnaround time aren’t the only reasons to go with third-party tools, however. If you’re designing for higher reliability, commercial solutions like Synplify and Mentor’s Precision Plus bring significant features to the table that are not available in the FPGA vendors’ own tools. If you’re worried about radiation effects like single-event upsets (SEUs – which are now more common even in systems deployed at ground level), these tools provide significant capabilities such as the automation of triple-module redundancy (TMR) design or, alternatively, “Duplicate with Compare” (which achieves many of the benefits of TMR with less overhead). You can also automatically generate safe state machines with optimized encoding to prevent your design from jumping off the rails if a radiation-induced logic fault occurs.
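The idea behind triple-module redundancy is simple enough to model in a few lines: three copies of the same logic feed a 2-of-3 majority voter, so a single-event upset in any one copy is outvoted. Here is a conceptual Python sketch of the voting behavior – not the netlist transformation the synthesis tools actually perform, and the function names are my own.

```python
def majority(a, b, c):
    """Bitwise 2-of-3 majority vote, as a TMR voter would implement it."""
    return (a & b) | (a & c) | (b & c)

def tmr_output(logic, x, upset_copy=None, upset_mask=0):
    """Run three redundant copies of `logic` and vote on the result.
    Optionally flip bits in one copy to model a single-event upset."""
    results = [logic(x) for _ in range(3)]
    if upset_copy is not None:
        results[upset_copy] ^= upset_mask  # radiation-induced bit flip
    return majority(*results)

square = lambda x: x * x

print(tmr_output(square, 7))                                   # 49
print(tmr_output(square, 7, upset_copy=2, upset_mask=0b1000))  # still 49
```

Even with one copy corrupted, the voter returns the correct value – the fault is masked rather than detected. “Duplicate with Compare” trades that masking for detection: two copies and a comparator flag a mismatch at lower area cost, but cannot vote the error away.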
For one group of designers, there is yet another compelling motivator for third-party design tools – support for FPGA-based prototyping. When the FPGA implementation isn’t your “real” design, you’re often trying to take a large SoC design and get it running across several large FPGAs on a prototyping board. This puts a whole new set of demands on your FPGA design tools. You’ve probably got a lot of IP that was not created specifically for FPGA implementations. You may be gating clocks for power conservation in a way that doesn’t work for FPGA designs. You probably would like your synthesis tool to be fairly code-compatible with a tool like Design Compiler. And, you may need to partition a complex design across several FPGAs – handling all of the complexities of mapping IOs between chips and separating the design into right-sized chunks. There is no question that Synopsys (in particular) has put considerably more attention into the needs of prototypers than the designers of the FPGA vendors’ tools.
Convincing management that we need the industrial-strength FPGA tools can be a challenge in today’s environment. Obviously the FPGA companies provide robust and adequate tools for most applications. In order to justify the comparatively large expenditure for commercial EDA tools, it pays to consider the overall scope of your project. If better QoR could allow you to design-in a cheaper FPGA, you’ll recoup the investment easily. If you save a couple months of overall engineering time for your team, you’ve likewise come out ahead. If you get your product to market sooner, or make rather than miss your project deadline, the stakes can be even higher, and the rewards greater. Finally, there is an element of risk-reduction. The behavior of tools in handling complicated designs varies widely. If you happen to have a design that one tool doesn’t like, you can spend significant resources modifying your design to cater to one particular design tool’s whim. If you have a second solution available, you can sometimes blow right through that roadblock and continue your project. A second, independent tool can eliminate an important single point of failure for a project.
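The “cheaper FPGA” justification reduces to a break-even calculation. A minimal sketch, with every dollar figure invented purely for illustration (not a real quote for any tool or device):

```python
from math import ceil

def break_even_units(tool_cost, per_unit_savings):
    """Units you must ship before per-device savings repay the tool cost."""
    return ceil(tool_cost / per_unit_savings)

# Say a commercial tool license costs $20,000/year, and better QoR lets
# the design fit a speed/size grade that is $40 cheaper per device.
print(break_even_units(20_000, 40))  # 500 units to break even
```

Past that (hypothetical) unit count, the tool is paying for itself on BOM cost alone – before counting any engineering time or schedule benefit.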
While FPGA company tools are constantly improving, it looks like third-party EDA tools will continue to offer compelling value for the foreseeable future. If your company is doing significant FPGA design, and you don’t use any commercial tools, it’s certainly an option worth investigating.
2 thoughts on “Pro-Strength Tools”
Other “natural” reasons for 3rd party tools.
Good article Kevin. A couple of other reasons folks still support 3rd-party C-to-FPGA tools like Impulse C: 1) There is a “natural constituency” among FPGA-as-processor users with low socket sales and high design needs. The semi houses are incentivized to focus on socket sales. Since tool vendors get paid on design wins, and we sell design services, we’re happy supporting low-device-volume users. 2) There remains a contingent that wishes to retain the ability to compile to multiple brands of FPGA. It is the minority, but there are designers who have figured their way around sole-sourced FPGAs. Bottom line: the semi-house tools are good for the industry. We need more users of C-to-FPGA as a technology. Impulse Accelerated estimates that the teams capable of true software/hardware co-design number in the low thousands. The market needs tens of thousands of teams to really become mainstream.
I agree with Brian’s comment; there are many firms looking to provide high-value electronic solutions, but in different areas than those that justify ASIC volumes (and the sophisticated EDA tooling that has emerged for them). Aerospace and networking (e.g., base stations) are two examples of sophisticated systems that are volume-challenged for ASIC. At Space Codesign, we complement existing FPGA design infrastructure with ESL hardware-software co-design and exploration (or re-validation if your partitioning is fixed but you are updating, say, a block or even a processor). Xilinx Zynq users, for example, will often initially respond with “We are using Vivado.” Our response is, “Great!” because we are providing a design-creation front-end for FPGA vendor tools like Xilinx Vivado (and soon others), including their C-synthesis (the former AutoESL), as well as other HLS solutions such as Forte’s, Calypto’s, and, at some point soon, Impulse Accelerated’s. New programmables that feature powerful processors (e.g., dual-core ARM Cortex-A9) provide solutions for sophisticated SoC-scale systems, and the design requirements are thus akin to SoC engineering.