feature article

Kind Of A Big Deal

Xilinx Rebuilds Tools - From Scratch

Let’s just start by saying that this is really a big deal.

I could come up with a lot of impressive numbers and comparisons to dazzle you with the size of the project Xilinx just publicly disclosed (although it’s been one of the worst-kept secrets in the FPGA market). In fact, Xilinx offered some sound bites to us right away – like “500 man-years of engineering effort.”

But that just doesn’t even begin to capture the scope of it.

Before we dive into the details of the new Vivado Design Suite that Xilinx just announced – which represents a ground-up re-design of their entire FPGA tool chain – let’s rewind a few years and take a look at how we got here. Without the historical perspective of the evolution of a very large software tool suite, it’s impossible to grasp the magnitude and implications of what Xilinx has just done.

Xilinx’s current production tool suite, ISE, is really a combination of dozens of tools developed and acquired over a period of more than twenty years. The core of the current tool suite – Xilinx’s place-and-route software – comes from a company called NeoCAD that they acquired back in 1995. Since then, the FPGAs we are all trying to design with that software have grown by a factor of over 200. Obviously the algorithms, data models, and infrastructure that worked back then didn’t just keep humming along nicely as the target designs grew 200 times larger and more complex. Xilinx had to evolve them over the years – a lot. There has been a large engineering team charged with maintaining and enhancing that piece of the tool suite that entire time, and probably nobody knows how much of the original code (if any) is still left.

Xilinx’s ISE synthesis software (also known as XST) came from a company called MINC that they acquired in 1998. MINC had acquired the software from a French company called IST back in 1996. Again, the logic synthesis software that barely worked on 1995 FPGAs would have no prayer of producing reasonable results on today’s designs if not for years of dedicated work by a team of engineers.

However, evolving these technologies and these code bases separately over a period of years starts to expose flaws in the underlying architectures and data models. You can change things one piece at a time, but you’re always hamstrung by history. As you improve the software, you have to maintain stability release-to-release, maintain backward compatibility, and preserve elements of the UI that customers have come to depend on. You can never do a “clean slate” data model re-work. Piles of enhancements that you’d like to add are accumulated in a never-to-be-completed “someday when we can re-do the data model” list.

In the examples of place-and-route and synthesis, the problem got worse. Several years ago, it became really important for these two technologies to share information. Synthesis cannot do a reasonable job of timing optimization without some knowledge of at least how the design will be placed. Placement also cannot proceed without a high-quality netlist from synthesis. Continuing a software architecture where these two tools have disparate data models and are only loosely connected by a series of text files passed back and forth limits the power, capacity, speed, and functionality of the most critical part of FPGA implementation.

While place-and-route and synthesis are just two examples, they are typical of what happens when you have a collection of tools from different sources trying to do one job: complete a successful FPGA design. Just to make sure your picture is right, however, build in your head a constellation of tools that includes not only synthesis and place-and-route, but HDL simulation, design planning/floorplanning (from PlanAhead, acquired from HierDesign in 2004), high-level synthesis (acquired from AutoESL in early 2011), DSP design (acquired from AccelChip in 2006) and others. The list goes on and on. There are embedded software design tools, platform building tools, high-performance computing tools, IP builders, embedded debuggers… If you drew a graph of all the tools in the Xilinx system with arrows going back and forth between all of the components that need to communicate design data with each other, you’d probably need to go camp on a quiet mountaintop for a while when you finished – just to unload your brain.

It eventually becomes untenable for a design tool team to maintain and evolve the existing code base on the existing architecture. Your code starts to be patches on top of patches on top of patches. Your interfaces get cluttered with text-based design files and side files being passed from tool to tool. You eventually have no choice but to bite the bullet and go for a complete re-design of your entire tool suite from the ground up with new data models, consistent user interfaces, connections between levels of abstraction in the design, and, eventually, all those capabilities that you have spent years wanting to add but could not – because of the legacy architecture of the system.

When that day comes, it may be the most exciting and terrifying thing in the world for a software development team.

About four years ago, Xilinx realized that they had reached that breaking point – where it would soon no longer be feasible to keep up with their silicon evolution on the existing tool base. They needed a new set of tools built on a new architecture. The project, of course, would be monumental – spanning years of software development with hundreds of engineers. The new system would need to have all of the capability of the existing tool suite and more. It would need to be extensible for the future. It would have to rapidly achieve the kind of stability that generally can come only from years of use by a diverse customer base. At the same time, the company would need to maintain, evolve, and even enhance the existing tools. No fair taking a four- or five-year break from supporting your customers while you work on the “new thing.”

By any reasonable standard, this was an impossible task.

Xilinx apparently went for it anyway.

This week, Xilinx announced Vivado – their newly re-architected, complete FPGA design tool suite. Vivado’s design clearly incorporates lessons from current state-of-the-art EDA technology. It is based on a shared, unified data model that maintains all of the various design abstractions concurrently. That means that tools are not required to pass enormous text-based design files back and forth. It also means that one can cross-probe between various representations of the design – from LUTs to HDL to waveforms and high-level language constructs to placement and routing – which is an immense help in debugging.

Xilinx built the user interface on top of a Tcl-scripting environment that allows just about everything that the tools can do to be controlled by Tcl scripts. You could even write your own GUI with it (although that would be a fairly pointless exercise, since Xilinx already provides a nice one). The Tcl scripting also provides a nice way for third-party tools to integrate with Vivado and for users to develop their own utilities and scripts to automate their particular design flows and extract critical information from their designs. The level of control and visibility the Tcl interface gives in Vivado is impressive, and it’s hard to imagine anything you’d want to undertake as a designer that would not be supported by this Tcl implementation.

Of course, a complete overhaul gives the development team the opportunity to make all those once-in-a-career performance and capacity optimizations. As a result – Vivado is dramatically faster than the current-generation ISE tools. It’s a good thing, too, because the current largest Xilinx Virtex-7 devices are apparently too much for ISE. Although Xilinx didn’t say it in so many words, they have directed all of the customers using their new V2000T device (with 2 million equivalent LUTs) to be beta testers of Vivado (which was code-named “Rodin” by the way). We don’t know what the run time of a 2 million LUT design on ISE would be, but we suspect it would not be pretty. Vivado apparently takes it in stride. Xilinx claims that Vivado is around 4x the speed of its current-generation ISE tools, and the new synthesis engine includes a “quick” mode that runs at about 15x speed – for when you want a rough idea of the area and performance your design will be able to achieve.

Behind those improvements are a completely new logic synthesis engine based on the new data model, a new timing engine that (FINALLY) uses the Synopsys SDC design constraint language (allowing Xilinx to join the rest of the world in timing constraint standardization), and – probably most interestingly – a brand-new placer with a multi-dimensional analytic placement algorithm. In the past, Xilinx’s placer (like most other placement tools written in the 1990s) used the “simulated annealing” algorithm. Simulated annealing did just what the name says. It performed some directed improvement on a placement, then semi-randomly perturbed the placement to overcome local minima, then optimized that placement – iterating on that whole process until both the user and the computer dropped dead from exhaustion. As you might guess, such an algorithm is not ideal for today’s mega-designs, as we might easily reach retirement before our FPGA was satisfactorily placed. The new Vivado algorithm is on a par with the current generation of high-end ASIC placers. It takes multiple goals into consideration – wire length, routability, etc. – and uses a deterministic algorithm to generate a placement that optimizes for all of them at once. Xilinx claims that the new placer is significantly faster and generates much better results than the legacy simulated annealing placer.
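For readers who haven’t met it, the annealing loop described above (improve, perturb to escape local minima, re-optimize, repeat) can be sketched in a few lines of Python. This is a toy illustration of the classic algorithm on a tiny swap-based placement problem, not Xilinx’s implementation; all names and parameters here are invented for the example.

```python
import math
import random

def wirelength(pos, nets):
    """Half-perimeter wirelength: sum of each net's bounding-box perimeter/2."""
    total = 0
    for net in nets:
        xs = [pos[cell][0] for cell in net]
        ys = [pos[cell][1] for cell in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal_place(cells, sites, nets, t0=5.0, cooling=0.95, iters=2000, seed=0):
    """Toy simulated-annealing placer: swap two cells, keep the swap if it
    improves wirelength, or (to escape local minima) with a probability
    that shrinks as the 'temperature' cools."""
    rng = random.Random(seed)
    shuffled = list(sites)
    rng.shuffle(shuffled)
    pos = dict(zip(cells, shuffled))       # initial random placement
    cost = wirelength(pos, nets)
    t = t0
    for _ in range(iters):
        a, b = rng.sample(cells, 2)
        pos[a], pos[b] = pos[b], pos[a]    # perturb: swap two cells
        new_cost = wirelength(pos, nets)
        delta = new_cost - cost
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cost = new_cost                # accept (possibly uphill)
        else:
            pos[a], pos[b] = pos[b], pos[a]  # reject: undo the swap
        t *= cooling                       # cool the schedule
    return pos, cost
```

On a 2x2 grid with two two-cell nets, the loop reliably settles into the obvious optimum (each connected pair on adjacent sites); the point of the sketch is the shape of the iteration, and why runtimes explode as the design, and therefore each cost evaluation and the number of required iterations, grows.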

Better placement is one of those things with benefits that reverberate up and down the tool chain. Better placement leads to better (and faster) routing. Better, shorter routes lead to easier timing optimization. And when all of those steps run faster, we can iterate more, which results in even better designs. It’s a feedback loop where all the edges are good. The net result is better designs in less time. Xilinx claims that the Vivado architecture is designed to be scalable to at least 100-million-gate (ASIC-equivalent) designs. That should get us through the next couple of years at least.

Vivado seems to be well loaded for the future. As our designs grow more complex, we’ll be writing only a tiny fraction of them as new, original hardware description language (HDL) code. Unless we want to spend the rest of our careers developing the HDL for our next 2-million-LUT FPGA, most of our design will need to come from other sources such as IP – both from third parties and from our own design re-use efforts. Xilinx has built the entire design flow of Vivado around IP-based design and design re-use. This IP-centric process is a dramatic departure from the base assumption of the older-generation tools (which was that we were creating a new design each time from scratch).

Xilinx added a plethora of features for the creation, packaging, distribution, and re-use of IP blocks. In a noticeable break with past behaviors, Xilinx went full-on with standards in Vivado. The IP-centric architecture includes AMBA AXI4 as the primary interconnect standard for reusable IP blocks and IP-XACT as the standard for IP packaging and metadata. There are tools to help IP providers package and distribute their wares – including standards to help with validation and IP protection. A new “IP Packager” can basically turn any design into a reusable IP core. The provider can choose the level of abstraction for the IP – from HDL all the way down to pre-placed-and-routed blocks.

On the IP consumer side, the new “IP Integrator” makes it easy to drag-and-drop IP blocks – which are connected at the interface level rather than the net level. This makes design a lot simpler and less error-prone, particularly when connecting up IP blocks with complex interface bundles consisting of multiple busses and control lines. The tool understands the type of each connection and won’t allow IP to be misconnected – leading to a sort of correct-by-construction design rule enforcement.
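The idea behind interface-level connection, and the correct-by-construction checking it enables, can be illustrated with a small Python sketch. This is not Vivado’s actual API; the classes, port names, and protocol strings below are hypothetical, and the point is simply that when the tool knows a port is a typed bundle rather than a pile of nets, a mismatched hookup can be rejected outright:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interface:
    """A port bundle: a named protocol plus a role in that protocol."""
    protocol: str   # e.g. "AXI4-Lite", "AXI4-Stream" (illustrative strings)
    role: str       # "master" or "slave"

class Block:
    """An IP block with named interface-level ports."""
    def __init__(self, name, ports):
        self.name = name
        self.ports = ports  # dict: port name -> Interface

def connect(src_block, src_port, dst_block, dst_port):
    """Connect two bundles only if protocols match and roles complement,
    instead of wiring individual nets and hoping for the best."""
    a = src_block.ports[src_port]
    b = dst_block.ports[dst_port]
    if a.protocol != b.protocol:
        raise TypeError(f"protocol mismatch: {a.protocol} vs {b.protocol}")
    if {a.role, b.role} != {"master", "slave"}:
        raise TypeError("connection needs one master and one slave")
    return (src_block.name, src_port, dst_block.name, dst_port)

# A matching master/slave pair connects; a protocol mismatch raises.
cpu = Block("cpu", {"M_AXI": Interface("AXI4-Lite", "master")})
gpio = Block("gpio", {"S_AXI": Interface("AXI4-Lite", "slave")})
link = connect(cpu, "M_AXI", gpio, "S_AXI")
```

One `connect` call here stands in for what would otherwise be dozens of individual net hookups for address, data, and handshake signals, which is exactly where hand-wiring errors creep in.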

Finally, Vivado features an extensible IP catalog that helps organize and archive IP libraries. This appears to be on the way to something like an app store – only for FPGA IP. Having the catalog concept integrated into all of the tools from the ground up should keep the use and reuse of IP throughout the design cycle consistent and straightforward – something that the legacy tools never quite managed to accomplish.

In addition to IP reuse, of course, our future gigantic FPGA designs will require another critical element: More help. There won’t be many solo designers cranking out multi-million LUT designs. However, the old design tool suites were originally architected primarily with a single-designer workflow in mind. Vivado was conceived with team design at the forefront, with a robust set of features that help to enable multiple engineers to work on a design together. Incremental and modular design methodologies allow design tasks to be partitioned among multiple engineers, and the tools can handle the task of partially or incrementally implementing the design – all without imposing a heavy-handed strict methodology that might not fit with many companies’ established procedures. 

Again looking to the future – AutoESL high-level synthesis (HLS) technology has been improved and seamlessly integrated into the suite. As DSP and datapath-intensive designs become more popular and more demanding, high-level synthesis combined with the flexible parallel implementation capabilities of FPGAs will likely prove an unbeatable combination for producing ultra-fast, low-power datapaths for applications like embedded vision, radar, and other extreme sports of computation. High-level synthesis is a remarkable technology, and Xilinx promises to make it accessible to FPGA designers at a tiny fraction of the cost normally associated with high-level synthesis for ASIC design. The productivity and quality-of-results benefits of HLS can be astounding, and they can dovetail well with the reasons design teams choose FPGAs in the first place.

Of course, all of this sounds much too good to be true – and at the moment, it is. Xilinx is currently announcing availability of Vivado to early access customers only, with a projected public release this summer. Xilinx will be maintaining ISE through the 7-series, and Vivado from the 7-series on, so you don’t need to jump on Vivado right away. Your trusty old ISE will still be here for you for a while.

There is really no questioning the wisdom of Xilinx’s decision to take this daredevil leap. For the company to survive long-term and to be competitive with the kind of silicon that they’ll be able to deliver with the advanced processes at 28nm and beyond, they absolutely had to bite the bullet and do this kind of total remake of their tool suite. Maintaining, extending, and enhancing the old tools over a longer span would become something between impractical and impossible. This was absolutely necessary.

However, this kind of change comes at a steep price. I haven’t used the new tools yet, but there is absolutely no way that they will come out of the box as stable, robust, and reliable as the old tools. There is simply no known way to validate and stabilize this much software that fast. You may think that validating your FPGA or ASIC design before release is tricky business, but that’s just peanuts compared with the massive task of validating a monumental software system like Vivado. It is very likely that many customers’ first experiences with Vivado will not be pleasant. There will undoubtedly be the famed hype cycle’s “trough of disillusionment,” where users are frustrated with some aspect of Vivado that is absolutely critical to their design success at that moment. Xilinx’s success or failure will most likely depend on their ability and willingness to support customers through those trying times. 

The long-term results should be worth the trouble, however. Change-averse luddite design teams will eventually fall silent, and, if Xilinx has done their homework right the past five years and continues to keep themselves focused on the goal of supporting their customers for the next few (very rocky) ones, Vivado should turn into a competitive advantage and critical enabler for the company. It will be interesting to watch.

16 thoughts on “Kind Of A Big Deal”

  1. Xilinx just officially announced a monumental ground-up re-design of their entire tool suite. The new tools, “Vivado,” will eventually replace their current ISE tool suite.

    This was an EDA project of epic proportions… and will be for quite some time to come.

    What do you think?

  2. These are indeed massive projects, with all the stability and QoR issues involved. But they have to happen every once in a while, warts & all.

    Ah… I remember the days when Xilinx moved from Xact to ISE in 1996 or so. I was so happy that I didn’t actually need ISE at the time and could go on finishing my design with the old tools.

    I also remember when Altera introduced Quartus for their APEX family. It was horrible.

    During my tenure at Altera, right after the release of Quartus, the only way to get people using it initially was by challenging the customers to feed Quartus designs that would make it crash, just for fun, or for a corporate goodie. For those designers that _had_ to use Quartus for a real project, the only thing we could do was to try and find workarounds on a continuous basis and to release every quarter.

    Projects this size cannot be fed through the usual initial alpha and beta releases with the expectation that everything will work right after the public release. There are just too many buttons and sliders that can be moved around.

    If Xilinx keeps an open mind, can motivate designers to try the tool out for real for at least one day (and collect feedback), keeps up a quick release schedule, and isn’t discouraged by the knowledge that they will be publicly fried by Vivado users for the next 18 months or so, then I think there’s something beautiful in the making.

  3. Thanks for tweaking my PTSD from the massive FPGA tool rewrite I experienced round about the turn of the century. I was managing the team largely tasked with testing the software during evenings and weekends. And supporting the customers after it was released. (Ben clearly remembers this too…)

    Excuse me while I go into a corner and shiver…


