
Drag-and-Drop vs. HDL?

Earlier this year we looked at LabVIEW from National Instruments as a tool for developing FPGAs. This week NI is announcing a number of extensions and enhancements to LabVIEW that are intended to make it even better as an FPGA development environment.

As a reminder, LabVIEW is a wide-ranging graphical programming environment. Setting aside the vast data acquisition, instrument control and industrial automation applications (and Lego Mindstorms NXT, which ships with a subset of LabVIEW), LabVIEW is a good environment for creating FPGA designs: it is a drag-and-drop environment, and its dataflow model, like the hardware it targets, is inherently parallel. In 2009, NI moved to an annual programme of new versions of LabVIEW, releasing beta versions at the beginning of the year and making the final release at NI Week, so this week’s announcements are for LabVIEW 2010.
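To make the parallelism point concrete: an FPGA design is a collection of blocks that all run at once, and that concurrency is explicit whether the design is drawn as a G diagram or written in an HDL. Here is a minimal VHDL sketch of our own (not taken from any NI example) with two counters that advance independently on every clock edge:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity two_counters is
  port (
    clk   : in  std_logic;
    rst   : in  std_logic;
    a_out : out unsigned(7 downto 0);
    b_out : out unsigned(7 downto 0)
  );
end entity;

architecture rtl of two_counters is
  signal a, b : unsigned(7 downto 0) := (others => '0');
begin
  -- The two processes below are concurrent: both counters
  -- advance on every clock edge, with no ordering between them.
  count_a : process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then a <= (others => '0');
      else              a <= a + 1;
      end if;
    end if;
  end process;

  count_b : process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then b <= (others => '0');
      else              b <= b + 2;  -- independent of count_a
      end if;
    end if;
  end process;

  a_out <= a;
  b_out <= b;
end architecture;
```

There is no ordering between the two processes; a LabVIEW diagram expresses the same thing graphically, as parallel wire paths.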

LabVIEW 2010 has some interesting features for both of the communities that NI sees as its main customers: the existing FPGA specialists who want to get FPGAs into production more efficiently, and the application domain experts who see FPGAs as a route to implementing projects within their domain. The release includes extended IP input, a fast simulation facility, new options for compilation, and the beta testing of a new route to system integration.

The drag-and-drop environment now extends the existing access to Xilinx IP by providing much closer coupling with the Xilinx CORE Generator tool. This, together with NI’s own FPGA IP, provides a very flexible set of tools, particularly for the application domain specialist. The more experienced designer can add VHDL modules, which can be stored as local IP for re-use.
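As an illustration of the kind of small module that might be brought in and kept as local IP, here is a parameterisable PWM generator in plain synthesisable VHDL; it is our own sketch, not a block from NI’s or Xilinx’s libraries:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- A small, self-contained block of the kind that might be stored
-- as local IP: a parameterisable PWM generator.
entity pwm_gen is
  generic (WIDTH : positive := 8);
  port (
    clk  : in  std_logic;
    duty : in  unsigned(WIDTH-1 downto 0);  -- duty cycle, 0 to 2**WIDTH-1
    pwm  : out std_logic
  );
end entity;

architecture rtl of pwm_gen is
  signal count : unsigned(WIDTH-1 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      count <= count + 1;                  -- free-running counter
      if count < duty then pwm <= '1';     -- high while below threshold
      else                 pwm <= '0';
      end if;
    end if;
  end process;
end architecture;
```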

The output from the design, previously only in NI’s own G language, can now also be exported as DLL files, used both for simulation and for compilation. NI has its own cycle-accurate simulator, used primarily to confirm that the different elements of IP are communicating as expected, and the DLL files can also be taken into other simulators, such as ModelSim.
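To give a flavour of what a cycle-accurate check looks like, here is a minimal self-checking testbench for the two-counter sketch above, in ordinary VHDL of the kind ModelSim runs; the cycle counts and expected values are our own illustration, not part of NI’s exported DLL interface:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity tb_two_counters is
end entity;

architecture sim of tb_two_counters is
  signal clk  : std_logic := '0';
  signal rst  : std_logic := '1';
  signal a, b : unsigned(7 downto 0);
begin
  dut : entity work.two_counters
    port map (clk => clk, rst => rst, a_out => a, b_out => b);

  clk <= not clk after 5 ns;  -- 100 MHz clock; run for ~150 ns

  check : process
  begin
    wait until rising_edge(clk);
    rst <= '0';                       -- release reset
    for i in 1 to 10 loop             -- let ten clock cycles elapse
      wait until rising_edge(clk);
    end loop;
    wait for 1 ns;                    -- let the final assignments settle
    assert a = 10 and b = 20
      report "counters out of step" severity error;
    report "cycle check passed" severity note;
    wait;                             -- stop this process
  end process;
end architecture;
```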

With large FPGAs, compilation is never going to be fast. Until now, it has meant devoting the desktop machine used for design to compiling for long periods. NI is now providing an option to off-load compilation onto a server, potentially one with multiple processors and large quantities of memory. The server can be within the same organisation, if one already exists or if the designer’s organisation wants to invest in the capital expenditure and expertise required to build and maintain servers. Otherwise, enter the cloud! Using the Internet as a transport mechanism, it is possible to use time on machines situated elsewhere as easily as if they were in the next room. This is already happening with a range of other applications, from Google Docs to running payrolls.

NI is feeling its way forward with this at the moment, offering access to its own specialist servers over the net, in what is effectively a beta test. In time, the company thinks that there may be an opportunity for third parties to provide this service, but it is not yet certain how a commercial model will work.

Offloading compilation will not, by itself, shorten compile times: those are determined by the processor speed and the memory available on the server. However, it is not unreasonable to think that compilation is a good target for parallel processing. Parallelising the compilation and place-and-route process to run across multiple processors (as is happening with mainstream EDA tools) should produce significantly faster turn-around. This is only speculation, but surely Xilinx must be looking at it.

What offloading the compilation can do, however, is make it possible to carry out multiple compilations in parallel. For example, it will be possible to compare the effects of optimising a design for speed, for power, or for silicon area. Another possibility will be to try different ways of implementing parts of the design. Would a DSP block be a better option? Is throughput going to be increased by using four channels here? While this has been a theoretical option in the past, designing alternatives through the drag-and-drop interface is much faster than implementing them in an HDL, and the ability to run the different compilations in parallel and compare the results makes it easy to evaluate the options before settling on a final design.
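VHDL’s entity/architecture split gives a textual flavour of the same trade-off: one interface with several interchangeable implementations. In this sketch (our own, purely illustrative), a multiply-by-ten is written once so that the tools are likely to infer a DSP-block multiplier, and once as shift-and-add logic in the plain fabric:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity times_ten is
  port (
    x : in  unsigned(7 downto 0);
    y : out unsigned(11 downto 0)
  );
end entity;

-- Alternative 1: let the tools infer a hardware multiplier,
-- which on most FPGAs is likely to map onto a DSP block.
architecture with_multiplier of times_ten is
begin
  y <= resize(x * 10, 12);
end architecture;

-- Alternative 2: shift-and-add in plain fabric logic: 10x = 8x + 2x.
architecture shift_add of times_ten is
  signal x12 : unsigned(11 downto 0);
begin
  x12 <= resize(x, 12);
  y   <= shift_left(x12, 3) + shift_left(x12, 1);
end architecture;
```

Compiling both alternatives in parallel and comparing the resulting speed, power, and area reports is exactly the kind of experiment that off-loaded compilation makes cheap.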

Staying within the NI environment, the compiled code can be loaded into FPGAs mounted on a range of boards in the RIO and related families. These are reconfigurable boards, so it is possible to return to the LabVIEW screen to create the system surrounding the FPGA, adding I/O and other peripherals. For one-offs and low-volume applications this may be enough; for products that are to ship in volume, RIO and its variants provide a good route for prototyping but are not cost-effective in production.

Volume production instead requires the design of a new board, with the FPGA and all the other elements needed to turn the FPGA design into a system, and doing this requires a whole new set of skills. NI has a services arm that works with customers on creating boards, based on the in-house experience of building the hardware for RIO/CompactRIO and other systems. At the moment this is still a very limited offering to a small group of customers, but NI is looking at ways in which the knowledge can be formalised and turned into a product. This is long-term thinking, but it is another example of the way in which NI’s philosophy centres on the system, not the FPGA.

The board-design services are clearly aimed at making life a little easier for the heavy user and skilled engineer. But if the need can be met by one of the stock cards that NI produces, an application domain expert can get a product up and running pretty quickly, with only a limited amount of training.

If you are an HDL designer, it is difficult to accept that your hard-won experience and battlefield skills might be replaced by a drag-and-drop interface. But creating the FPGA is not the objective: creating the end product is what a project is all about. Writing good VHDL requires fine skills and an intellectual understanding of electronics, but so did entering schematics a few years ago. It is possible that in only a few more years writing VHDL will seem similar to being good at sudoku: impressive, but not of a great deal of practical use.
