
Drag-and-Drop vs. HDL?

Earlier this year we looked at LabVIEW from National Instruments as a tool for developing FPGAs. This week NI is announcing a number of extensions and enhancements to LabVIEW that are intended to make it even better as an FPGA development environment.

As a reminder, LabVIEW is a wide-ranging graphical programming environment. Setting aside the vast data acquisition, instrument control and industrial automation applications (and Lego Mindstorms NXT, which ships with a subset of LabVIEW), LabVIEW is a good environment for creating FPGA designs: it is a drag-and-drop environment, and, like a hardware description language but unlike sequential software languages, its dataflow model is inherently parallel. In 2009, NI moved to an annual programme of new versions of LabVIEW, releasing beta versions at the beginning of the year and making the final release at NI Week, so this week’s announcements are for LabVIEW 2010.

LabVIEW 2010 has some interesting features for members of both of the communities that NI sees as its main customers: the existing FPGA specialists who want to get FPGAs into production more efficiently, and the application domain experts who see FPGAs as a route for easily implementing projects within their domain. The release includes extended IP import, a fast simulation facility, new options for compilation, and the beta testing of a new route to system integration.

The drag-and-drop environment now extends the existing access to Xilinx IP by providing much closer coupling with the Xilinx Core Generator tool. Together with NI’s own IP for FPGAs, this provides a very flexible set of building blocks, particularly for the application domain specialist. The more experienced designer can also add VHDL modules, which can be stored as local IP for re-use.
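For a sense of what such a module might look like, here is a minimal sketch of a self-contained VHDL block that a designer could keep as local IP. The entity name, its moving-average function, and the WIDTH generic are illustrative assumptions, not anything defined by LabVIEW.

```vhdl
-- Illustrative only: a small, self-contained VHDL module of the kind
-- that could be imported into the drag-and-drop environment and kept
-- as local IP. The entity name and generic are hypothetical.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity moving_average is
  generic (
    WIDTH : natural := 16   -- sample width, chosen for illustration
  );
  port (
    clk     : in  std_logic;
    reset   : in  std_logic;
    sample  : in  std_logic_vector(WIDTH-1 downto 0);
    average : out std_logic_vector(WIDTH-1 downto 0)
  );
end entity moving_average;

architecture rtl of moving_average is
  type tap_array is array (0 to 3) of unsigned(WIDTH-1 downto 0);
  signal taps : tap_array := (others => (others => '0'));
begin
  -- Registers incoming samples and outputs the average of the four most
  -- recently registered samples (sum divided by four, a right shift by two).
  process (clk)
    variable sum : unsigned(WIDTH+1 downto 0);
  begin
    if rising_edge(clk) then
      if reset = '1' then
        taps    <= (others => (others => '0'));
        average <= (others => '0');
      else
        taps <= unsigned(sample) & taps(0 to 2);
        sum  := resize(taps(0), WIDTH+2) + resize(taps(1), WIDTH+2)
              + resize(taps(2), WIDTH+2) + resize(taps(3), WIDTH+2);
        average <= std_logic_vector(sum(WIDTH+1 downto 2));
      end if;
    end if;
  end process;
end architecture rtl;
```

Because a block like this has a single clock and simple ports, it is the sort of piece that slots naturally into a dataflow diagram.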

The output from the design, previously only in NI’s own G language, can now also be generated as DLL files to be used both for simulation and for compilation. NI has its own cycle-accurate simulator, intended primarily to confirm that the different elements of IP are communicating as expected. The DLL files can also be exported to third-party simulators, such as ModelSim.
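As a rough illustration of the simulation side, the sketch below shows the kind of plain VHDL testbench a third-party simulator such as ModelSim could run against a module like the one above. It says nothing about NI’s actual DLL export mechanism; the entity, constants and timing are assumptions.

```vhdl
-- Illustrative only: a bare-bones testbench for the hypothetical
-- moving_average module, of the sort a simulator such as ModelSim would run.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity tb_moving_average is
end entity tb_moving_average;

architecture sim of tb_moving_average is
  constant WIDTH : natural := 16;
  signal clk     : std_logic := '0';
  signal reset   : std_logic := '1';
  signal sample  : std_logic_vector(WIDTH-1 downto 0) := (others => '0');
  signal average : std_logic_vector(WIDTH-1 downto 0);
begin
  -- Free-running 100 MHz clock.
  clk <= not clk after 5 ns;

  dut : entity work.moving_average
    generic map (WIDTH => WIDTH)
    port map (clk => clk, reset => reset, sample => sample, average => average);

  stimulus : process
  begin
    wait for 20 ns;
    reset <= '0';
    -- Feed a constant value and check that the average settles to it.
    sample <= std_logic_vector(to_unsigned(100, WIDTH));
    wait for 200 ns;
    assert unsigned(average) = 100
      report "average did not settle to the input value" severity error;
    wait;  -- stimulus done; stop the run from the simulator
  end process;
end architecture sim;
```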

With large FPGAs, compilation is never going to be fast. Until now, compilation required that the desktop machine used for design be devoted to compiling for long periods. NI is now providing an option to off-load the compilation onto a server, potentially with multiple processors and large quantities of memory. The server can be within the same organisation, if one already exists or if the designer’s organisation wants to invest in the capital expenditure and expertise required for building and maintaining servers. Otherwise, enter the cloud! Using the Internet as a transport mechanism, it is possible to use time on machines situated elsewhere as easily as if they were in the next room. This is already happening with a range of other applications, from Google Docs to running payrolls.

NI is feeling its way forward with this at the moment, offering access to its own specialist servers over the net, in what is effectively a beta test. In time, the company thinks that there may be an opportunity for third parties to provide this service, but it is not yet certain how a commercial model will work.

Offloading the compilation may not in itself speed up a single compilation: that will be determined by the processor speed and the memory available on the server. However, it is not unreasonable to think that compilation is going to be a good target for parallel processing. Parallelising the compilation and place-and-route process to run on multiple processors (as is happening with mainstream EDA tools) should produce significantly faster turn-around. This is only speculation, but surely Xilinx must be looking at this.

What offloading the compilation can do, however, is make it possible to carry out multiple compilations in parallel. For example, it will be possible to compare the effects of optimising a design for speed, for power, or for silicon area. Another possibility will be to try different ways of implementing parts of the design. Would a DSP block be a better option? Is throughput going to be increased by using four channels here? While this has been a theoretical option in the past, designing alternatives through the drag-and-drop interface is much faster than implementing them in an HDL. Adding the ability to run different compilations in parallel and compare the results makes it possible to evaluate different options easily before creating an optimal design.
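As one concrete, hedged example of such an alternative: in hand-written HDL, steering a multiplier into a DSP block or into fabric logic is typically a one-line synthesis attribute, and compiling both variants and comparing the timing, power and utilisation reports is exactly the kind of experiment that parallel compilation makes cheap. The attribute shown is the XST-era Xilinx one (use_dsp48); treat the exact name, and the entity around it, as illustrative.

```vhdl
-- Illustrative only: two candidate implementations of the same multiply,
-- distinguished by a synthesis attribute. Compile once with "yes" (DSP48
-- slice) and once with "no" (fabric LUTs) and compare the reports.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity mult_candidate is
  port (
    clk : in  std_logic;
    a   : in  unsigned(17 downto 0);
    b   : in  unsigned(17 downto 0);
    p   : out unsigned(35 downto 0)
  );
end entity mult_candidate;

architecture rtl of mult_candidate is
  signal product : unsigned(35 downto 0);
  -- "yes" pushes the multiply into a DSP48 slice; "no" builds it from fabric.
  attribute use_dsp48 : string;
  attribute use_dsp48 of product : signal is "yes";
begin
  process (clk)
  begin
    if rising_edge(clk) then
      product <= a * b;
      p       <= product;   -- extra register stage to help timing
    end if;
  end process;
end architecture rtl;
```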

Staying within the NI environment, the compiled code can be loaded into FPGAs mounted on a range of boards in the RIO and related families. These are reconfigurable boards, so it is possible to return to the LabVIEW screen to create the system surrounding the FPGA, adding I/O and other peripherals. For one-offs and low-volume applications this may be enough, and RIO and its variants provide a good route for prototyping, but they are not cost-effective for volume production.

Volume production requires the design of a new board, with the FPGA and all the other elements needed to turn an FPGA design into a system. Doing this, however, requires a whole new set of skills. NI has a services arm that works with customers on creating boards, based on the in-house experience of building the hardware for RIO/CompactRIO and other systems. At the moment, this is still a very limited offering to a small group of customers, but NI is looking at ways in which that knowledge can be formalised and turned into a product. This is pretty long-term thinking, but it is another example of the way in which NI’s philosophy is the system, not the FPGA.

Board design tools are clearly aimed at making life a little easier for the heavy user and skilled engineer. But if the needs can be served by one of the stock cards that NI produces, then an application domain expert can get a product up and running pretty quickly, with only a limited amount of training.

If you are an HDL designer, it is difficult to accept that your hard-won experience and battlefield skills might be replaced by a drag-and-drop interface. But creating the FPGA is not the objective: creating the end product is what a project is all about. Writing good VHDL requires fine skills and an intellectual understanding of electronics, but so did entering schematics a few years ago. It is possible that in only a few more years writing VHDL will seem similar to being good at sudoku: impressive, but not a great deal of practical use.
