
Drag-and-Drop vs. HDL?

Earlier this year we looked at LabVIEW from National Instruments as a tool for developing FPGAs. This week NI is announcing a number of extensions and enhancements to LabVIEW that are intended to make it even better as an FPGA development environment.

As a reminder, LabVIEW is a wide-ranging graphical programming environment. Ignoring the vast data acquisition, instrument control and industrial automation applications (and Lego Mindstorms NXT, which is shipped with a subset of LabVIEW), LabVIEW is a good environment for creating FPGA designs: it is a drag-and-drop environment, and, unlike hardware description languages, it is inherently parallel. In 2009, NI moved to an annual programme of new versions of LabVIEW, releasing beta versions at the beginning of the year and making the final release at NI Week, so this week’s announcements are for LabVIEW 2010.

LabVIEW 2010 has some interesting features for members of both of the communities that NI sees as its main customers: the existing FPGA specialists who want to get FPGAs into production more efficiently and the application domain experts who see FPGAs as a route for easily implementing projects within their domain. The release includes extended IP import, a fast simulation facility, new options for compilation, and the beta testing of a new route to system integration.

The drag-and-drop environment now extends the existing access to Xilinx IP by providing much closer coupling with the Xilinx Core Generator tool. This, coupled with NI’s own IP for FPGAs, provides a very flexible set of tools, particularly for the application domain specialist. The more experienced designer can add VHDL modules, which can be stored as local IP for re-use.
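
As a purely illustrative sketch, the kind of small VHDL block that an experienced designer might write and store as local IP could look something like the minimal counter below. The entity, generic, and signal names here are hypothetical and not taken from NI's or Xilinx's libraries.

  -- Hypothetical example: a small VHDL block of the kind that could be
  -- wrapped and stored as local IP alongside the drag-and-drop design.
  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity pulse_counter is
    generic (
      WIDTH : natural := 8               -- counter width, chosen arbitrarily here
    );
    port (
      clk    : in  std_logic;
      reset  : in  std_logic;            -- synchronous, active high
      enable : in  std_logic;            -- count while asserted
      count  : out std_logic_vector(WIDTH-1 downto 0)
    );
  end entity pulse_counter;

  architecture rtl of pulse_counter is
    signal count_reg : unsigned(WIDTH-1 downto 0) := (others => '0');
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        if reset = '1' then
          count_reg <= (others => '0');
        elsif enable = '1' then
          count_reg <= count_reg + 1;
        end if;
      end if;
    end process;

    count <= std_logic_vector(count_reg);
  end architecture rtl;

Once stored as local IP, a block like this would presumably appear as just another element to be dropped onto the diagram and wired into the rest of the design.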

The output from the design, previously available only in NI’s own G language, can now also be exported as DLL files for use in both simulation and compilation. NI has its own cycle-accurate simulator, used primarily to confirm that the different elements of IP are communicating as expected. The DLL files can also be exported to other simulators, such as ModelSim.
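
To give a flavour of what exercising such a block in a third-party simulator might look like, here is a minimal VHDL testbench for the hypothetical counter sketched above: it drives a clock, releases reset, and lets the counter run. The names and timings are illustrative only, not part of NI's flow.

  -- Hypothetical testbench for the pulse_counter block above; names and
  -- timings are illustrative, not taken from the article or NI's tools.
  library ieee;
  use ieee.std_logic_1164.all;

  entity tb_pulse_counter is
  end entity tb_pulse_counter;

  architecture sim of tb_pulse_counter is
    signal clk    : std_logic := '0';
    signal reset  : std_logic := '1';
    signal enable : std_logic := '0';
    signal count  : std_logic_vector(7 downto 0);
  begin
    -- 100 MHz clock for the duration of the run
    clk <= not clk after 5 ns;

    dut : entity work.pulse_counter
      generic map (WIDTH => 8)
      port map (clk => clk, reset => reset, enable => enable, count => count);

    stimulus : process
    begin
      wait for 20 ns;
      reset  <= '0';
      enable <= '1';
      wait for 200 ns;       -- let the counter run
      enable <= '0';
      wait for 50 ns;
      assert false report "end of simulation" severity failure;  -- stop the run
      wait;
    end process;
  end architecture sim;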

With large FPGAs, compilation is never going to be fast. Until now, compilation meant that the desktop machine used for design was tied up for long periods. NI is now providing an option to offload the compilation onto a server, potentially with multiple processors and large quantities of memory. The server can be within the same organisation, if one already exists or if the designer’s organisation wants to invest in the capital expenditure and expertise required to build and maintain servers. Otherwise, enter the cloud! Using the Internet as a transport mechanism, it is possible to use time on machines situated elsewhere as easily as if they were in the next room. This is already happening with a range of other applications, from Google Docs to running payrolls.

NI is feeling its way forward with this at the moment, offering access to its own specialist servers over the net, in what is effectively a beta test. In time, the company thinks that there may be an opportunity for third parties to provide this service, but it is not yet certain how a commercial model will work.

Offloading the compilation may not in itself speed it up: compilation time will be determined by the processor clock and the memory available on the server. However, it is not unreasonable to think that compilation is going to be a good target for parallel processing. Parallelising the compilation and place-and-route process to run on multiple processors (as is happening with mainstream EDA tools) should produce significantly faster turnaround. This is only speculation, but surely Xilinx must be looking at this.

What offloading the compilation can do, however, is make it possible to carry out multiple compilations in parallel. For example, it will be possible to compare the effects of optimising a design for speed, for power, or for silicon area. Another possibility will be to try different ways of implementing parts of the design. Would a DSP be a better option? Is throughput going to be increased by using four channels here? While this has been a theoretical option in the past, designing alternatives through the drag-and-drop interface is much faster than implementing them in an HDL. Adding the ability to run different compilations in parallel and compare the results makes it easy to evaluate the different options before settling on an optimal design.
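
One way to picture this, sticking with conventional HDL purely for illustration, is a top level parameterised by a generic such as a channel count: each parallel compilation run can then build a different variant, and the resulting timing and area reports can be compared side by side. The entity, the generic name, and the wiring below are hypothetical, and the per-channel block is just the counter sketched earlier standing in for real processing logic.

  -- Hypothetical sketch: a top level parameterised by channel count, so that
  -- several variants (1, 2, 4 channels...) can be compiled in parallel and
  -- their speed/area reports compared. Names and wiring are illustrative.
  library ieee;
  use ieee.std_logic_1164.all;

  entity multi_channel_top is
    generic (
      N_CHANNELS : positive := 4          -- vary this per compilation run
    );
    port (
      clk      : in  std_logic;
      reset    : in  std_logic;
      data_in  : in  std_logic_vector(N_CHANNELS*8 - 1 downto 0);
      data_out : out std_logic_vector(N_CHANNELS*8 - 1 downto 0)
    );
  end entity multi_channel_top;

  architecture rtl of multi_channel_top is
  begin
    -- One processing pipeline per channel; each instance is independent,
    -- so throughput scales with N_CHANNELS at the cost of area.
    gen_channels : for i in 0 to N_CHANNELS - 1 generate
      channel_i : entity work.pulse_counter   -- stand-in for the real per-channel block
        generic map (WIDTH => 8)
        port map (
          clk    => clk,
          reset  => reset,
          enable => data_in(i*8),            -- illustrative wiring only
          count  => data_out((i+1)*8 - 1 downto i*8)
        );
    end generate gen_channels;
  end architecture rtl;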

Staying within the NI environment, the compiled code can be loaded into FPGAs mounted on a range of boards in the RIO and related families. These are reconfigurable boards, so it is possible to return to the LabVIEW screen to create the system surrounding the FPGA, adding I/O and other peripherals. For one-offs and low-volume applications this may be enough; RIO and its variants provide a good route for prototyping, but they are not cost-effective for products that are to ship in volume.

Volume production instead requires the design of a new board, with the FPGA and all the other elements needed to turn the FPGA design into a system. Doing this, however, requires a whole new set of skills. NI has a services arm that works with customers on creating boards, based on the in-house experience of building the hardware for RIO/CompactRIO and other systems. At the moment this is still a very limited offering to a small group of customers, but NI is looking at ways in which the knowledge can be formalised and turned into a product. This is pretty long-term thinking, but it is another example of the way in which NI’s philosophy centres on the system, not the FPGA.

Board design tools are clearly aimed at making life a little easier for the heavy user and skilled engineer. But if the needs can be served by one of the stock cards that NI produces, then an application domain expert can get a product up and running pretty quickly, with only a limited amount of training.

If you are an HDL designer, it is difficult to accept that your hard-won experience and battlefield skills might be replaced by a drag-and-drop interface. But creating the FPGA is not the objective: creating the end product is what a project is all about. Writing good VHDL requires fine skills and an intellectual understanding of electronics, but so did entering schematics a few years ago. It is possible that in only a few more years writing VHDL will seem similar to being good at sudoku: impressive, but not of a great deal of practical use.
